Simulation
n_loop = 10
rnd = np.random.RandomState(7)
labels = np.arange(C).repeat(100)
results = {}
for N in ns:
    num_iters = int(len(labels) / N)
    total_samples_for_bounds = float(num_iters * N * n_loop)
    for _ in range(n_loop):
        rnd.shuffle(labels)
        for batch_id in range(len(labels) // N):
            if len(set(labels[N * batch_id:N * (batch_id + 1)])) == C:
                results[N] = results.get(N, 0.) + N / total_samples_for_bounds
            else:
                results[N] = results.get(N, 0.) + 0.

xs = []
ys = []
for k, v in results.items():
    print(k, v)
    ys.append(v)
    xs.append(k)

plt.plot(ns, ps, label="Theoretical")
plt.plot(xs, ys, label="Empirical")
plt.ylabel("probability")
plt.xlabel("$K+1$")
plt.title("CIFAR-100 simulation")
plt.legend()
_____no_output_____
MIT
code/notebooks/coupon.ipynb
nzw0301/Understanding-Negative-Samples-in-Instance-Discriminative-Self-supervised-Representation-Learning
3K Rice Genome GWAS Dataset Export Usage

Data for this was exported as a single Hail MatrixTable (`.mt`) as well as individual variant (`csv.gz`), sample (`csv`), and call (`zarr`) datasets.
from pathlib import Path

import pandas as pd
import numpy as np
import hail as hl
import zarr

hl.init()

path = Path('~/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export').expanduser()
path

!du -sh {str(path)}/*
582M /home/eczech/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export/rg-3k-gwas-export.calls.zarr 336K /home/eczech/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export/rg-3k-gwas-export.cols.csv 471M /home/eczech/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export/rg-3k-gwas-export.mt 7.5M /home/eczech/data/gwas/rice-snpseek/1M_GWAS_SNP_Dataset/rg-3k-gwas-export/rg-3k-gwas-export.rows.csv.gz
Apache-2.0
notebooks/organism/rice/rg-export-usage.ipynb
tomwhite/gwas-analysis
Hail
# The entire table with row, col, and call data:
hl.read_matrix_table(str(path / 'rg-3k-gwas-export.mt')).describe()
---------------------------------------- Global fields: None ---------------------------------------- Column fields: 's': str 'acc_seq_no': int64 'acc_stock_id': int64 'acc_gs_acc': float64 'acc_gs_variety_name': str 'acc_igrc_acc_src': int64 'pt_APANTH_REPRO': float64 'pt_APSH': float64 'pt_APCO_REV_POST': float64 'pt_APCO_REV_REPRO': float64 'pt_AWCO_LREV': float64 'pt_AWCO_REV': float64 'pt_AWDIST': float64 'pt_BLANTHPR_VEG': float64 'pt_BLANTHDI_VEG': float64 'pt_BLPUB_VEG': float64 'pt_BLSCO_ANTH_VEG': float64 'pt_BLSCO_REV_VEG': float64 'pt_CCO_REV_VEG': float64 'pt_CUAN_REPRO': float64 'pt_ENDO': float64 'pt_FLA_EREPRO': float64 'pt_FLA_REPRO': float64 'pt_INANTH': float64 'pt_LIGCO_REV_VEG': float64 'pt_LIGSH': float64 'pt_LPCO_REV_POST': float64 'pt_LPPUB': float64 'pt_LSEN': float64 'pt_NOANTH': float64 'pt_PEX_REPRO': float64 'pt_PTH': float64 'pt_SCCO_REV': float64 'pt_SECOND_BR_REPRO': float64 'pt_SLCO_REV': float64 'pt_SPKF': float64 'pt_SLLT_CODE': float64 ---------------------------------------- Row fields: 'locus': locus<GRCh37> 'alleles': array<str> 'rsid': str 'cm_position': float64 ---------------------------------------- Entry fields: 'GT': call ---------------------------------------- Column key: ['s'] Row key: ['locus', 'alleles'] ----------------------------------------
Apache-2.0
notebooks/organism/rice/rg-export-usage.ipynb
tomwhite/gwas-analysis
Pandas

Sample data contains phenotypes prefixed by `pt_`. The `s` (sample ID) column in the MatrixTable matches the `s` column in this table, in the same order:
pd.read_csv(path / 'rg-3k-gwas-export.cols.csv').head()
_____no_output_____
Apache-2.0
notebooks/organism/rice/rg-export-usage.ipynb
tomwhite/gwas-analysis
Variant data shouldn't be needed for much, but it's here:
pd.read_csv(path / 'rg-3k-gwas-export.rows.csv.gz').head()
_____no_output_____
Apache-2.0
notebooks/organism/rice/rg-export-usage.ipynb
tomwhite/gwas-analysis
Zarr

Call data (dense and mean-imputed in this case) can be sliced from a zarr array:
gt = zarr.open(str(path / 'rg-3k-gwas-export.calls.zarr'), mode='r')

# Get calls for 10 variants and 5 samples
gt[5:15, 5:10]
_____no_output_____
Apache-2.0
notebooks/organism/rice/rg-export-usage.ipynb
tomwhite/gwas-analysis
Selecting Phenotypes

Pick a phenotype:
- Definitions are in https://s3-ap-southeast-1.amazonaws.com/oryzasnp-atcg-irri-org/3kRG-phenotypes/3kRG_PhenotypeData_v20170411.xlsx (the ">2007 Dictionary" sheet)
- Choose one with low sparsity
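One quick way to gauge sparsity is to rank the `pt_` columns by their fraction of missing values. A minimal sketch with a hypothetical stand-in frame (the real `rg-3k-gwas-export.cols.csv` has many more phenotype columns):

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for the exported cols.csv
df = pd.DataFrame({
    's': ['s1', 's2', 's3', 's4'],
    'pt_FLA_REPRO': [1.0, np.nan, 3.0, 4.0],
    'pt_ENDO': [np.nan, np.nan, np.nan, 2.0],
})

# Fraction of missing values per phenotype, least sparse first
sparsity = df.filter(like='pt_').isnull().mean().sort_values()
print(sparsity)
```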
df = pd.read_csv(path / 'rg-3k-gwas-export.cols.csv')
df.info()

# First 1k variants with samples having data for this phenotype
mask = df['pt_FLA_REPRO'].notnull()
gtp = gt[:1000][:, mask]
gtp.shape, gtp.dtype
_____no_output_____
Apache-2.0
notebooks/organism/rice/rg-export-usage.ipynb
tomwhite/gwas-analysis
PageRank Performance Benchmarking

Skip notebook test

This notebook benchmarks the performance of running PageRank within cuGraph against NetworkX. NetworkX contains several implementations of PageRank; this benchmark compares cuGraph against the default NetworkX implementation as well as the SciPy version.

Notebook Credits
- Original Authors: Bradley Rees
- Last Edit: 08/16/2020
- RAPIDS Versions: 0.15

Test Hardware
- GV100 32G, CUDA 10.0
- Intel(R) Core(TM) CPU i7-7800X @ 3.50GHz
- 32GB system memory

Test Data

| File Name | Num of Vertices | Num of Edges |
|:----------------------|----------------:|-------------:|
| preferentialAttachment | 100,000 | 999,970 |
| caidaRouterLevel | 192,244 | 1,218,132 |
| coAuthorsDBLP | 299,067 | 1,955,352 |
| dblp-2010 | 326,186 | 1,615,400 |
| citationCiteseer | 268,495 | 2,313,294 |
| coPapersDBLP | 540,486 | 30,491,458 |
| coPapersCiteseer | 434,102 | 32,073,440 |
| as-Skitter | 1,696,415 | 22,190,596 |

Timing

What is not timed: reading the data. What is timed: (1) creating a Graph, (2) running PageRank.

The data file is read in once for all flavors of PageRank. Each timed block will create a Graph and then execute the algorithm. The results of the algorithm are not compared. If you are interested in seeing the comparison of results, then please see PageRank in the __notebooks__ repo.

NOTICE: _You must have run the __dataPrep__ script prior to running this notebook so that the data is downloaded._ See the README file in this folder for a description of how to get the data.

Now load the required libraries
# Import needed libraries
import gc
import os  # needed below for the pip fallback
import time

import rmm
import cugraph
import cudf

# NetworkX libraries
import networkx as nx
from scipy.io import mmread

try:
    import matplotlib
except ModuleNotFoundError:
    os.system('pip install matplotlib')
import matplotlib.pyplot as plt; plt.rcdefaults()
import numpy as np
_____no_output_____
Apache-2.0
notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb
hlinsen/cugraph
Define the test data
# Test File
data = {
    'preferentialAttachment': './data/preferentialAttachment.mtx',
    'caidaRouterLevel': './data/caidaRouterLevel.mtx',
    'coAuthorsDBLP': './data/coAuthorsDBLP.mtx',
    'dblp': './data/dblp-2010.mtx',
    'citationCiteseer': './data/citationCiteseer.mtx',
    'coPapersDBLP': './data/coPapersDBLP.mtx',
    'coPapersCiteseer': './data/coPapersCiteseer.mtx',
    'as-Skitter': './data/as-Skitter.mtx'
}
_____no_output_____
Apache-2.0
notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb
hlinsen/cugraph
Define the testing functions
# Data reader - the file format is MTX, so we will use the reader from SciPy
def read_mtx_file(mm_file):
    print('Reading ' + str(mm_file) + '...')
    M = mmread(mm_file).asfptype()
    return M


# CuGraph PageRank
def cugraph_call(M, max_iter, tol, alpha):
    gdf = cudf.DataFrame()
    gdf['src'] = M.row
    gdf['dst'] = M.col

    print('\tcuGraph Solving... ')
    t1 = time.time()

    # cugraph Pagerank Call
    G = cugraph.DiGraph()
    G.from_cudf_edgelist(gdf, source='src', destination='dst', renumber=False)
    df = cugraph.pagerank(G, alpha=alpha, max_iter=max_iter, tol=tol)
    t2 = time.time() - t1
    return t2


# Basic NetworkX PageRank
def networkx_call(M, max_iter, tol, alpha):
    nnz_per_row = {r: 0 for r in range(M.get_shape()[0])}
    for nnz in range(M.getnnz()):
        nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]]
    for nnz in range(M.getnnz()):
        M.data[nnz] = 1.0 / float(nnz_per_row[M.row[nnz]])

    M = M.tocsr()
    if M is None:
        raise TypeError('Could not read the input graph')
    if M.shape[0] != M.shape[1]:
        raise TypeError('Shape is not square')

    # should be autosorted, but check just to make sure
    if not M.has_sorted_indices:
        print('sort_indices ... ')
        M.sort_indices()

    z = {k: 1.0 / M.shape[0] for k in range(M.shape[0])}

    print('\tNetworkX Solving... ')

    # start timer
    t1 = time.time()
    Gnx = nx.DiGraph(M)
    pr = nx.pagerank(Gnx, alpha, z, max_iter, tol)
    t2 = time.time() - t1
    return t2


# SciPy PageRank
def networkx_scipy_call(M, max_iter, tol, alpha):
    nnz_per_row = {r: 0 for r in range(M.get_shape()[0])}
    for nnz in range(M.getnnz()):
        nnz_per_row[M.row[nnz]] = 1 + nnz_per_row[M.row[nnz]]
    for nnz in range(M.getnnz()):
        M.data[nnz] = 1.0 / float(nnz_per_row[M.row[nnz]])

    M = M.tocsr()
    if M is None:
        raise TypeError('Could not read the input graph')
    if M.shape[0] != M.shape[1]:
        raise TypeError('Shape is not square')

    # should be autosorted, but check just to make sure
    if not M.has_sorted_indices:
        print('sort_indices ... ')
        M.sort_indices()

    z = {k: 1.0 / M.shape[0] for k in range(M.shape[0])}

    # SciPy Pagerank Call
    print('\tSciPy Solving... ')
    t1 = time.time()
    Gnx = nx.DiGraph(M)
    pr = nx.pagerank_scipy(Gnx, alpha, z, max_iter, tol)
    t2 = time.time() - t1
    return t2
_____no_output_____
Apache-2.0
notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb
hlinsen/cugraph
Run the benchmarks
# arrays to capture performance gains
time_cu = []
time_nx = []
time_sp = []
perf_nx = []
perf_sp = []
names = []

# init libraries by doing a simple task
v = './data/preferentialAttachment.mtx'
M = read_mtx_file(v)
trapids = cugraph_call(M, 100, 0.00001, 0.85)
del M

for k, v in data.items():
    gc.collect()

    # Save the file name
    names.append(k)

    # read the data
    M = read_mtx_file(v)

    # call cuGraph - this will be the baseline
    trapids = cugraph_call(M, 100, 0.00001, 0.85)
    time_cu.append(trapids)

    # Now call NetworkX
    tn = networkx_call(M, 100, 0.00001, 0.85)
    speedUp = (tn / trapids)
    perf_nx.append(speedUp)
    time_nx.append(tn)

    # Now call SciPy
    tsp = networkx_scipy_call(M, 100, 0.00001, 0.85)
    speedUp = (tsp / trapids)
    perf_sp.append(speedUp)
    time_sp.append(tsp)

    print("cuGraph (" + str(trapids) + ") Nx (" + str(tn) + ") SciPy (" + str(tsp) + ")")
    del M
_____no_output_____
Apache-2.0
notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb
hlinsen/cugraph
plot the output
%matplotlib inline

plt.figure(figsize=(10, 8))

bar_width = 0.35
index = np.arange(len(names))

_ = plt.bar(index, perf_nx, bar_width, color='g', label='vs Nx')
_ = plt.bar(index + bar_width, perf_sp, bar_width, color='b', label='vs SciPy')

plt.xlabel('Datasets')
plt.ylabel('Speedup')
plt.title('PageRank Performance Speedup')
plt.xticks(index + (bar_width / 2), names)
plt.xticks(rotation=90)

# Text on the top of each barplot
for i in range(len(perf_nx)):
    plt.text(x=(i - 0.55) + bar_width, y=perf_nx[i] + 25, s=round(perf_nx[i], 1), size=12)
for i in range(len(perf_sp)):
    plt.text(x=(i - 0.1) + bar_width, y=perf_sp[i] + 25, s=round(perf_sp[i], 1), size=12)

plt.legend()
plt.show()
_____no_output_____
Apache-2.0
notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb
hlinsen/cugraph
Dump the raw stats
# In a notebook cell only the last bare expression is displayed, so print each
print(perf_nx)
print(perf_sp)
print(time_cu)
print(time_nx)
print(time_sp)
_____no_output_____
Apache-2.0
notebooks/cugraph_benchmarks/pagerank_benchmark.ipynb
hlinsen/cugraph
My Notebook 2
import os

print("I am notebook 2")
_____no_output_____
BSD-3-Clause
nbcollection/tests/data/my_notebooks/sub_path1/notebook2.ipynb
jonathansick/nbcollection
We'll continue to make use of the fuel economy dataset in this workspace.
fuel_econ = pd.read_csv('./data/fuel_econ.csv')
fuel_econ.head()
_____no_output_____
MIT
Matplotlib/Violin_and_Box_Plot_Practice.ipynb
iamleeg/AIPND
**Task**: What is the relationship between the size of a car and the size of its engine? The cars in this dataset are categorized into one of five different vehicle classes based on size. Starting from the smallest, they are: {Minicompact Cars, Subcompact Cars, Compact Cars, Midsize Cars, and Large Cars}. The vehicle classes can be found in the 'VClass' variable, while the engine sizes are in the 'displ' column (in liters). **Hint**: Make sure that the order of vehicle classes makes sense in your plot!
# YOUR CODE HERE
car_classes = ['Minicompact Cars', 'Subcompact Cars', 'Compact Cars', 'Midsize Cars', 'Large Cars']
vclasses = pd.api.types.CategoricalDtype(ordered=True, categories=car_classes)
fuel_econ['VClass'] = fuel_econ['VClass'].astype(vclasses)

sb.violinplot(data=fuel_econ, x='VClass', y='displ')
plt.xticks(rotation=15)

# run this cell to check your work against ours
violinbox_solution_1()
I used a violin plot to depict the data in this case; you might have chosen a box plot instead. One of the interesting things about the relationship between variables is that it isn't consistent. Compact cars tend to have smaller engine sizes than the minicompact and subcompact cars, even though those two vehicle sizes are smaller. The box plot would make it easier to see that the median displacement for the two smallest vehicle classes is greater than the third quartile of the compact car class.
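A box-plot version of the same comparison might look like the sketch below. The data frame here is a synthetic stand-in for `fuel_econ` (the real notebook reads `./data/fuel_econ.csv`), so only the column names are taken from the original:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # off-screen backend so the sketch runs outside a notebook
import matplotlib.pyplot as plt
import seaborn as sb

# Synthetic stand-in for fuel_econ
rng = np.random.default_rng(0)
car_classes = ['Minicompact Cars', 'Subcompact Cars', 'Compact Cars',
               'Midsize Cars', 'Large Cars']
fuel_econ = pd.DataFrame({
    'VClass': rng.choice(car_classes, size=200),
    'displ': rng.uniform(1.0, 5.0, size=200),
})
vclasses = pd.api.types.CategoricalDtype(ordered=True, categories=car_classes)
fuel_econ['VClass'] = fuel_econ['VClass'].astype(vclasses)

# Boxes make the medians and quartiles directly comparable across classes
ax = sb.boxplot(data=fuel_econ, x='VClass', y='displ')
plt.xticks(rotation=15)
```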
MIT
Matplotlib/Violin_and_Box_Plot_Practice.ipynb
iamleeg/AIPND
Langmuir-enhanced entrainment

This notebook reproduces Fig. 15 of [Li et al., 2019](https://doi.org/10.1029/2019MS001810).
import sys

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.axes_grid1.inset_locator import inset_axes

sys.path.append("../../../gotmtool")
from gotmtool import *


def plot_hLL_dpedt(hLL, dpedt, casename_list, ax=None, xlabel_on=True):
    if ax is None:
        ax = plt.gca()
    idx_WD05 = [('WD05' in casename) for casename in casename_list]
    idx_WD08 = [('WD08' in casename) for casename in casename_list]
    idx_WD10 = [('WD10' in casename) for casename in casename_list]
    b0_str = [casename[2:4] for casename in casename_list]
    b0 = np.array([float(tmp[0])*100 if 'h' in tmp else float(tmp) for tmp in b0_str])
    b0_min = b0.min()
    b0_max = b0.max()
    ax.plot(hLL, dpedt, color='k', linewidth=1, linestyle=':', zorder=1)
    im = ax.scatter(hLL[idx_WD05], dpedt[idx_WD05], c=b0[idx_WD05], marker='d',
                    edgecolors='k', linewidth=1, zorder=2, label='$U_{10}=5$ m s$^{-1}$',
                    cmap='bone_r', vmin=b0_min, vmax=b0_max)
    ax.scatter(hLL[idx_WD08], dpedt[idx_WD08], c=b0[idx_WD08], marker='s',
               edgecolors='k', linewidth=1, zorder=2, label='$U_{10}=8$ m s$^{-1}$',
               cmap='bone_r', vmin=b0_min, vmax=b0_max)
    ax.scatter(hLL[idx_WD10], dpedt[idx_WD10], c=b0[idx_WD10], marker='^',
               edgecolors='k', linewidth=1, zorder=2, label='$U_{10}=10$ m s$^{-1}$',
               cmap='bone_r', vmin=b0_min, vmax=b0_max)
    ax.legend(loc='upper left')
    # add colorbar
    ax_inset = inset_axes(ax, width="30%", height="3%", loc='lower right',
                          bbox_to_anchor=(-0.05, 0.1, 1, 1),
                          bbox_transform=ax.transAxes, borderpad=0)
    cb = plt.colorbar(im, cax=ax_inset, orientation='horizontal', shrink=0.35,
                      ticks=[5, 100, 300, 500])
    cb.ax.set_xticklabels(['-5', '-100', '-300', '-500'])
    ax.text(0.75, 0.2, '$Q_0$ (W m$^{-2}$)', color='black', transform=ax.transAxes,
            fontsize=10, va='top', ha='left')
    # get axes ratio
    ll, ur = ax.get_position() * plt.gcf().get_size_inches()
    width, height = ur - ll
    axes_ratio = height / width
    # add arrow and label
    add_arrow(ax, 0.6, 0.2, 0.3, 0.48, axes_ratio, color='gray',
              text='Increasing Convection')
    add_arrow(ax, 0.3, 0.25, -0.2, 0.1, axes_ratio, color='black',
              text='Increasing Langmuir')
    add_arrow(ax, 0.65, 0.75, -0.25, 0.01, axes_ratio, color='black',
              text='Increasing Langmuir')
    ax.set_xscale('log')
    ax.set_yscale('log')
    if xlabel_on:
        ax.set_xlabel('$h/\kappa L$', fontsize=14)
    ax.set_ylabel('$d\mathrm{PE}/dt$', fontsize=14)
    ax.set_xlim([3e-3, 4e1])
    ax.set_ylim([2e-4, 5e-2])
    # set the tick labels font
    for label in (ax.get_xticklabels() + ax.get_yticklabels()):
        label.set_fontsize(14)


def plot_hLL_R(hLL, R, colors, legend_list, ax=None, xlabel_on=True):
    if ax is None:
        ax = plt.gca()
    ax.axhline(y=1, linewidth=1, color='black')
    nm = R.shape[0]
    for i in np.arange(nm):
        ax.scatter(hLL, R[i,:], color=colors[i], edgecolors='k', linewidth=0.5, zorder=10)
    ax.set_xscale('log')
    ax.set_xlim([3e-3, 4e1])
    if xlabel_on:
        ax.set_xlabel('$h/L_L$', fontsize=14)
    ax.set_ylabel('$R$', fontsize=14)
    # set the tick labels font
    for label in (ax.get_xticklabels() + ax.get_yticklabels()):
        label.set_fontsize(14)
    # legend
    if nm > 1:
        xshift = 0.2 + 0.05*(11-nm)
        xx = np.arange(nm)+1
        xx = xx*0.06+xshift
        yy = np.ones(xx.size)*0.1
        for i in np.arange(nm):
            ax.text(xx[i], yy[i], legend_list[i], color='black',
                    transform=ax.transAxes, fontsize=12, rotation=45,
                    va='bottom', ha='left')
            ax.scatter(xx[i], 0.07, s=60, color=colors[i], edgecolors='k',
                       linewidth=1, transform=ax.transAxes)


def add_arrow(ax, x, y, dx, dy, axes_ratio, color='black', text=None):
    ax.arrow(x, y, dx, dy, width=0.006, color=color, transform=ax.transAxes)
    if text is not None:
        dl = np.sqrt(dx**2+dy**2)
        xx = x + 0.5*dx + dy/dl*0.06
        yy = y + 0.5*dy - dx/dl*0.06
        angle = np.degrees(np.arctan(dy/dx*axes_ratio))
        ax.text(xx, yy, text, color=color, transform=ax.transAxes,
                fontsize=11, rotation=angle, va='center', ha='center')
_____no_output_____
MIT
examples/Entrainment-LF17/plot_Entrainment-LF17.ipynb
jithuraju1290/gotmtool
Load LF17 data
# load LF17 data
lf17_data = np.load('LF17_dPEdt.npz')
us0 = lf17_data['us0']
b0 = lf17_data['b0']
ustar = lf17_data['ustar']
hb = lf17_data['hb']
dpedt = lf17_data['dpedt']
casenames = lf17_data['casenames']
ncase = len(casenames)

# get parameter h/L_L = w*^3/u*^2/u^s(0)
inds = us0 == 0
us0[inds] = np.nan
hLL = b0*hb/ustar**2/us0
_____no_output_____
MIT
examples/Entrainment-LF17/plot_Entrainment-LF17.ipynb
jithuraju1290/gotmtool
Compute the rate of change in potential energy in GOTM runs
turbmethods = [
    'GLS-C01A',
    'KPP-CVMix',
    'KPPLT-VR12',
    'KPPLT-LF17',
]
ntm = len(turbmethods)
cmap = cm.get_cmap('rainbow')
if ntm == 1:
    colors = ['gray']
else:
    colors = cmap(np.linspace(0, 1, ntm))

m = Model(name='Entrainment-LF17', environ='../../.gotm_env.yaml')
gotmdir = m.environ['gotmdir_run']+'/'+m.name
print(gotmdir)

# Coriolis parameter (s^{-1})
f = 4*np.pi/86400*np.sin(np.pi/4)
# Inertial period (s)
Ti = 2*np.pi/f

# get dPEdt from GOTM run
rdpedt = np.zeros([ntm, ncase])
for i in np.arange(ntm):
    print(turbmethods[i])
    for j in np.arange(ncase):
        sim = Simulation(path=gotmdir+'/'+casenames[j]+'/'+turbmethods[i])
        var_gotm = sim.load_data().Epot
        epot_gotm = var_gotm.data.squeeze()
        dtime = var_gotm.time - var_gotm.time[0]
        time_gotm = (dtime.dt.days*86400.+dtime.dt.seconds).data
        # starting index for the last inertial period
        t0_gotm = time_gotm[-1]-Ti
        tidx0_gotm = np.argmin(np.abs(time_gotm-t0_gotm))
        # linear fit
        xx_gotm = time_gotm[tidx0_gotm:]-time_gotm[tidx0_gotm]
        yy_gotm = epot_gotm[tidx0_gotm:]-epot_gotm[tidx0_gotm]
        slope_gotm, intercept_gotm, r_value_gotm, p_value_gotm, std_err_gotm = \
            stats.linregress(xx_gotm, yy_gotm)
        rdpedt[i,j] = slope_gotm/dpedt[j]

fig, axarr = plt.subplots(2, 1, sharex='col')
fig.set_size_inches(6, 7)
plt.subplots_adjust(left=0.15, right=0.95, bottom=0.09, top=0.95, hspace=0.1)

plot_hLL_dpedt(hLL, dpedt, casenames, ax=axarr[0])
plot_hLL_R(hLL, rdpedt, colors, turbmethods, ax=axarr[1])

axarr[0].text(0.04, 0.14, '(a)', color='black', transform=axarr[0].transAxes,
              fontsize=14, va='top', ha='left')
axarr[1].text(0.88, 0.94, '(b)', color='black', transform=axarr[1].transAxes,
              fontsize=14, va='top', ha='left')
_____no_output_____
MIT
examples/Entrainment-LF17/plot_Entrainment-LF17.ipynb
jithuraju1290/gotmtool
Statistics

**Quick intro to the following packages**
- `hepstats`

I will not discuss here the `pyhf` package, which is very niche. Please refer to the [GitHub repository](https://github.com/scikit-hep/pyhf) or related material at https://scikit-hep.org/resources.

**`hepstats` - statistics tools and utilities**

The package contains 2 submodules:
- `hypotests`: provides tools to do hypothesis tests such as discovery test and computations of upper limits or confidence intervals.
- `modeling`: includes the Bayesian Block algorithm that can be used to improve the binning of histograms.

Note: feel free to complement the introduction below with the several tutorials available from the [GitHub repository](https://github.com/scikit-hep/hepstats).

**1. Adaptive binning determination**

The Bayesian Block algorithm produces histograms that accurately represent the underlying distribution while being robust to statistical fluctuations.
import numpy as np
import matplotlib.pyplot as plt

from hepstats.modeling import bayesian_blocks

data = np.append(np.random.laplace(size=10000), np.random.normal(5., 1., size=15000))

bblocks = bayesian_blocks(data)

plt.hist(data, bins=1000, label='Fine Binning', density=True)
plt.hist(data, bins=bblocks, label='Bayesian Blocks', histtype='step', linewidth=2, density=True)
plt.legend(loc=2);
_____no_output_____
BSD-3-Clause
05-statistics.ipynb
eduardo-rodrigues/2020-03-03_DESY_Scikit-HEP_HandsOn
Tirmzi Analysis

n=1000, m+=1000, nm-=120, istep=4, min=150, max=700
import sys
sys.path

import matplotlib.pyplot as plt
import numpy as np
import os
from scipy import signal

ls

import capsol.newanalyzecapsol as ac
ac.get_gridparameters

import glob
folders = glob.glob("FortranOutputTest/*/")
folders

all_data = dict()
for folder in folders:
    params = ac.get_gridparameters(folder + 'capsol.in')
    data = ac.np.loadtxt(folder + 'Z-U.dat')
    process_data = ac.process_data(params, data, smoothing=False, std=5*10**-9)
    all_data[folder] = process_data

all_params = dict()
for folder in folders:
    params = ac.get_gridparameters(folder + 'capsol.in')
    all_params[folder] = params

all_data
all_data.keys()

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    plt.plot(data['z'], data['c'], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('C v. Z for 1nm thick sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for 1nm thick sample 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    plt.plot(data['z'], data['c'], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('C v. Z for 10nm thick sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    plt.plot(data['z'], data['c'], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('C v. Z for 100nm sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    plt.plot(data['z'], data['c'], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('C v. Z for 500nm sample')
plt.ylabel("C(m)")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("C' v. Z for varying sample thickness, 06-28-2021.png")
No handles with labels found to put in legend.
MIT
data/Output-Python/Tirmzi_istep4-Copy2.ipynb
maroniea/xsede-spm
Cut off the last experiment because the capacitance was off the scale.
for params in all_params.values():
    print(params['Thickness_sample'])
    print(params['m-'])

all_params

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(4, -3)
    plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Cz vs. Z for 1.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(4, -3)
    plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Cz vs. Z for 10.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(4, -3)
    plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Cz vs. Z for 100.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(4, -3)
    plt.plot(data['z'][s], data['cz'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Cz vs. Z for 500.0nm')
plt.ylabel("Cz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Cz v. Z for varying sample thickness, 06-28-2021.png")

hoepker_data = np.loadtxt("Default Dataset (2).csv", delimiter=",")
hoepker_data

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(5, -5)
    plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Czz vs. Z for 1.0nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")

params

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(5, -5)
    plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Czz vs. Z for 10.0nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(5, -5)
    plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Czz vs. Z for 100.0nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(5, -5)
    plt.plot(data['z'][s], data['czz'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Czz vs. Z for 500.0 nm')
plt.ylabel("Czz")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 1.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(8, -8)
    plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('alpha vs. Z for 1.0nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Alpha v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 10.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(8, -8)
    plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Alpha vs. Z for 10.0 nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 100.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(8, -8)
    plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Alpha vs. Z for 100.0nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")

for key in {key: params for key, params in all_params.items() if params['Thickness_sample'] == 500.0}:
    data = all_data[key]
    thickness = all_params[key]['Thickness_sample']
    rtip = all_params[key]['Rtip']
    er = all_params[key]['eps_r']
    s = slice(8, -8)
    plt.plot(data['z'][s], data['alpha'][s], label=f'{rtip} nm, {er}, {thickness} nm')

plt.title('Alpha vs. Z for 500.0nm')
plt.ylabel("$\\alpha$")
plt.xlabel("Z(m)")
plt.legend()
plt.savefig("Czz v. Z for varying sample thickness, 06-28-2021.png")

data

from scipy.optimize import curve_fit

def Cz_model(z, a, n, b):
    return a*z**n + b

all_data.keys()

data = all_data['capsol-calc\\0001-capsol\\']
z = data['z'][1:-1]
cz = data['cz'][1:-1]

popt, pcov = curve_fit(Cz_model, z, cz, p0=[cz[0]*z[0], -1, 0])
a = popt[0]
n = popt[1]
b = popt[2]
std_devs = np.sqrt(pcov.diagonal())
sigma_a = std_devs[0]
sigma_n = std_devs[1]

model_output = Cz_model(z, a, n, b)
rmse = np.sqrt(np.mean((cz - model_output)**2))

f"a = {a} ± {sigma_a}"
f"n = {n} ± {sigma_n}"

model_output

"Root Mean Square Error"
rmse/np.mean(-cz)
_____no_output_____
MIT
data/Output-Python/Tirmzi_istep4-Copy2.ipynb
maroniea/xsede-spm
Q-learning

- Initialize $V(s)$ arbitrarily
- Repeat (for each episode):
  - Initialize $s$
  - Repeat (for each step of episode):
    - $a \leftarrow$ action given by $\pi$ for $s$
    - Take action $a$, observe reward $r$ and next state $s'$
    - $V(s) \leftarrow V(s) + \alpha [r + \gamma V(s') - V(s)]$
    - $s \leftarrow s'$
  - until $s$ is terminal
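The `td.TD` class used below comes from a local module that isn't shown here. A minimal tabular sketch of the update rule above, with the standard target $r + \gamma V(s')$ (all names in this sketch are illustrative, not the module's API):

```python
def td0_value_estimate(episodes, alpha=0.05, gamma=0.1):
    """Tabular TD(0). `episodes` is a list of episodes, each a list of
    (s, r, s_next) transitions, with s_next=None at the terminal step."""
    V = {}
    for episode in episodes:
        for s, r, s_next in episode:
            v_next = 0.0 if s_next is None else V.get(s_next, 0.0)
            # V(s) <- V(s) + alpha * [r + gamma * V(s') - V(s)]
            V[s] = V.get(s, 0.0) + alpha * (r + gamma * v_next - V.get(s, 0.0))
    return V

# Toy chain: state 0 -> state 1 -> terminal, reward 1.0 on the final step
episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 200
V = td0_value_estimate(episodes)
```

With enough repetitions, `V[1]` approaches the terminal reward 1.0 and `V[0]` approaches `gamma * V[1]`.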
import td
import scipy as sp

α = 0.05
γ = 0.1

td_learning = td.TD(α, γ)
_____no_output_____
MIT
notebooks/TD Learning Black Scholes.ipynb
FinTechies/HedgingRL
Black Scholes

$${\displaystyle d_{1}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+(r-q+{\frac {1}{2}}\sigma ^{2})(T-t)\right]}$$

$${\displaystyle C(S_{t},t)=e^{-r(T-t)}[FN(d_{1})-KN(d_{2})]\,}$$

$${\displaystyle d_{2}=d_{1}-\sigma {\sqrt {T-t}}={\frac {1}{\sigma {\sqrt {T-t}}}}\left[\ln \left({\frac {S_{t}}{K}}\right)+(r-q-{\frac {1}{2}}\sigma ^{2})(T-t)\right]}$$
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats  # needed so that sp.stats below resolves

d_1 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) + 0.5 * (σ ** 2) * (T - t))
d_2 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) - 0.5 * (σ ** 2) * (T - t))
call = lambda σ, T, t, S, K: S * sp.stats.norm.cdf(d_1(σ, T, t, S, K)) - K * sp.stats.norm.cdf(d_2(σ, T, t, S, K))

plt.plot(np.linspace(0.1, 4., 100), call(1., 1., .9, np.linspace(0.1, 4., 100), 1.))

d_1(1., 1., 0., 1.9, 1)

plt.plot(d_1(1., 1., 0., np.linspace(0.1, 2.9, 10), 1))

plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.2, np.linspace(0.01, 1.9, 100), 1)))
plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.6, np.linspace(0.01, 1.9, 100), 1)))
plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.9, np.linspace(0.01, 1.9, 100), 1)))
plt.plot(np.linspace(0.01, 1.9, 100), sp.stats.norm.cdf(d_1(1., 1., 0.99, np.linspace(0.01, 1.9, 100), 1)))


def iterate_series(n=1000, S0=1):
    while True:
        r = np.random.randn(n)
        S = np.cumsum(r) + S0
        yield S, r


def iterate_world(n=1000, S0=1, N=5):
    for (s, r) in take(N, iterate_series(n=n, S0=S0)):
        t, t_0 = 0, 0
        for t in np.linspace(0, len(s)-1, 100):
            r = s[int(t)] / s[int(t_0)]
            yield r, s[int(t)]
            t_0 = t


from cytoolz import take
import gym
import gym_bs
from test_cem_future import *
import pandas as pd

# df.iloc[3] = (0.2, 1, 3)
df

rwd, df, agent = noisy_evaluation(np.array([0.1, 0, 0]))
rwd
df
agent;

env.observation_space
_____no_output_____
MIT
notebooks/TD Learning Black Scholes.ipynb
FinTechies/HedgingRL
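The Black-Scholes formulas above can be sanity-checked with a short, self-contained pricing function. This is a sketch, not the notebook's own code: the helper name `bs_call` and the optional rate parameter `r` (set to 0, matching the driftless lambdas in the next cell) are assumptions of mine.

```python
import numpy as np
from scipy.stats import norm

def bs_call(sigma, T, t, S, K, r=0.0):
    """Black-Scholes price of a European call (no dividends)."""
    tau = T - t
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

# Sanity checks: a deep in-the-money call is worth roughly its intrinsic
# value (S - K), and the price increases with volatility.
print(bs_call(0.2, 1.0, 0.0, 2.0, 1.0))  # deep ITM: close to S - K = 1
print(bs_call(0.2, 1.0, 0.0, 1.0, 1.0))  # at the money
print(bs_call(0.6, 1.0, 0.0, 1.0, 1.0))  # higher vol -> higher price
```

The at-the-money case has the well-known closed form `S * (2 * norm.cdf(sigma * sqrt(tau) / 2) - 1)`, which makes it easy to verify by hand.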
Plotting with Matplotlib IPython works with the [Matplotlib](http://matplotlib.org/) plotting library, which integrates Matplotlib with IPython's display system and event loop handling. matplotlib mode To make plots using Matplotlib, you must first enable IPython's matplotlib mode. To do this, run the `%matplotlib` magic command to enable plotting in the current Notebook. This magic takes an optional argument that specifies which Matplotlib backend should be used. Most of the time, in the Notebook, you will want to use the `inline` backend, which will embed plots inside the Notebook:
%matplotlib inline
_____no_output_____
BSD-3-Clause
001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/Plotting in the Notebook.ipynb
willirath/jupyter-jsc-notebooks
You can also use Matplotlib GUI backends in the Notebook, such as the Qt backend (`%matplotlib qt`). This will use Matplotlib's interactive Qt UI in a floating window to the side of your browser. Of course, this only works if your browser is running on the same system as the Notebook Server. You can always call the `display` function to paste figures into the Notebook document. Making a simple plot With matplotlib enabled, plotting should just work.
import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 3*np.pi, 500) plt.plot(x, np.sin(x**2)) plt.title('A simple chirp');
_____no_output_____
BSD-3-Clause
001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/Plotting in the Notebook.ipynb
willirath/jupyter-jsc-notebooks
These images can be resized by dragging the handle in the lower right corner. Double clicking will return them to their original size. One thing to be aware of is that by default, the `Figure` object is cleared at the end of each cell, so you will need to issue all plotting commands for a single figure in a single cell. Loading Matplotlib demos with %load IPython's `%load` magic can be used to load any Matplotlib demo by its URL:
# %load http://matplotlib.org/mpl_examples/showcase/integral_demo.py """ Plot demonstrating the integral as the area under a curve. Although this is a simple example, it demonstrates some important tweaks: * A simple line plot with custom color and line width. * A shaded region created using a Polygon patch. * A text label with mathtext rendering. * figtext calls to label the x- and y-axes. * Use of axis spines to hide the top and right spines. * Custom tick placement and labels. """ import numpy as np import matplotlib.pyplot as plt from matplotlib.patches import Polygon def func(x): return (x - 3) * (x - 5) * (x - 7) + 85 a, b = 2, 9 # integral limits x = np.linspace(0, 10) y = func(x) fig, ax = plt.subplots() plt.plot(x, y, 'r', linewidth=2) plt.ylim(bottom=0) # Make the shaded region ix = np.linspace(a, b) iy = func(ix) verts = [(a, 0)] + list(zip(ix, iy)) + [(b, 0)] poly = Polygon(verts, facecolor='0.9', edgecolor='0.5') ax.add_patch(poly) plt.text(0.5 * (a + b), 30, r"$\int_a^b f(x)\mathrm{d}x$", horizontalalignment='center', fontsize=20) plt.figtext(0.9, 0.05, '$x$') plt.figtext(0.1, 0.9, '$y$') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.xaxis.set_ticks_position('bottom') ax.set_xticks((a, b)) ax.set_xticklabels(('$a$', '$b$')) ax.set_yticks([]) plt.show()
_____no_output_____
BSD-3-Clause
001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/Plotting in the Notebook.ipynb
willirath/jupyter-jsc-notebooks
Matplotlib 1.4 introduces an interactive backend for use in the notebook, called 'nbagg'. You can enable this with `%matplotlib notebook`. With this backend, you will get interactive panning and zooming of matplotlib figures in your browser.
%matplotlib widget plt.figure() x = np.linspace(0, 5 * np.pi, 1000) for n in range(1, 4): plt.plot(np.sin(n * x)) plt.show()
_____no_output_____
BSD-3-Clause
001-Jupyter/001-Tutorials/003-IPython-in-Depth/examples/IPython Kernel/Plotting in the Notebook.ipynb
willirath/jupyter-jsc-notebooks
Let's start by importing the libraries that we need for this exercise.
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib
from sklearn.model_selection import train_test_split

# matplotlib settings
matplotlib.rcParams['xtick.major.size'] = 7
matplotlib.rcParams['xtick.labelsize'] = 'x-large'
matplotlib.rcParams['ytick.major.size'] = 7
matplotlib.rcParams['ytick.labelsize'] = 'x-large'
matplotlib.rcParams['xtick.top'] = False
matplotlib.rcParams['ytick.right'] = False
matplotlib.rcParams['ytick.direction'] = 'in'
matplotlib.rcParams['xtick.direction'] = 'in'
matplotlib.rcParams['font.size'] = 15
matplotlib.rcParams['figure.figsize'] = [7, 7]

# We need the astroml library to fetch the photometric datasets of sdss qsos and stars
pip install astroml

from astroML.datasets import fetch_dr7_quasar
from astroML.datasets import fetch_sdss_sspp

quasars = fetch_dr7_quasar()
stars = fetch_sdss_sspp()

# Data processing taken from
# https://www.astroml.org/book_figures/chapter9/fig_star_quasar_ROC.html by Jake VanderPlas
# stack colors into matrix X
Nqso = len(quasars)
Nstars = len(stars)
X = np.empty((Nqso + Nstars, 4), dtype=float)
X[:Nqso, 0] = quasars['mag_u'] - quasars['mag_g']
X[:Nqso, 1] = quasars['mag_g'] - quasars['mag_r']
X[:Nqso, 2] = quasars['mag_r'] - quasars['mag_i']
X[:Nqso, 3] = quasars['mag_i'] - quasars['mag_z']
X[Nqso:, 0] = stars['upsf'] - stars['gpsf']
X[Nqso:, 1] = stars['gpsf'] - stars['rpsf']
X[Nqso:, 2] = stars['rpsf'] - stars['ipsf']
X[Nqso:, 3] = stars['ipsf'] - stars['zpsf']

y = np.zeros(Nqso + Nstars, dtype=int)
y[:Nqso] = 1
X = X / np.max(X, axis=0)

# split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.9)

# Now let's build a simple Sequential model in which fully connected layers come after one another
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),  # this flattens the input
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid")
])

model.compile(optimizer='adam', loss='binary_crossentropy')

history = model.fit(X_train, y_train, validation_data=(X_test, y_test),
                    batch_size=32, epochs=20, verbose=1)

loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(loss))
plt.plot(epochs, loss, lw=5, label='Training loss')
plt.plot(epochs, val_loss, lw=5, label='validation loss')
plt.title('Loss')
plt.legend(loc=0)
plt.show()

# model probabilities (predict_proba was removed from tf.keras; for a
# sigmoid output layer, predict returns the same values)
prob = model.predict(X_test)

from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve

fpr, tpr, thresholds = roc_curve(y_test, prob)
plt.loglog(fpr, tpr, lw=4)
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0.0, 0.15)
plt.ylim(0.6, 1.01)
plt.show()

plt.plot(thresholds, tpr, lw=4)
plt.plot(thresholds, fpr, lw=4)
plt.xlim(0, 1)
plt.yscale("log")
plt.show()

# Now let's look at the confusion matrix
y_pred = model.predict(X_test)
z_pred = np.zeros(y_pred.shape[0], dtype=int)
mask = np.where(y_pred > .5)[0]
z_pred[mask] = 1
confusion_matrix(y_test, z_pred.astype(int))

import os, signal
os.kill(os.getpid(), signal.SIGKILL)
_____no_output_____
MIT
day2/nn_qso_finder.ipynb
mjvakili/MLcourse
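The ROC curve computed above is often summarised by a single number, the area under the curve (AUC). A minimal, self-contained sketch of that computation — the synthetic `scores` here merely stand in for the model's sigmoid outputs, which are not available outside the notebook:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical stand-in for the network's sigmoid outputs on X_test:
# informative but noisy scores for 100 stars (label 0) and 100 quasars (label 1).
rng = np.random.RandomState(0)
y_true = np.concatenate([np.zeros(100), np.ones(100)]).astype(int)
scores = np.clip(np.concatenate([rng.normal(0.3, 0.15, 100),
                                 rng.normal(0.7, 0.15, 100)]), 0.0, 1.0)

fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
print("AUC:", auc)  # well above 0.5 for an informative classifier
```

AUC is threshold-free, which makes it a convenient complement to the single-threshold confusion matrix shown in the cell above.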
df.query('age == 10') You can also achieve this result via the traditional filtering method. filter_1 = df['Mon'] > df['Tues'] df[filter_1] If needed you can also use an environment variable to filter your data. Make sure to put an "@" sign in front of your variable within the string. dinner_limit = 120 df.query('Thurs > @dinner_limit')
_____no_output_____
MIT
PandasQureys.ipynb
nealonleo9/SQL
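The query/filter equivalence described above can be sketched as a runnable example. Only the column names `Mon`, `Tues`, and `Thurs` come from the text; the numbers are made up:

```python
import pandas as pd

# Made-up data; only the column names come from the examples above.
df = pd.DataFrame({'Mon': [100, 90, 150],
                   'Tues': [80, 95, 120],
                   'Thurs': [110, 130, 100]})

# query() and boolean-mask filtering give identical results
q = df.query('Mon > Tues')
mask = df[df['Mon'] > df['Tues']]
print(q.equals(mask))  # True

# '@' pulls a Python variable into the query expression
dinner_limit = 120
print(df.query('Thurs > @dinner_limit'))  # only the row with Thurs == 130
```

`query()` tends to read better for chained conditions, while boolean masks are easier to build programmatically; both produce the same selections.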
Udacity PyTorch Scholarship Final Lab Challenge Guide **A hands-on guide to get 90%+ accuracy and complete the challenge** **By [Soumya Ranjan Behera](https://www.linkedin.com/in/soumya044)** This Tutorial will be divided into Two Parts, [1. Model Building and Training](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-1/) [2. Submit in Udacity's Workspace for evaluation](https://www.kaggle.com/soumya044/udacity-pytorch-final-lab-guide-part-2/) **Note:** This tutorial is like a template or guide for newbies to overcome the fear of the final lab challenge. My intent is not to promote plagiarism or any means of cheating. Users are encouraged to take this tutorial as a baseline and build their own better model. Cheers! **Fork this Notebook and Run it from Top-To-Bottom Step by Step** Part 1: Build and Train a Model **Credits:** The dataset credit goes to [Lalu Erfandi Maula Yusnu](https://www.kaggle.com/nunenuh) 1. Import the dataset and visualize some data
import numpy as np # linear algebra import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv) import os print(os.listdir("../input/")) # Any results you write to the current directory are saved as output.
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
**Import some visualization Libraries**
import matplotlib.pyplot as plt %matplotlib inline import cv2 # Set Train and Test Directory Variables TRAIN_DATA_DIR = "../input/flower_data/flower_data/train/" VALID_DATA_DIR = "../input/flower_data/flower_data/valid/" # Visualize Some Images of any Random Directory-cum-Class FILE_DIR = str(np.random.randint(1, 103)) print("Class Directory: ", FILE_DIR) for file_name in os.listdir(os.path.join(TRAIN_DATA_DIR, FILE_DIR))[1:3]: img_array = cv2.imread(os.path.join(TRAIN_DATA_DIR, FILE_DIR, file_name)) img_array = cv2.resize(img_array, (224, 224), interpolation=cv2.INTER_CUBIC) plt.imshow(img_array) plt.show() print(img_array.shape)
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
2. Data Preprocessing (Image Augmentation) **Import PyTorch libraries**
import torch import torchvision from torchvision import datasets, models, transforms import torch.nn as nn torch.__version__
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
**Note:** **Look carefully! Kaggle uses v1.0.0 while the Udacity workspace has v0.4.0 (some issues may arise, but we'll solve them).**
# check if CUDA is available train_on_gpu = torch.cuda.is_available() if not train_on_gpu: print('CUDA is not available. Training on CPU ...') else: print('CUDA is available! Training on GPU ...')
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
**Make a Class Variable, i.e., a list of Target Categories (the 102 species)**
# I used os.listdir() to maintain the ordering classes = os.listdir(VALID_DATA_DIR)
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
**Load and Transform (Image Augmentation)** Source: https://github.com/udacity/deep-learning-v2-pytorch/blob/master/convolutional-neural-networks/cifar-cnn/cifar10_cnn_augmentation.ipynb
# Load and transform data using ImageFolder # VGG-16 Takes 224x224 images as input, so we resize all of them data_transform = transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])]) train_data = datasets.ImageFolder(TRAIN_DATA_DIR, transform=data_transform) test_data = datasets.ImageFolder(VALID_DATA_DIR, transform=data_transform) # print out some data stats print('Num training images: ', len(train_data)) print('Num test images: ', len(test_data))
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
Find more on Image Transforms using PyTorch Here (https://pytorch.org/docs/stable/torchvision/transforms.html) 3. Make a DataLoader
# define dataloader parameters batch_size = 32 num_workers=0 # prepare data loaders train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers, shuffle=True) test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers, shuffle=True)
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
**Visualize Sample Images**
# Visualize some sample data # obtain one batch of training images dataiter = iter(train_loader) images, labels = dataiter.next() images = images.numpy() # convert images to numpy for display # plot the images in the batch, along with the corresponding labels fig = plt.figure(figsize=(25, 4)) for idx in np.arange(20): ax = fig.add_subplot(2, 20 // 2, idx + 1, xticks=[], yticks=[]) plt.imshow(np.transpose(images[idx], (1, 2, 0))) ax.set_title(classes[labels[idx]])
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
**Here plt.imshow() clips our normalized float image data to the displayable [0, 1] range to show the images. The warning message is due to our transform (normalization) function; we can safely ignore it.** 4. Use a Pre-Trained Model (VGG16) Here we use VGG16. You can experiment with other models. References: https://github.com/udacity/deep-learning-v2-pytorch/blob/master/transfer-learning/Transfer_Learning_Solution.ipynb **Try more models:** https://pytorch.org/docs/stable/torchvision/models.html
# Load the pretrained model from pytorch model = models.<ModelNameHere>(pretrained=True) print(model)
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
We can see from the above output that the last (6th) layer of the classifier is a fully-connected layer with in_features=4096 and out_features=1000.
print(model.classifier[6].in_features) print(model.classifier[6].out_features) # The above lines work for vgg only. For other models refer to print(model) and look for last FC layer
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
**Freeze Training for all 'Features Layers', Only Train Classifier Layers**
# Freeze training for all "features" layers for param in model.features.parameters(): param.requires_grad = False #For models like ResNet or Inception use the following, # Freeze training for all "features" layers # for _, param in model.named_parameters(): # param.requires_grad = False
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
Let's Add our own Last Layer which will have 102 out_features for 102 species
# VGG16 n_inputs = model.classifier[6].in_features #Others # n_inputs = model.fc.in_features # add last linear layer (n_inputs -> 102 flower classes) # new layers automatically have requires_grad = True last_layer = nn.Linear(n_inputs, len(classes)) # VGG16 model.classifier[6] = last_layer # Others #model.fc = last_layer # if GPU is available, move the model to GPU if train_on_gpu: model.cuda() # check to see that your last layer produces the expected number of outputs #VGG print(model.classifier[6].out_features) #Others #print(model.fc.out_features)
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
5. Specify our Loss Function and Optimizer
import torch.optim as optim # specify loss function (categorical cross-entropy) criterion = #TODO # specify optimizer (stochastic gradient descent) and learning rate = 0.01 or 0.001 optimizer = #TODO
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
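One possible way to fill in the two TODOs above, following the comments' own suggestions (categorical cross-entropy and SGD with lr = 0.01). The tiny stand-in `model` below is mine so the sketch runs on its own; in the notebook you would keep the VGG16 defined earlier and pass only its trainable parameters to the optimizer:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-in for the VGG16 classifier built earlier
# (8 input features, 102 flower classes) so this snippet is self-contained.
model = nn.Linear(8, 102)

# categorical cross-entropy: expects raw logits + integer class labels
criterion = nn.CrossEntropyLoss()

# SGD over the trainable parameters only (the frozen VGG features have
# requires_grad = False, so the filter skips them), lr = 0.01
optimizer = optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.01)

# smoke test: one optimization step on random data
x, y = torch.randn(4, 8), torch.randint(0, 102, (4,))
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))  # roughly log(102) ≈ 4.6 for an untrained model
```

`CrossEntropyLoss` applies log-softmax internally, which is why the network's last layer emits raw logits rather than softmax probabilities.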
6. Train our Model and Save necessary checkpoints
# Define epochs (between 50-200)
epochs = 20

# initialize tracker for minimum validation loss
valid_loss_min = np.Inf  # set initial "min" to infinity

# Some lists to keep track of loss and accuracy during each epoch
epoch_list = []
train_loss_list = []
val_loss_list = []
train_acc_list = []
val_acc_list = []

# Start epochs
for epoch in range(epochs):
    # adjust_learning_rate(optimizer, epoch)

    # monitor training and validation loss
    train_loss = 0.0
    val_loss = 0.0

    ###################
    # train the model #
    ###################
    # Set the training mode ON -> Activate Dropout Layers
    model.train()  # prepare model for training
    # Calculate Accuracy
    correct = 0
    total = 0

    # Load Train Images with Labels (Targets)
    for data, target in train_loader:
        if train_on_gpu:
            data, target = data.cuda(), target.cuda()
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        output = model(data)
        if type(output) == tuple:
            output, _ = output
        # Calculate Training Accuracy
        predicted = torch.max(output.data, 1)[1]
        # Total number of labels
        total += len(target)
        # Total correct predictions
        correct += (predicted == target).sum()
        # calculate the loss
        loss = criterion(output, target)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update running training loss
        train_loss += loss.item() * data.size(0)

    # calculate average training loss over an epoch
    train_loss = train_loss / len(train_loader.dataset)
    # Avg Accuracy
    accuracy = 100 * correct / float(total)
    # Put them in their list
    train_acc_list.append(accuracy)
    train_loss_list.append(train_loss)

    # Implement Validation like K-fold Cross-validation
    # Set Evaluation Mode ON -> Turn Off Dropout
    model.eval()  # Required for Evaluation/Test
    # Calculate Test/Validation Accuracy
    correct = 0
    total = 0
    with torch.no_grad():
        for data, target in test_loader:
            if train_on_gpu:
                data, target = data.cuda(), target.cuda()
            # Predict Output
            output = model(data)
            if type(output) == tuple:
                output, _ = output
            # Calculate Loss
            loss = criterion(output, target)
            val_loss += loss.item() * data.size(0)
            # Get predictions from the maximum value
            predicted = torch.max(output.data, 1)[1]
            # Total number of labels
            total += len(target)
            # Total correct predictions
            correct += (predicted == target).sum()

    # calculate average validation loss and accuracy over an epoch
    val_loss = val_loss / len(test_loader.dataset)
    accuracy = 100 * correct / float(total)
    # Put them in their list
    val_acc_list.append(accuracy)
    val_loss_list.append(val_loss)

    # Print the Epoch and Training Loss Details with Validation Accuracy
    print('Epoch: {} \tTraining Loss: {:.4f}\t Val. acc: {:.2f}%'.format(
        epoch + 1, train_loss, accuracy))

    # save model if validation loss has decreased
    if val_loss <= valid_loss_min:
        print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
            valid_loss_min, val_loss))
        # Save Model State on Checkpoint
        torch.save(model.state_dict(), 'model.pt')
        valid_loss_min = val_loss

    # Move to next epoch
    epoch_list.append(epoch + 1)
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
Load Model State from Checkpoint
model.load_state_dict(torch.load('model.pt'))
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
Save the whole Model (Pickling)
#Save/Pickle the Model torch.save(model, 'classifier.pth')
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
7. Visualize Model Training and Validation
# Training / Validation Loss plt.plot(epoch_list, train_loss_list) plt.plot(epoch_list, val_loss_list) plt.xlabel("Epochs") plt.ylabel("Loss") plt.title("Training/Validation Loss vs Number of Epochs") plt.legend(['Train', 'Valid'], loc='upper right') plt.show() # Train/Valid Accuracy plt.plot(epoch_list, train_acc_list) plt.plot(epoch_list, val_acc_list) plt.xlabel("Epochs") plt.ylabel("Training/Validation Accuracy") plt.title("Accuracy vs Number of Epochs") plt.legend(['Train', 'Valid'], loc='best') plt.show()
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
From the above graphs we get some really impressive results. **Overall Accuracy**
val_acc = sum(val_acc_list[:]).item()/len(val_acc_list) print("Validation Accuracy of model = {} %".format(val_acc))
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
8. Test our Model Performance
# obtain one batch of test images dataiter = iter(test_loader) images, labels = dataiter.next() img = images.numpy() # move model inputs to cuda, if GPU available if train_on_gpu: images = images.cuda() model.eval() # Required for Evaluation/Test # get sample outputs output = model(images) if type(output) == tuple: output, _ = output # convert output probabilities to predicted class _, preds_tensor = torch.max(output, 1) preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy()) # plot the images in the batch, along with predicted and true labels fig = plt.figure(figsize=(20, 5)) for idx in np.arange(12): ax = fig.add_subplot(3, 4, idx+1, xticks=[], yticks=[]) plt.imshow(np.transpose(img[idx], (1, 2, 0))) ax.set_title("Pr: {} Ac: {}".format(classes[preds[idx]], classes[labels[idx]]), color=("green" if preds[idx]==labels[idx].item() else "red"))
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
**We can see that the Correctly Classifies Results are Marked in "Green" and the misclassifies ones are "Red"** 8.1 Test our Model Performance with Gabriele Picco's Program **Credits: ** **Gabriele Picco** (https://github.com/GabrielePicco/deep-learning-flower-identifier) **Special Instruction:** 1. **Uncomment the following two code cells while running the notebook.**2. Comment these two blocks while **Commit**, otherwise you will get an error "Too many Output Files" in Kaggle Only.3. If you find a solution to this then let me know.
# !git clone https://github.com/GabrielePicco/deep-learning-flower-identifier # !pip install airtable # import sys # sys.path.insert(0, 'deep-learning-flower-identifier') # from test_model_pytorch_facebook_challenge import calc_accuracy # calc_accuracy(model, input_image_size=224, use_google_testset=False)
_____no_output_____
MIT
udacity-pytorch-final-lab-guide-part-1.ipynb
styluna7/notebooks
Exercise: Find correspondences between old and modern English The purpose of this exercise is to use two vecsigrafos, one built on UMBC and WordNet and another one produced by directly running Swivel against a corpus of Shakespeare's complete works, to try to find correlations between old and modern English, e.g. "thou" -> "you", "dost" -> "do", "raiment" -> "clothing". For example, you can try to pick a set of 100 words in the "ye olde" English corpus and see how they correlate to UMBC over WordNet. ![William Shakespeare](https://github.com/HybridNLP2018/tutorial/blob/master/images/220px-Shakespeare.jpg?raw=1) Next, we prepare the embeddings from the Shakespeare corpus and load a UMBC vecsigrafo, which will provide the two vector spaces to correlate. Download a small text corpus First, we download the corpus into our environment. We will use Shakespeare's complete works corpus, published as part of Project Gutenberg and publicly available.
import os %ls #!rm -r tutorial !git clone https://github.com/HybridNLP2018/tutorial
fatal: destination path 'tutorial' already exists and is not an empty directory.
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
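The correspondence search this exercise asks for boils down to nearest-neighbour lookup by cosine similarity across two vector spaces. A minimal numpy sketch with made-up toy vectors — in the real exercise you would load the Shakespeare Swivel vectors trained below and a UMBC vecsigrafo instead:

```python
import numpy as np

def nearest(word_vec, vocab, mat, k=3):
    """Return the k vocabulary entries whose rows in `mat` are most
    cosine-similar to `word_vec`."""
    sims = mat @ word_vec / (np.linalg.norm(mat, axis=1) * np.linalg.norm(word_vec) + 1e-9)
    return [vocab[i] for i in np.argsort(-sims)[:k]]

# Tiny made-up "modern English" space; real code would use the UMBC
# vecsigrafo here and Shakespeare-corpus vectors for the query words.
modern_vocab = ['you', 'do', 'clothing', 'castle']
modern_mat = np.array([[1.0, 0.1, 0.0],
                       [0.1, 1.0, 0.0],
                       [0.0, 0.1, 1.0],
                       [0.5, 0.5, 0.5]])

thou_vec = np.array([0.9, 0.2, 0.1])  # pretend embedding of "thou"
print(nearest(thou_vec, modern_vocab, modern_mat, k=2))  # 'you' ranks first
```

Note that two independently trained spaces are not directly comparable, so in practice you would first learn a linear alignment between them (e.g. fitted on words shared by both vocabularies) and map the Shakespeare vectors through it before running the nearest-neighbour search.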
Let us see if the corpus is where we think it is:
%cd tutorial/lit %ls
/content/tutorial/lit coocs/ shakespeare_complete_works.txt swivel/ wget-log
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Downloading Swivel
!wget http://expertsystemlab.com/hybridNLP18/swivel.zip !unzip swivel.zip !rm swivel/* !rm swivel.zip
Redirecting output to 'wget-log.1'. Archive: swivel.zip inflating: swivel/analogy.cc inflating: swivel/distributed.sh inflating: swivel/eval.mk inflating: swivel/fastprep.cc inflating: swivel/fastprep.mk inflating: swivel/glove_to_shards.py inflating: swivel/nearest.py inflating: swivel/prep.py inflating: swivel/README.md inflating: swivel/swivel.py inflating: swivel/text2bin.py inflating: swivel/vecs.py inflating: swivel/wordsim.py
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Learn the Swivel embeddings over the Old Shakespeare corpus Calculating the co-occurrence matrix
corpus_path = '/content/tutorial/lit/shakespeare_complete_works.txt' coocs_path = '/content/tutorial/lit/coocs' shard_size = 512 freq=3 !python /content/tutorial/scripts/swivel/prep.py --input={corpus_path} --output_dir={coocs_path} --shard_size={shard_size} --min_count={freq} %ls {coocs_path} | head -n 10
col_sums.txt col_vocab.txt row_sums.txt row_vocab.txt shard-000-000.pb shard-000-001.pb shard-000-002.pb shard-000-003.pb shard-000-004.pb shard-000-005.pb
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Learning the embeddings from the matrix
vec_path = '/content/tutorial/lit/vec/' !python /content/tutorial/scripts/swivel/swivel.py --input_base_path={coocs_path} \ --output_base_path={vec_path} \ --num_epochs=20 --dim=300 \ --submatrix_rows={shard_size} --submatrix_cols={shard_size}
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/input.py:187: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the `tf.data` module. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/input.py:187: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version. Instructions for updating: To construct input pipelines, use the `tf.data` module. WARNING:tensorflow:From /content/tutorial/scripts/swivel/swivel.py:495: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version. Instructions for updating: Please switch to tf.train.MonitoredTrainingSession WARNING:tensorflow:Issue encountered when serializing global_step. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. 
'Tensor' object has no attribute 'to_proto' 2018-10-08 13:14:16.156023: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2018-10-08 13:14:16.156566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties: name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235 pciBusID: 0000:00:04.0 totalMemory: 11.17GiB freeMemory: 11.10GiB 2018-10-08 13:14:16.156611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0 2018-10-08 13:14:18.064223: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix: 2018-10-08 13:14:18.064387: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0 2018-10-08 13:14:18.064482: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N 2018-10-08 13:14:18.064823: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0. 2018-10-08 13:14:18.069298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10759 MB memory) -> physical GPU (device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7) INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Starting standard services. INFO:tensorflow:Saving checkpoint to path /content/tutorial/lit/vec/model.ckpt INFO:tensorflow:Starting queue runners. INFO:tensorflow:global_step/sec: 0 WARNING:tensorflow:Issue encountered when serializing global_step. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. 
'Tensor' object has no attribute 'to_proto' INFO:tensorflow:Recording summary at step 0. INFO:tensorflow:local_step=10 global_step=10 loss=19.0, 0.0% complete INFO:tensorflow:local_step=20 global_step=20 loss=18.7, 0.0% complete INFO:tensorflow:local_step=30 global_step=30 loss=19.0, 0.1% complete INFO:tensorflow:local_step=40 global_step=40 loss=18.1, 0.1% complete INFO:tensorflow:local_step=50 global_step=50 loss=19.5, 0.1% complete INFO:tensorflow:local_step=60 global_step=60 loss=17.5, 0.1% complete INFO:tensorflow:local_step=70 global_step=70 loss=17.1, 0.2% complete INFO:tensorflow:local_step=80 global_step=80 loss=17.3, 0.2% complete INFO:tensorflow:local_step=90 global_step=90 loss=18.9, 0.2% complete INFO:tensorflow:local_step=100 global_step=100 loss=16.0, 0.2% complete INFO:tensorflow:local_step=110 global_step=110 loss=153.8, 0.3% complete INFO:tensorflow:local_step=120 global_step=120 loss=12.1, 0.3% complete INFO:tensorflow:local_step=130 global_step=130 loss=27.2, 0.3% complete INFO:tensorflow:local_step=140 global_step=140 loss=20.3, 0.3% complete INFO:tensorflow:local_step=150 global_step=150 loss=18.5, 0.4% complete INFO:tensorflow:local_step=160 global_step=160 loss=14.9, 0.4% complete INFO:tensorflow:local_step=170 global_step=170 loss=17.0, 0.4% complete INFO:tensorflow:local_step=180 global_step=180 loss=14.8, 0.4% complete INFO:tensorflow:local_step=190 global_step=190 loss=13.7, 0.4% complete INFO:tensorflow:local_step=200 global_step=200 loss=17.0, 0.5% complete INFO:tensorflow:local_step=210 global_step=210 loss=22.4, 0.5% complete INFO:tensorflow:local_step=220 global_step=220 loss=21.1, 0.5% complete INFO:tensorflow:local_step=230 global_step=230 loss=15.8, 0.5% complete INFO:tensorflow:local_step=240 global_step=240 loss=14.4, 0.6% complete INFO:tensorflow:local_step=250 global_step=250 loss=27.9, 0.6% complete INFO:tensorflow:local_step=260 global_step=260 loss=22.9, 0.6% complete INFO:tensorflow:local_step=270 global_step=270 
loss=22.8, 0.6% complete INFO:tensorflow:local_step=280 global_step=280 loss=21.2, 0.7% complete INFO:tensorflow:local_step=290 global_step=290 loss=23.8, 0.7% complete
[... training log truncated: local_step/global_step advance in increments of 10 from 300 to 5650; loss fluctuates in the 18-24 range with occasional spikes above 200 (e.g. 244.1 at step 1820, 349.4 at step 3420) and trends downward from roughly 23 to 18 ...]
INFO:tensorflow:local_step=5650 global_step=5650 loss=19.1, 13.4% complete INFO:tensorflow:local_step=5660 global_step=5660
loss=18.9, 13.4% complete INFO:tensorflow:local_step=5670 global_step=5670 loss=18.9, 13.4% complete INFO:tensorflow:local_step=5680 global_step=5680 loss=19.6, 13.4% complete INFO:tensorflow:local_step=5690 global_step=5690 loss=18.9, 13.4% complete INFO:tensorflow:local_step=5700 global_step=5700 loss=18.7, 13.5% complete INFO:tensorflow:local_step=5710 global_step=5710 loss=17.5, 13.5% complete INFO:tensorflow:local_step=5720 global_step=5720 loss=19.5, 13.5% complete INFO:tensorflow:local_step=5730 global_step=5730 loss=20.6, 13.5% complete INFO:tensorflow:local_step=5740 global_step=5740 loss=18.0, 13.6% complete INFO:tensorflow:local_step=5750 global_step=5750 loss=19.8, 13.6% complete INFO:tensorflow:local_step=5760 global_step=5760 loss=19.7, 13.6% complete INFO:tensorflow:local_step=5770 global_step=5770 loss=18.4, 13.6% complete INFO:tensorflow:local_step=5780 global_step=5780 loss=19.4, 13.7% complete INFO:tensorflow:local_step=5790 global_step=5790 loss=20.5, 13.7% complete INFO:tensorflow:local_step=5800 global_step=5800 loss=17.6, 13.7% complete INFO:tensorflow:local_step=5810 global_step=5810 loss=18.4, 13.7% complete INFO:tensorflow:local_step=5820 global_step=5820 loss=19.1, 13.8% complete INFO:tensorflow:local_step=5830 global_step=5830 loss=20.1, 13.8% complete INFO:tensorflow:local_step=5840 global_step=5840 loss=19.0, 13.8% complete INFO:tensorflow:local_step=5850 global_step=5850 loss=18.1, 13.8% complete INFO:tensorflow:local_step=5860 global_step=5860 loss=20.0, 13.8% complete INFO:tensorflow:local_step=5870 global_step=5870 loss=19.3, 13.9% complete INFO:tensorflow:local_step=5880 global_step=5880 loss=18.4, 13.9% complete INFO:tensorflow:local_step=5890 global_step=5890 loss=18.2, 13.9% complete INFO:tensorflow:local_step=5900 global_step=5900 loss=18.1, 13.9% complete INFO:tensorflow:local_step=5910 global_step=5910 loss=18.3, 14.0% complete INFO:tensorflow:local_step=5920 global_step=5920 loss=17.3, 14.0% complete 
INFO:tensorflow:local_step=5930 global_step=5930 loss=19.1, 14.0% complete INFO:tensorflow:local_step=5940 global_step=5940 loss=20.5, 14.0% complete INFO:tensorflow:local_step=5950 global_step=5950 loss=17.8, 14.1% complete INFO:tensorflow:local_step=5960 global_step=5960 loss=19.8, 14.1% complete INFO:tensorflow:local_step=5970 global_step=5970 loss=18.0, 14.1% complete INFO:tensorflow:local_step=5980 global_step=5980 loss=18.3, 14.1% complete INFO:tensorflow:local_step=5990 global_step=5990 loss=18.3, 14.2% complete INFO:tensorflow:local_step=6000 global_step=6000 loss=18.8, 14.2% complete INFO:tensorflow:local_step=6010 global_step=6010 loss=267.1, 14.2% complete INFO:tensorflow:local_step=6020 global_step=6020 loss=18.8, 14.2% complete INFO:tensorflow:local_step=6030 global_step=6030 loss=21.6, 14.2% complete INFO:tensorflow:local_step=6040 global_step=6040 loss=344.7, 14.3% complete INFO:tensorflow:local_step=6050 global_step=6050 loss=18.4, 14.3% complete INFO:tensorflow:local_step=6060 global_step=6060 loss=18.8, 14.3% complete INFO:tensorflow:local_step=6070 global_step=6070 loss=18.6, 14.3% complete INFO:tensorflow:local_step=6080 global_step=6080 loss=17.2, 14.4% complete INFO:tensorflow:local_step=6090 global_step=6090 loss=18.3, 14.4% complete INFO:tensorflow:local_step=6100 global_step=6100 loss=18.7, 14.4% complete INFO:tensorflow:local_step=6110 global_step=6110 loss=18.7, 14.4% complete INFO:tensorflow:local_step=6120 global_step=6120 loss=20.3, 14.5% complete INFO:tensorflow:local_step=6130 global_step=6130 loss=18.3, 14.5% complete INFO:tensorflow:local_step=6140 global_step=6140 loss=18.8, 14.5% complete INFO:tensorflow:local_step=6150 global_step=6150 loss=318.2, 14.5% complete INFO:tensorflow:local_step=6160 global_step=6160 loss=18.1, 14.6% complete INFO:tensorflow:local_step=6170 global_step=6170 loss=18.7, 14.6% complete INFO:tensorflow:local_step=6180 global_step=6180 loss=18.3, 14.6% complete INFO:tensorflow:local_step=6190 
global_step=6190 loss=20.5, 14.6% complete INFO:tensorflow:local_step=6200 global_step=6200 loss=18.8, 14.7% complete INFO:tensorflow:local_step=6210 global_step=6210 loss=18.8, 14.7% complete INFO:tensorflow:local_step=6220 global_step=6220 loss=19.4, 14.7% complete INFO:tensorflow:local_step=6230 global_step=6230 loss=19.1, 14.7% complete INFO:tensorflow:local_step=6240 global_step=6240 loss=18.8, 14.7% complete INFO:tensorflow:local_step=6250 global_step=6250 loss=19.2, 14.8% complete INFO:tensorflow:local_step=6260 global_step=6260 loss=18.9, 14.8% complete INFO:tensorflow:local_step=6270 global_step=6270 loss=19.7, 14.8% complete INFO:tensorflow:local_step=6280 global_step=6280 loss=19.7, 14.8% complete INFO:tensorflow:local_step=6290 global_step=6290 loss=18.6, 14.9% complete INFO:tensorflow:local_step=6300 global_step=6300 loss=19.1, 14.9% complete INFO:tensorflow:local_step=6310 global_step=6310 loss=18.7, 14.9% complete INFO:tensorflow:local_step=6320 global_step=6320 loss=20.3, 14.9% complete INFO:tensorflow:local_step=6330 global_step=6330 loss=19.1, 15.0% complete INFO:tensorflow:local_step=6340 global_step=6340 loss=18.6, 15.0% complete INFO:tensorflow:local_step=6350 global_step=6350 loss=19.3, 15.0% complete INFO:tensorflow:local_step=6360 global_step=6360 loss=18.2, 15.0% complete INFO:tensorflow:local_step=6370 global_step=6370 loss=18.4, 15.1% complete INFO:tensorflow:local_step=6380 global_step=6380 loss=18.9, 15.1% complete INFO:tensorflow:local_step=6390 global_step=6390 loss=17.8, 15.1% complete INFO:tensorflow:local_step=6400 global_step=6400 loss=18.8, 15.1% complete INFO:tensorflow:local_step=6410 global_step=6410 loss=17.9, 15.1% complete INFO:tensorflow:local_step=6420 global_step=6420 loss=18.6, 15.2% complete INFO:tensorflow:local_step=6430 global_step=6430 loss=18.8, 15.2% complete INFO:tensorflow:local_step=6440 global_step=6440 loss=18.5, 15.2% complete INFO:tensorflow:local_step=6450 global_step=6450 loss=19.2, 15.2% complete 
INFO:tensorflow:local_step=6460 global_step=6460 loss=18.6, 15.3% complete INFO:tensorflow:local_step=6470 global_step=6470 loss=18.5, 15.3% complete INFO:tensorflow:local_step=6480 global_step=6480 loss=19.0, 15.3% complete INFO:tensorflow:local_step=6490 global_step=6490 loss=18.0, 15.3% complete INFO:tensorflow:local_step=6500 global_step=6500 loss=18.4, 15.4% complete INFO:tensorflow:local_step=6510 global_step=6510 loss=17.8, 15.4% complete INFO:tensorflow:local_step=6520 global_step=6520 loss=18.0, 15.4% complete INFO:tensorflow:local_step=6530 global_step=6530 loss=18.0, 15.4% complete INFO:tensorflow:local_step=6540 global_step=6540 loss=18.7, 15.5% complete INFO:tensorflow:local_step=6550 global_step=6550 loss=18.1, 15.5% complete INFO:tensorflow:local_step=6560 global_step=6560 loss=18.5, 15.5% complete INFO:tensorflow:local_step=6570 global_step=6570 loss=17.8, 15.5% complete INFO:tensorflow:local_step=6580 global_step=6580 loss=18.0, 15.5% complete INFO:tensorflow:local_step=6590 global_step=6590 loss=19.1, 15.6% complete INFO:tensorflow:local_step=6600 global_step=6600 loss=18.0, 15.6% complete INFO:tensorflow:local_step=6610 global_step=6610 loss=18.3, 15.6% complete INFO:tensorflow:local_step=6620 global_step=6620 loss=17.6, 15.6% complete INFO:tensorflow:local_step=6630 global_step=6630 loss=18.4, 15.7% complete INFO:tensorflow:local_step=6640 global_step=6640 loss=18.6, 15.7% complete INFO:tensorflow:local_step=6650 global_step=6650 loss=19.0, 15.7% complete INFO:tensorflow:local_step=6660 global_step=6660 loss=18.2, 15.7% complete INFO:tensorflow:local_step=6670 global_step=6670 loss=17.9, 15.8% complete INFO:tensorflow:local_step=6680 global_step=6680 loss=17.9, 15.8% complete INFO:tensorflow:local_step=6690 global_step=6690 loss=19.5, 15.8% complete INFO:tensorflow:local_step=6700 global_step=6700 loss=18.4, 15.8% complete INFO:tensorflow:local_step=6710 global_step=6710 loss=17.7, 15.9% complete INFO:tensorflow:local_step=6720 global_step=6720 
loss=19.2, 15.9% complete INFO:tensorflow:local_step=6730 global_step=6730 loss=18.8, 15.9% complete INFO:tensorflow:local_step=6740 global_step=6740 loss=16.3, 15.9% complete INFO:tensorflow:local_step=6750 global_step=6750 loss=18.0, 15.9% complete INFO:tensorflow:local_step=6760 global_step=6760 loss=18.1, 16.0% complete INFO:tensorflow:local_step=6770 global_step=6770 loss=18.9, 16.0% complete INFO:tensorflow:local_step=6780 global_step=6780 loss=17.9, 16.0% complete INFO:tensorflow:local_step=6790 global_step=6790 loss=17.8, 16.0% complete INFO:tensorflow:local_step=6800 global_step=6800 loss=254.3, 16.1% complete INFO:tensorflow:local_step=6810 global_step=6810 loss=19.0, 16.1% complete INFO:tensorflow:local_step=6820 global_step=6820 loss=17.6, 16.1% complete INFO:tensorflow:local_step=6830 global_step=6830 loss=19.2, 16.1% complete INFO:tensorflow:local_step=6840 global_step=6840 loss=18.3, 16.2% complete INFO:tensorflow:local_step=6850 global_step=6850 loss=18.2, 16.2% complete INFO:tensorflow:local_step=6860 global_step=6860 loss=17.9, 16.2% complete INFO:tensorflow:local_step=6870 global_step=6870 loss=18.7, 16.2% complete INFO:tensorflow:local_step=6880 global_step=6880 loss=18.8, 16.3% complete INFO:tensorflow:local_step=6890 global_step=6890 loss=18.0, 16.3% complete INFO:tensorflow:local_step=6900 global_step=6900 loss=18.3, 16.3% complete INFO:tensorflow:local_step=6910 global_step=6910 loss=18.1, 16.3% complete INFO:tensorflow:local_step=6920 global_step=6920 loss=18.0, 16.4% complete INFO:tensorflow:local_step=6930 global_step=6930 loss=21.6, 16.4% complete INFO:tensorflow:local_step=6940 global_step=6940 loss=17.4, 16.4% complete INFO:tensorflow:local_step=6950 global_step=6950 loss=15.0, 16.4% complete INFO:tensorflow:local_step=6960 global_step=6960 loss=18.0, 16.4% complete INFO:tensorflow:local_step=6970 global_step=6970 loss=21.1, 16.5% complete INFO:tensorflow:local_step=6980 global_step=6980 loss=18.5, 16.5% complete 
INFO:tensorflow:local_step=6990 global_step=6990 loss=18.0, 16.5% complete INFO:tensorflow:local_step=7000 global_step=7000 loss=18.5, 16.5% complete INFO:tensorflow:local_step=7010 global_step=7010 loss=18.0, 16.6% complete INFO:tensorflow:local_step=7020 global_step=7020 loss=18.3, 16.6% complete INFO:tensorflow:local_step=7030 global_step=7030 loss=18.9, 16.6% complete INFO:tensorflow:local_step=7040 global_step=7040 loss=19.0, 16.6% complete INFO:tensorflow:local_step=7050 global_step=7050 loss=19.0, 16.7% complete INFO:tensorflow:local_step=7060 global_step=7060 loss=20.2, 16.7% complete INFO:tensorflow:local_step=7070 global_step=7070 loss=17.7, 16.7% complete INFO:tensorflow:local_step=7080 global_step=7080 loss=18.0, 16.7% complete INFO:tensorflow:local_step=7090 global_step=7090 loss=19.1, 16.8% complete INFO:tensorflow:local_step=7100 global_step=7100 loss=19.0, 16.8% complete INFO:tensorflow:local_step=7110 global_step=7110 loss=18.8, 16.8% complete INFO:tensorflow:local_step=7120 global_step=7120 loss=18.8, 16.8% complete INFO:tensorflow:local_step=7130 global_step=7130 loss=18.4, 16.8% complete INFO:tensorflow:local_step=7140 global_step=7140 loss=18.6, 16.9% complete INFO:tensorflow:local_step=7150 global_step=7150 loss=17.5, 16.9% complete INFO:tensorflow:local_step=7160 global_step=7160 loss=19.0, 16.9% complete INFO:tensorflow:local_step=7170 global_step=7170 loss=17.5, 16.9% complete INFO:tensorflow:local_step=7180 global_step=7180 loss=20.1, 17.0% complete INFO:tensorflow:local_step=7190 global_step=7190 loss=19.4, 17.0% complete INFO:tensorflow:local_step=7200 global_step=7200 loss=373.0, 17.0% complete INFO:tensorflow:local_step=7210 global_step=7210 loss=19.3, 17.0% complete INFO:tensorflow:local_step=7220 global_step=7220 loss=18.1, 17.1% complete INFO:tensorflow:local_step=7230 global_step=7230 loss=19.4, 17.1% complete INFO:tensorflow:local_step=7240 global_step=7240 loss=18.6, 17.1% complete INFO:tensorflow:local_step=7250 global_step=7250 
loss=18.7, 17.1% complete INFO:tensorflow:local_step=7260 global_step=7260 loss=17.7, 17.2% complete INFO:tensorflow:local_step=7270 global_step=7270 loss=19.4, 17.2% complete INFO:tensorflow:local_step=7280 global_step=7280 loss=18.2, 17.2% complete INFO:tensorflow:local_step=7290 global_step=7290 loss=18.4, 17.2% complete INFO:tensorflow:local_step=7300 global_step=7300 loss=19.7, 17.2% complete INFO:tensorflow:local_step=7310 global_step=7310 loss=19.5, 17.3% complete INFO:tensorflow:local_step=7320 global_step=7320 loss=18.4, 17.3% complete INFO:tensorflow:local_step=7330 global_step=7330 loss=18.8, 17.3% complete INFO:tensorflow:local_step=7340 global_step=7340 loss=17.9, 17.3% complete INFO:tensorflow:local_step=7350 global_step=7350 loss=18.5, 17.4% complete INFO:tensorflow:local_step=7360 global_step=7360 loss=19.0, 17.4% complete INFO:tensorflow:local_step=7370 global_step=7370 loss=18.6, 17.4% complete INFO:tensorflow:local_step=7380 global_step=7380 loss=17.8, 17.4% complete INFO:tensorflow:local_step=7390 global_step=7390 loss=19.2, 17.5% complete INFO:tensorflow:Recording summary at step 7393. 
INFO:tensorflow:global_step/sec: 123.316 INFO:tensorflow:local_step=7400 global_step=7400 loss=19.7, 17.5% complete INFO:tensorflow:local_step=7410 global_step=7410 loss=18.2, 17.5% complete INFO:tensorflow:local_step=7420 global_step=7420 loss=18.7, 17.5% complete INFO:tensorflow:local_step=7430 global_step=7430 loss=17.7, 17.6% complete INFO:tensorflow:local_step=7440 global_step=7440 loss=19.8, 17.6% complete INFO:tensorflow:local_step=7450 global_step=7450 loss=18.0, 17.6% complete INFO:tensorflow:local_step=7460 global_step=7460 loss=18.7, 17.6% complete INFO:tensorflow:local_step=7470 global_step=7470 loss=17.8, 17.7% complete INFO:tensorflow:local_step=7480 global_step=7480 loss=17.9, 17.7% complete INFO:tensorflow:local_step=7490 global_step=7490 loss=18.7, 17.7% complete INFO:tensorflow:local_step=7500 global_step=7500 loss=19.1, 17.7% complete INFO:tensorflow:local_step=7510 global_step=7510 loss=19.2, 17.7% complete INFO:tensorflow:local_step=7520 global_step=7520 loss=18.0, 17.8% complete INFO:tensorflow:local_step=7530 global_step=7530 loss=17.9, 17.8% complete INFO:tensorflow:local_step=7540 global_step=7540 loss=17.6, 17.8% complete INFO:tensorflow:local_step=7550 global_step=7550 loss=19.3, 17.8% complete INFO:tensorflow:local_step=7560 global_step=7560 loss=18.1, 17.9% complete INFO:tensorflow:local_step=7570 global_step=7570 loss=18.6, 17.9% complete INFO:tensorflow:local_step=7580 global_step=7580 loss=18.9, 17.9% complete INFO:tensorflow:local_step=7590 global_step=7590 loss=19.3, 17.9% complete INFO:tensorflow:local_step=7600 global_step=7600 loss=19.1, 18.0% complete INFO:tensorflow:local_step=7610 global_step=7610 loss=17.7, 18.0% complete INFO:tensorflow:local_step=7620 global_step=7620 loss=19.2, 18.0% complete INFO:tensorflow:local_step=7630 global_step=7630 loss=19.6, 18.0% complete INFO:tensorflow:local_step=7640 global_step=7640 loss=18.5, 18.1% complete INFO:tensorflow:local_step=7650 global_step=7650 loss=18.7, 18.1% complete 
INFO:tensorflow:local_step=7660 global_step=7660 loss=19.0, 18.1% complete INFO:tensorflow:local_step=7670 global_step=7670 loss=19.5, 18.1% complete INFO:tensorflow:local_step=7680 global_step=7680 loss=14.4, 18.1% complete INFO:tensorflow:local_step=7690 global_step=7690 loss=20.0, 18.2% complete INFO:tensorflow:local_step=7700 global_step=7700 loss=18.7, 18.2% complete INFO:tensorflow:local_step=7710 global_step=7710 loss=18.7, 18.2% complete INFO:tensorflow:local_step=7720 global_step=7720 loss=19.5, 18.2% complete INFO:tensorflow:local_step=7730 global_step=7730 loss=19.0, 18.3% complete INFO:tensorflow:local_step=7740 global_step=7740 loss=16.4, 18.3% complete INFO:tensorflow:local_step=7750 global_step=7750 loss=18.7, 18.3% complete INFO:tensorflow:local_step=7760 global_step=7760 loss=20.1, 18.3% complete INFO:tensorflow:local_step=7770 global_step=7770 loss=18.6, 18.4% complete INFO:tensorflow:local_step=7780 global_step=7780 loss=18.0, 18.4% complete INFO:tensorflow:local_step=7790 global_step=7790 loss=19.0, 18.4% complete INFO:tensorflow:local_step=7800 global_step=7800 loss=17.4, 18.4% complete INFO:tensorflow:local_step=7810 global_step=7810 loss=18.9, 18.5% complete INFO:tensorflow:local_step=7820 global_step=7820 loss=18.1, 18.5% complete INFO:tensorflow:local_step=7830 global_step=7830 loss=19.1, 18.5% complete INFO:tensorflow:local_step=7840 global_step=7840 loss=18.2, 18.5% complete INFO:tensorflow:local_step=7850 global_step=7850 loss=19.8, 18.5% complete INFO:tensorflow:local_step=7860 global_step=7860 loss=17.9, 18.6% complete INFO:tensorflow:local_step=7870 global_step=7870 loss=19.0, 18.6% complete INFO:tensorflow:local_step=7880 global_step=7880 loss=18.8, 18.6% complete INFO:tensorflow:local_step=7890 global_step=7890 loss=20.0, 18.6% complete INFO:tensorflow:local_step=7900 global_step=7900 loss=19.2, 18.7% complete INFO:tensorflow:local_step=7910 global_step=7910 loss=18.9, 18.7% complete INFO:tensorflow:local_step=7920 global_step=7920 
loss=19.3, 18.7% complete INFO:tensorflow:local_step=7930 global_step=7930 loss=19.0, 18.7% complete INFO:tensorflow:local_step=7940 global_step=7940 loss=18.1, 18.8% complete INFO:tensorflow:local_step=7950 global_step=7950 loss=20.0, 18.8% complete INFO:tensorflow:local_step=7960 global_step=7960 loss=18.9, 18.8% complete INFO:tensorflow:local_step=7970 global_step=7970 loss=19.4, 18.8% complete INFO:tensorflow:local_step=7980 global_step=7980 loss=19.1, 18.9% complete INFO:tensorflow:local_step=7990 global_step=7990 loss=18.7, 18.9% complete INFO:tensorflow:local_step=8000 global_step=8000 loss=18.5, 18.9% complete INFO:tensorflow:local_step=8010 global_step=8010 loss=18.1, 18.9% complete INFO:tensorflow:local_step=8020 global_step=8020 loss=18.9, 19.0% complete INFO:tensorflow:local_step=8030 global_step=8030 loss=18.3, 19.0% complete INFO:tensorflow:local_step=8040 global_step=8040 loss=19.2, 19.0% complete INFO:tensorflow:local_step=8050 global_step=8050 loss=19.4, 19.0% complete INFO:tensorflow:local_step=8060 global_step=8060 loss=19.2, 19.0% complete INFO:tensorflow:local_step=8070 global_step=8070 loss=18.4, 19.1% complete INFO:tensorflow:local_step=8080 global_step=8080 loss=20.4, 19.1% complete INFO:tensorflow:local_step=8090 global_step=8090 loss=18.2, 19.1% complete INFO:tensorflow:local_step=8100 global_step=8100 loss=18.2, 19.1% complete INFO:tensorflow:local_step=8110 global_step=8110 loss=19.8, 19.2% complete INFO:tensorflow:local_step=8120 global_step=8120 loss=18.1, 19.2% complete INFO:tensorflow:local_step=8130 global_step=8130 loss=18.9, 19.2% complete INFO:tensorflow:local_step=8140 global_step=8140 loss=18.5, 19.2% complete INFO:tensorflow:local_step=8150 global_step=8150 loss=19.5, 19.3% complete INFO:tensorflow:local_step=8160 global_step=8160 loss=19.2, 19.3% complete INFO:tensorflow:local_step=8170 global_step=8170 loss=19.5, 19.3% complete INFO:tensorflow:local_step=8180 global_step=8180 loss=19.6, 19.3% complete 
INFO:tensorflow:local_step=8190 global_step=8190 loss=18.8, 19.4% complete INFO:tensorflow:local_step=8200 global_step=8200 loss=18.7, 19.4% complete INFO:tensorflow:local_step=8210 global_step=8210 loss=18.5, 19.4% complete INFO:tensorflow:local_step=8220 global_step=8220 loss=18.0, 19.4% complete INFO:tensorflow:local_step=8230 global_step=8230 loss=17.7, 19.4% complete INFO:tensorflow:local_step=8240 global_step=8240 loss=18.5, 19.5% complete INFO:tensorflow:local_step=8250 global_step=8250 loss=17.7, 19.5% complete INFO:tensorflow:local_step=8260 global_step=8260 loss=18.4, 19.5% complete INFO:tensorflow:local_step=8270 global_step=8270 loss=18.7, 19.5% complete INFO:tensorflow:local_step=8280 global_step=8280 loss=19.9, 19.6% complete INFO:tensorflow:local_step=8290 global_step=8290 loss=18.7, 19.6% complete INFO:tensorflow:local_step=8300 global_step=8300 loss=18.9, 19.6% complete INFO:tensorflow:local_step=8310 global_step=8310 loss=19.0, 19.6% complete INFO:tensorflow:local_step=8320 global_step=8320 loss=18.9, 19.7% complete INFO:tensorflow:local_step=8330 global_step=8330 loss=18.1, 19.7% complete INFO:tensorflow:local_step=8340 global_step=8340 loss=18.0, 19.7% complete INFO:tensorflow:local_step=8350 global_step=8350 loss=15.6, 19.7% complete INFO:tensorflow:local_step=8360 global_step=8360 loss=19.5, 19.8% complete INFO:tensorflow:local_step=8370 global_step=8370 loss=20.3, 19.8% complete INFO:tensorflow:local_step=8380 global_step=8380 loss=18.2, 19.8% complete INFO:tensorflow:local_step=8390 global_step=8390 loss=18.4, 19.8% complete INFO:tensorflow:local_step=8400 global_step=8400 loss=19.4, 19.8% complete INFO:tensorflow:local_step=8410 global_step=8410 loss=19.4, 19.9% complete INFO:tensorflow:local_step=8420 global_step=8420 loss=17.6, 19.9% complete INFO:tensorflow:local_step=8430 global_step=8430 loss=19.0, 19.9% complete INFO:tensorflow:local_step=8440 global_step=8440 loss=19.2, 19.9% complete INFO:tensorflow:local_step=8450 global_step=8450 
loss=298.8, 20.0% complete INFO:tensorflow:local_step=8460 global_step=8460 loss=18.6, 20.0% complete INFO:tensorflow:local_step=8470 global_step=8470 loss=18.1, 20.0% complete INFO:tensorflow:local_step=8480 global_step=8480 loss=18.1, 20.0% complete INFO:tensorflow:local_step=8490 global_step=8490 loss=13.3, 20.1% complete INFO:tensorflow:local_step=8500 global_step=8500 loss=17.9, 20.1% complete INFO:tensorflow:local_step=8510 global_step=8510 loss=21.1, 20.1% complete INFO:tensorflow:local_step=8520 global_step=8520 loss=18.5, 20.1% complete INFO:tensorflow:local_step=8530 global_step=8530 loss=18.0, 20.2% complete INFO:tensorflow:local_step=8540 global_step=8540 loss=17.9, 20.2% complete INFO:tensorflow:local_step=8550 global_step=8550 loss=18.0, 20.2% complete INFO:tensorflow:local_step=8560 global_step=8560 loss=15.3, 20.2% complete INFO:tensorflow:local_step=8570 global_step=8570 loss=18.8, 20.3% complete INFO:tensorflow:local_step=8580 global_step=8580 loss=17.9, 20.3% complete INFO:tensorflow:local_step=8590 global_step=8590 loss=17.8, 20.3% complete INFO:tensorflow:local_step=8600 global_step=8600 loss=18.4, 20.3% complete INFO:tensorflow:local_step=8610 global_step=8610 loss=18.7, 20.3% complete INFO:tensorflow:local_step=8620 global_step=8620 loss=18.6, 20.4% complete INFO:tensorflow:local_step=8630 global_step=8630 loss=18.5, 20.4% complete INFO:tensorflow:local_step=8640 global_step=8640 loss=14.9, 20.4% complete INFO:tensorflow:local_step=8650 global_step=8650 loss=21.3, 20.4% complete INFO:tensorflow:local_step=8660 global_step=8660 loss=18.4, 20.5% complete INFO:tensorflow:local_step=8670 global_step=8670 loss=18.2, 20.5% complete INFO:tensorflow:local_step=8680 global_step=8680 loss=18.8, 20.5% complete INFO:tensorflow:local_step=8690 global_step=8690 loss=18.3, 20.5% complete INFO:tensorflow:local_step=8700 global_step=8700 loss=16.9, 20.6% complete INFO:tensorflow:local_step=8710 global_step=8710 loss=14.8, 20.6% complete 
INFO:tensorflow:local_step=8720 global_step=8720 loss=18.4, 20.6% complete INFO:tensorflow:local_step=8730 global_step=8730 loss=19.4, 20.6% complete INFO:tensorflow:local_step=8740 global_step=8740 loss=17.9, 20.7% complete INFO:tensorflow:local_step=8750 global_step=8750 loss=18.5, 20.7% complete INFO:tensorflow:local_step=8760 global_step=8760 loss=19.2, 20.7% complete INFO:tensorflow:local_step=8770 global_step=8770 loss=18.4, 20.7% complete INFO:tensorflow:local_step=8780 global_step=8780 loss=19.2, 20.7% complete INFO:tensorflow:local_step=8790 global_step=8790 loss=19.7, 20.8% complete INFO:tensorflow:local_step=8800 global_step=8800 loss=19.2, 20.8% complete INFO:tensorflow:local_step=8810 global_step=8810 loss=18.3, 20.8% complete INFO:tensorflow:local_step=8820 global_step=8820 loss=18.2, 20.8% complete INFO:tensorflow:local_step=8830 global_step=8830 loss=18.0, 20.9% complete INFO:tensorflow:local_step=8840 global_step=8840 loss=18.3, 20.9% complete INFO:tensorflow:local_step=8850 global_step=8850 loss=18.5, 20.9% complete INFO:tensorflow:local_step=8860 global_step=8860 loss=17.9, 20.9% complete INFO:tensorflow:local_step=8870 global_step=8870 loss=18.4, 21.0% complete INFO:tensorflow:local_step=8880 global_step=8880 loss=17.8, 21.0% complete INFO:tensorflow:local_step=8890 global_step=8890 loss=18.4, 21.0% complete INFO:tensorflow:local_step=8900 global_step=8900 loss=18.1, 21.0% complete INFO:tensorflow:local_step=8910 global_step=8910 loss=20.0, 21.1% complete INFO:tensorflow:local_step=8920 global_step=8920 loss=18.4, 21.1% complete INFO:tensorflow:local_step=8930 global_step=8930 loss=18.0, 21.1% complete INFO:tensorflow:local_step=8940 global_step=8940 loss=18.3, 21.1% complete INFO:tensorflow:local_step=8950 global_step=8950 loss=17.8, 21.1% complete INFO:tensorflow:local_step=8960 global_step=8960 loss=18.2, 21.2% complete INFO:tensorflow:local_step=8970 global_step=8970 loss=18.2, 21.2% complete INFO:tensorflow:local_step=8980 global_step=8980 
loss=18.1, 21.2% complete INFO:tensorflow:local_step=8990 global_step=8990 loss=19.0, 21.2% complete INFO:tensorflow:local_step=9000 global_step=9000 loss=18.2, 21.3% complete INFO:tensorflow:local_step=9010 global_step=9010 loss=19.3, 21.3% complete INFO:tensorflow:local_step=9020 global_step=9020 loss=18.4, 21.3% complete INFO:tensorflow:local_step=9030 global_step=9030 loss=18.0, 21.3% complete INFO:tensorflow:local_step=9040 global_step=9040 loss=17.7, 21.4% complete INFO:tensorflow:local_step=9050 global_step=9050 loss=18.6, 21.4% complete INFO:tensorflow:local_step=9060 global_step=9060 loss=19.8, 21.4% complete INFO:tensorflow:local_step=9070 global_step=9070 loss=18.7, 21.4% complete INFO:tensorflow:local_step=9080 global_step=9080 loss=18.6, 21.5% complete INFO:tensorflow:local_step=9090 global_step=9090 loss=19.9, 21.5% complete INFO:tensorflow:local_step=9100 global_step=9100 loss=18.4, 21.5% complete INFO:tensorflow:local_step=9110 global_step=9110 loss=17.8, 21.5% complete INFO:tensorflow:local_step=9120 global_step=9120 loss=18.3, 21.6% complete INFO:tensorflow:local_step=9130 global_step=9130 loss=18.1, 21.6% complete INFO:tensorflow:local_step=9140 global_step=9140 loss=18.1, 21.6% complete INFO:tensorflow:local_step=9150 global_step=9150 loss=19.4, 21.6% complete INFO:tensorflow:local_step=9160 global_step=9160 loss=19.0, 21.6% complete INFO:tensorflow:local_step=9170 global_step=9170 loss=14.3, 21.7% complete INFO:tensorflow:local_step=9180 global_step=9180 loss=19.3, 21.7% complete INFO:tensorflow:local_step=9190 global_step=9190 loss=18.5, 21.7% complete INFO:tensorflow:local_step=9200 global_step=9200 loss=18.5, 21.7% complete INFO:tensorflow:local_step=9210 global_step=9210 loss=18.2, 21.8% complete INFO:tensorflow:local_step=9220 global_step=9220 loss=18.3, 21.8% complete INFO:tensorflow:local_step=9230 global_step=9230 loss=20.1, 21.8% complete INFO:tensorflow:local_step=9240 global_step=9240 loss=19.0, 21.8% complete 
INFO:tensorflow:local_step=9250 global_step=9250 loss=18.6, 21.9% complete
INFO:tensorflow:local_step=9260 global_step=9260 loss=19.0, 21.9% complete
INFO:tensorflow:local_step=9270 global_step=9270 loss=18.6, 21.9% complete
...
INFO:tensorflow:local_step=14660 global_step=14660 loss=18.2, 34.6% complete
INFO:tensorflow:local_step=14670 global_step=14670 loss=15.4, 34.7% complete
INFO:tensorflow:local_step=14680 global_step=14680 loss=19.2,
34.7% complete INFO:tensorflow:local_step=14690 global_step=14690 loss=18.8, 34.7% complete INFO:tensorflow:local_step=14700 global_step=14700 loss=18.5, 34.7% complete INFO:tensorflow:local_step=14710 global_step=14710 loss=18.8, 34.8% complete INFO:tensorflow:local_step=14720 global_step=14720 loss=18.3, 34.8% complete INFO:tensorflow:local_step=14730 global_step=14730 loss=13.5, 34.8% complete INFO:tensorflow:local_step=14740 global_step=14740 loss=302.7, 34.8% complete INFO:tensorflow:local_step=14750 global_step=14750 loss=18.6, 34.9% complete INFO:tensorflow:local_step=14760 global_step=14760 loss=18.4, 34.9% complete INFO:tensorflow:local_step=14770 global_step=14770 loss=18.6, 34.9% complete INFO:tensorflow:local_step=14780 global_step=14780 loss=18.6, 34.9% complete INFO:tensorflow:local_step=14790 global_step=14790 loss=19.5, 34.9% complete INFO:tensorflow:local_step=14800 global_step=14800 loss=18.0, 35.0% complete INFO:tensorflow:local_step=14810 global_step=14810 loss=17.9, 35.0% complete INFO:tensorflow:local_step=14820 global_step=14820 loss=18.4, 35.0% complete INFO:tensorflow:local_step=14830 global_step=14830 loss=17.3, 35.0% complete INFO:tensorflow:local_step=14840 global_step=14840 loss=18.7, 35.1% complete INFO:tensorflow:local_step=14850 global_step=14850 loss=17.8, 35.1% complete INFO:tensorflow:local_step=14860 global_step=14860 loss=17.9, 35.1% complete INFO:tensorflow:local_step=14870 global_step=14870 loss=18.0, 35.1% complete INFO:tensorflow:local_step=14880 global_step=14880 loss=18.8, 35.2% complete INFO:tensorflow:local_step=14890 global_step=14890 loss=18.7, 35.2% complete INFO:tensorflow:local_step=14900 global_step=14900 loss=18.2, 35.2% complete INFO:tensorflow:local_step=14910 global_step=14910 loss=18.4, 35.2% complete INFO:tensorflow:local_step=14920 global_step=14920 loss=17.9, 35.3% complete INFO:tensorflow:local_step=14930 global_step=14930 loss=18.2, 35.3% complete INFO:tensorflow:local_step=14940 global_step=14940 
loss=18.7, 35.3% complete INFO:tensorflow:local_step=14950 global_step=14950 loss=18.4, 35.3% complete INFO:tensorflow:Recording summary at step 14956. INFO:tensorflow:global_step/sec: 126.049 INFO:tensorflow:local_step=14960 global_step=14960 loss=18.0, 35.3% complete INFO:tensorflow:local_step=14970 global_step=14970 loss=19.5, 35.4% complete INFO:tensorflow:local_step=14980 global_step=14980 loss=18.6, 35.4% complete INFO:tensorflow:local_step=14990 global_step=14990 loss=17.9, 35.4% complete INFO:tensorflow:local_step=15000 global_step=15000 loss=18.5, 35.4% complete INFO:tensorflow:local_step=15010 global_step=15010 loss=17.1, 35.5% complete INFO:tensorflow:local_step=15020 global_step=15020 loss=18.2, 35.5% complete INFO:tensorflow:local_step=15030 global_step=15030 loss=22.4, 35.5% complete INFO:tensorflow:local_step=15040 global_step=15040 loss=18.4, 35.5% complete INFO:tensorflow:local_step=15050 global_step=15050 loss=18.1, 35.6% complete INFO:tensorflow:local_step=15060 global_step=15060 loss=18.0, 35.6% complete INFO:tensorflow:local_step=15070 global_step=15070 loss=18.4, 35.6% complete INFO:tensorflow:local_step=15080 global_step=15080 loss=18.2, 35.6% complete INFO:tensorflow:local_step=15090 global_step=15090 loss=19.4, 35.7% complete INFO:tensorflow:local_step=15100 global_step=15100 loss=17.9, 35.7% complete INFO:tensorflow:local_step=15110 global_step=15110 loss=14.6, 35.7% complete INFO:tensorflow:local_step=15120 global_step=15120 loss=18.2, 35.7% complete INFO:tensorflow:local_step=15130 global_step=15130 loss=19.4, 35.8% complete INFO:tensorflow:local_step=15140 global_step=15140 loss=17.6, 35.8% complete INFO:tensorflow:local_step=15150 global_step=15150 loss=17.5, 35.8% complete INFO:tensorflow:local_step=15160 global_step=15160 loss=17.9, 35.8% complete INFO:tensorflow:local_step=15170 global_step=15170 loss=17.5, 35.8% complete INFO:tensorflow:local_step=15180 global_step=15180 loss=18.3, 35.9% complete INFO:tensorflow:local_step=15190 
global_step=15190 loss=18.3, 35.9% complete INFO:tensorflow:local_step=15200 global_step=15200 loss=18.3, 35.9% complete INFO:tensorflow:local_step=15210 global_step=15210 loss=17.4, 35.9% complete INFO:tensorflow:local_step=15220 global_step=15220 loss=18.0, 36.0% complete INFO:tensorflow:local_step=15230 global_step=15230 loss=19.1, 36.0% complete INFO:tensorflow:local_step=15240 global_step=15240 loss=19.6, 36.0% complete INFO:tensorflow:local_step=15250 global_step=15250 loss=19.3, 36.0% complete INFO:tensorflow:local_step=15260 global_step=15260 loss=17.5, 36.1% complete INFO:tensorflow:local_step=15270 global_step=15270 loss=18.1, 36.1% complete INFO:tensorflow:local_step=15280 global_step=15280 loss=167.1, 36.1% complete INFO:tensorflow:local_step=15290 global_step=15290 loss=18.6, 36.1% complete INFO:tensorflow:local_step=15300 global_step=15300 loss=18.1, 36.2% complete INFO:tensorflow:local_step=15310 global_step=15310 loss=18.6, 36.2% complete INFO:tensorflow:local_step=15320 global_step=15320 loss=20.6, 36.2% complete INFO:tensorflow:local_step=15330 global_step=15330 loss=16.1, 36.2% complete INFO:tensorflow:local_step=15340 global_step=15340 loss=18.0, 36.2% complete INFO:tensorflow:local_step=15350 global_step=15350 loss=18.4, 36.3% complete INFO:tensorflow:local_step=15360 global_step=15360 loss=19.8, 36.3% complete INFO:tensorflow:local_step=15370 global_step=15370 loss=17.7, 36.3% complete INFO:tensorflow:local_step=15380 global_step=15380 loss=18.8, 36.3% complete INFO:tensorflow:local_step=15390 global_step=15390 loss=18.9, 36.4% complete INFO:tensorflow:local_step=15400 global_step=15400 loss=20.0, 36.4% complete INFO:tensorflow:local_step=15410 global_step=15410 loss=18.7, 36.4% complete INFO:tensorflow:local_step=15420 global_step=15420 loss=18.1, 36.4% complete INFO:tensorflow:local_step=15430 global_step=15430 loss=18.9, 36.5% complete INFO:tensorflow:local_step=15440 global_step=15440 loss=21.6, 36.5% complete 
INFO:tensorflow:local_step=15450 global_step=15450 loss=17.9, 36.5% complete INFO:tensorflow:local_step=15460 global_step=15460 loss=14.3, 36.5% complete INFO:tensorflow:local_step=15470 global_step=15470 loss=18.5, 36.6% complete INFO:tensorflow:local_step=15480 global_step=15480 loss=20.4, 36.6% complete INFO:tensorflow:local_step=15490 global_step=15490 loss=18.9, 36.6% complete INFO:tensorflow:local_step=15500 global_step=15500 loss=19.0, 36.6% complete INFO:tensorflow:local_step=15510 global_step=15510 loss=19.0, 36.6% complete INFO:tensorflow:local_step=15520 global_step=15520 loss=18.9, 36.7% complete INFO:tensorflow:local_step=15530 global_step=15530 loss=18.9, 36.7% complete INFO:tensorflow:local_step=15540 global_step=15540 loss=19.1, 36.7% complete INFO:tensorflow:local_step=15550 global_step=15550 loss=18.5, 36.7% complete INFO:tensorflow:local_step=15560 global_step=15560 loss=18.5, 36.8% complete INFO:tensorflow:local_step=15570 global_step=15570 loss=18.1, 36.8% complete INFO:tensorflow:local_step=15580 global_step=15580 loss=18.5, 36.8% complete INFO:tensorflow:local_step=15590 global_step=15590 loss=18.4, 36.8% complete INFO:tensorflow:local_step=15600 global_step=15600 loss=18.9, 36.9% complete INFO:tensorflow:local_step=15610 global_step=15610 loss=18.5, 36.9% complete INFO:tensorflow:local_step=15620 global_step=15620 loss=16.6, 36.9% complete INFO:tensorflow:local_step=15630 global_step=15630 loss=18.4, 36.9% complete INFO:tensorflow:local_step=15640 global_step=15640 loss=18.4, 37.0% complete INFO:tensorflow:local_step=15650 global_step=15650 loss=16.9, 37.0% complete INFO:tensorflow:local_step=15660 global_step=15660 loss=19.8, 37.0% complete INFO:tensorflow:local_step=15670 global_step=15670 loss=17.5, 37.0% complete INFO:tensorflow:local_step=15680 global_step=15680 loss=17.6, 37.1% complete INFO:tensorflow:local_step=15690 global_step=15690 loss=18.9, 37.1% complete INFO:tensorflow:local_step=15700 global_step=15700 loss=17.7, 37.1% 
complete INFO:tensorflow:local_step=15710 global_step=15710 loss=18.0, 37.1% complete INFO:tensorflow:local_step=15720 global_step=15720 loss=18.3, 37.1% complete INFO:tensorflow:local_step=15730 global_step=15730 loss=18.2, 37.2% complete INFO:tensorflow:local_step=15740 global_step=15740 loss=18.5, 37.2% complete INFO:tensorflow:local_step=15750 global_step=15750 loss=18.5, 37.2% complete INFO:tensorflow:local_step=15760 global_step=15760 loss=17.9, 37.2% complete INFO:tensorflow:local_step=15770 global_step=15770 loss=17.3, 37.3% complete INFO:tensorflow:local_step=15780 global_step=15780 loss=17.9, 37.3% complete INFO:tensorflow:local_step=15790 global_step=15790 loss=18.2, 37.3% complete INFO:tensorflow:local_step=15800 global_step=15800 loss=17.6, 37.3% complete INFO:tensorflow:local_step=15810 global_step=15810 loss=19.0, 37.4% complete INFO:tensorflow:local_step=15820 global_step=15820 loss=18.3, 37.4% complete INFO:tensorflow:local_step=15830 global_step=15830 loss=18.3, 37.4% complete INFO:tensorflow:local_step=15840 global_step=15840 loss=18.5, 37.4% complete INFO:tensorflow:local_step=15850 global_step=15850 loss=19.5, 37.5% complete INFO:tensorflow:local_step=15860 global_step=15860 loss=19.0, 37.5% complete INFO:tensorflow:local_step=15870 global_step=15870 loss=18.1, 37.5% complete INFO:tensorflow:local_step=15880 global_step=15880 loss=15.8, 37.5% complete INFO:tensorflow:local_step=15890 global_step=15890 loss=19.3, 37.5% complete INFO:tensorflow:local_step=15900 global_step=15900 loss=18.4, 37.6% complete INFO:tensorflow:local_step=15910 global_step=15910 loss=15.6, 37.6% complete INFO:tensorflow:local_step=15920 global_step=15920 loss=18.5, 37.6% complete INFO:tensorflow:local_step=15930 global_step=15930 loss=17.6, 37.6% complete INFO:tensorflow:local_step=15940 global_step=15940 loss=17.8, 37.7% complete INFO:tensorflow:local_step=15950 global_step=15950 loss=17.7, 37.7% complete INFO:tensorflow:local_step=15960 global_step=15960 loss=18.2, 
37.7% complete INFO:tensorflow:local_step=15970 global_step=15970 loss=18.3, 37.7% complete INFO:tensorflow:local_step=15980 global_step=15980 loss=18.6, 37.8% complete INFO:tensorflow:local_step=15990 global_step=15990 loss=16.2, 37.8% complete INFO:tensorflow:local_step=16000 global_step=16000 loss=18.3, 37.8% complete INFO:tensorflow:local_step=16010 global_step=16010 loss=19.5, 37.8% complete INFO:tensorflow:local_step=16020 global_step=16020 loss=18.9, 37.9% complete INFO:tensorflow:local_step=16030 global_step=16030 loss=18.0, 37.9% complete INFO:tensorflow:local_step=16040 global_step=16040 loss=18.0, 37.9% complete INFO:tensorflow:local_step=16050 global_step=16050 loss=18.5, 37.9% complete INFO:tensorflow:local_step=16060 global_step=16060 loss=18.0, 37.9% complete INFO:tensorflow:local_step=16070 global_step=16070 loss=19.2, 38.0% complete INFO:tensorflow:local_step=16080 global_step=16080 loss=19.1, 38.0% complete INFO:tensorflow:local_step=16090 global_step=16090 loss=18.1, 38.0% complete INFO:tensorflow:local_step=16100 global_step=16100 loss=18.4, 38.0% complete INFO:tensorflow:local_step=16110 global_step=16110 loss=17.8, 38.1% complete INFO:tensorflow:local_step=16120 global_step=16120 loss=19.2, 38.1% complete INFO:tensorflow:local_step=16130 global_step=16130 loss=315.4, 38.1% complete INFO:tensorflow:local_step=16140 global_step=16140 loss=18.9, 38.1% complete INFO:tensorflow:local_step=16150 global_step=16150 loss=22.1, 38.2% complete INFO:tensorflow:local_step=16160 global_step=16160 loss=17.4, 38.2% complete INFO:tensorflow:local_step=16170 global_step=16170 loss=18.9, 38.2% complete INFO:tensorflow:local_step=16180 global_step=16180 loss=19.2, 38.2% complete INFO:tensorflow:local_step=16190 global_step=16190 loss=19.8, 38.3% complete INFO:tensorflow:local_step=16200 global_step=16200 loss=18.0, 38.3% complete INFO:tensorflow:local_step=16210 global_step=16210 loss=17.9, 38.3% complete INFO:tensorflow:local_step=16220 global_step=16220 
loss=18.3, 38.3% complete INFO:tensorflow:local_step=16230 global_step=16230 loss=17.9, 38.4% complete INFO:tensorflow:local_step=16240 global_step=16240 loss=290.6, 38.4% complete INFO:tensorflow:local_step=16250 global_step=16250 loss=18.9, 38.4% complete INFO:tensorflow:local_step=16260 global_step=16260 loss=18.8, 38.4% complete INFO:tensorflow:local_step=16270 global_step=16270 loss=19.2, 38.4% complete INFO:tensorflow:local_step=16280 global_step=16280 loss=18.5, 38.5% complete INFO:tensorflow:local_step=16290 global_step=16290 loss=19.4, 38.5% complete INFO:tensorflow:local_step=16300 global_step=16300 loss=21.2, 38.5% complete INFO:tensorflow:local_step=16310 global_step=16310 loss=17.9, 38.5% complete INFO:tensorflow:local_step=16320 global_step=16320 loss=18.3, 38.6% complete INFO:tensorflow:local_step=16330 global_step=16330 loss=18.5, 38.6% complete INFO:tensorflow:local_step=16340 global_step=16340 loss=20.2, 38.6% complete INFO:tensorflow:local_step=16350 global_step=16350 loss=289.9, 38.6% complete INFO:tensorflow:local_step=16360 global_step=16360 loss=18.3, 38.7% complete INFO:tensorflow:local_step=16370 global_step=16370 loss=18.1, 38.7% complete INFO:tensorflow:local_step=16380 global_step=16380 loss=18.0, 38.7% complete INFO:tensorflow:local_step=16390 global_step=16390 loss=21.6, 38.7% complete INFO:tensorflow:local_step=16400 global_step=16400 loss=16.2, 38.8% complete INFO:tensorflow:local_step=16410 global_step=16410 loss=18.4, 38.8% complete INFO:tensorflow:local_step=16420 global_step=16420 loss=16.3, 38.8% complete INFO:tensorflow:local_step=16430 global_step=16430 loss=18.3, 38.8% complete INFO:tensorflow:local_step=16440 global_step=16440 loss=17.7, 38.8% complete INFO:tensorflow:local_step=16450 global_step=16450 loss=17.7, 38.9% complete INFO:tensorflow:local_step=16460 global_step=16460 loss=18.1, 38.9% complete INFO:tensorflow:local_step=16470 global_step=16470 loss=19.6, 38.9% complete INFO:tensorflow:local_step=16480 
global_step=16480 loss=18.5, 38.9% complete INFO:tensorflow:local_step=16490 global_step=16490 loss=17.9, 39.0% complete INFO:tensorflow:local_step=16500 global_step=16500 loss=18.8, 39.0% complete INFO:tensorflow:local_step=16510 global_step=16510 loss=18.8, 39.0% complete INFO:tensorflow:local_step=16520 global_step=16520 loss=17.7, 39.0% complete INFO:tensorflow:local_step=16530 global_step=16530 loss=18.7, 39.1% complete INFO:tensorflow:local_step=16540 global_step=16540 loss=17.5, 39.1% complete INFO:tensorflow:local_step=16550 global_step=16550 loss=19.6, 39.1% complete INFO:tensorflow:local_step=16560 global_step=16560 loss=17.8, 39.1% complete INFO:tensorflow:local_step=16570 global_step=16570 loss=18.5, 39.2% complete INFO:tensorflow:local_step=16580 global_step=16580 loss=18.7, 39.2% complete INFO:tensorflow:local_step=16590 global_step=16590 loss=19.8, 39.2% complete INFO:tensorflow:local_step=16600 global_step=16600 loss=17.3, 39.2% complete INFO:tensorflow:local_step=16610 global_step=16610 loss=19.8, 39.2% complete INFO:tensorflow:local_step=16620 global_step=16620 loss=20.1, 39.3% complete INFO:tensorflow:local_step=16630 global_step=16630 loss=18.7, 39.3% complete INFO:tensorflow:local_step=16640 global_step=16640 loss=18.2, 39.3% complete INFO:tensorflow:local_step=16650 global_step=16650 loss=18.5, 39.3% complete INFO:tensorflow:local_step=16660 global_step=16660 loss=18.4, 39.4% complete INFO:tensorflow:local_step=16670 global_step=16670 loss=19.3, 39.4% complete INFO:tensorflow:local_step=16680 global_step=16680 loss=17.4, 39.4% complete INFO:tensorflow:local_step=16690 global_step=16690 loss=18.0, 39.4% complete INFO:tensorflow:local_step=16700 global_step=16700 loss=19.0, 39.5% complete INFO:tensorflow:local_step=16710 global_step=16710 loss=16.5, 39.5% complete INFO:tensorflow:local_step=16720 global_step=16720 loss=18.3, 39.5% complete INFO:tensorflow:local_step=16730 global_step=16730 loss=18.8, 39.5% complete 
INFO:tensorflow:local_step=16740 global_step=16740 loss=18.8, 39.6% complete INFO:tensorflow:local_step=16750 global_step=16750 loss=21.4, 39.6% complete INFO:tensorflow:local_step=16760 global_step=16760 loss=18.1, 39.6% complete INFO:tensorflow:local_step=16770 global_step=16770 loss=18.7, 39.6% complete INFO:tensorflow:local_step=16780 global_step=16780 loss=15.7, 39.7% complete INFO:tensorflow:local_step=16790 global_step=16790 loss=17.8, 39.7% complete INFO:tensorflow:local_step=16800 global_step=16800 loss=18.1, 39.7% complete INFO:tensorflow:local_step=16810 global_step=16810 loss=18.5, 39.7% complete INFO:tensorflow:local_step=16820 global_step=16820 loss=19.0, 39.7% complete INFO:tensorflow:local_step=16830 global_step=16830 loss=18.5, 39.8% complete INFO:tensorflow:local_step=16840 global_step=16840 loss=18.6, 39.8% complete INFO:tensorflow:local_step=16850 global_step=16850 loss=17.0, 39.8% complete INFO:tensorflow:local_step=16860 global_step=16860 loss=18.7, 39.8% complete INFO:tensorflow:local_step=16870 global_step=16870 loss=18.1, 39.9% complete INFO:tensorflow:local_step=16880 global_step=16880 loss=19.8, 39.9% complete INFO:tensorflow:local_step=16890 global_step=16890 loss=18.0, 39.9% complete INFO:tensorflow:local_step=16900 global_step=16900 loss=18.6, 39.9% complete INFO:tensorflow:local_step=16910 global_step=16910 loss=19.1, 40.0% complete INFO:tensorflow:local_step=16920 global_step=16920 loss=18.3, 40.0% complete INFO:tensorflow:local_step=16930 global_step=16930 loss=18.0, 40.0% complete INFO:tensorflow:local_step=16940 global_step=16940 loss=17.8, 40.0% complete INFO:tensorflow:local_step=16950 global_step=16950 loss=18.5, 40.1% complete INFO:tensorflow:local_step=16960 global_step=16960 loss=19.1, 40.1% complete INFO:tensorflow:local_step=16970 global_step=16970 loss=17.6, 40.1% complete INFO:tensorflow:local_step=16980 global_step=16980 loss=17.9, 40.1% complete INFO:tensorflow:local_step=16990 global_step=16990 loss=18.5, 40.1% 
complete INFO:tensorflow:local_step=17000 global_step=17000 loss=18.1, 40.2% complete INFO:tensorflow:local_step=17010 global_step=17010 loss=18.7, 40.2% complete INFO:tensorflow:local_step=17020 global_step=17020 loss=18.4, 40.2% complete INFO:tensorflow:local_step=17030 global_step=17030 loss=18.8, 40.2% complete INFO:tensorflow:local_step=17040 global_step=17040 loss=17.3, 40.3% complete INFO:tensorflow:local_step=17050 global_step=17050 loss=18.6, 40.3% complete INFO:tensorflow:local_step=17060 global_step=17060 loss=18.0, 40.3% complete INFO:tensorflow:local_step=17070 global_step=17070 loss=18.1, 40.3% complete INFO:tensorflow:local_step=17080 global_step=17080 loss=15.5, 40.4% complete INFO:tensorflow:local_step=17090 global_step=17090 loss=18.1, 40.4% complete INFO:tensorflow:local_step=17100 global_step=17100 loss=19.1, 40.4% complete INFO:tensorflow:local_step=17110 global_step=17110 loss=19.2, 40.4% complete INFO:tensorflow:local_step=17120 global_step=17120 loss=18.5, 40.5% complete INFO:tensorflow:local_step=17130 global_step=17130 loss=18.4, 40.5% complete INFO:tensorflow:local_step=17140 global_step=17140 loss=18.9, 40.5% complete INFO:tensorflow:local_step=17150 global_step=17150 loss=18.5, 40.5% complete INFO:tensorflow:local_step=17160 global_step=17160 loss=19.2, 40.5% complete INFO:tensorflow:local_step=17170 global_step=17170 loss=18.2, 40.6% complete INFO:tensorflow:local_step=17180 global_step=17180 loss=18.2, 40.6% complete INFO:tensorflow:local_step=17190 global_step=17190 loss=18.4, 40.6% complete INFO:tensorflow:local_step=17200 global_step=17200 loss=19.1, 40.6% complete INFO:tensorflow:local_step=17210 global_step=17210 loss=18.7, 40.7% complete INFO:tensorflow:local_step=17220 global_step=17220 loss=18.2, 40.7% complete INFO:tensorflow:local_step=17230 global_step=17230 loss=17.8, 40.7% complete INFO:tensorflow:local_step=17240 global_step=17240 loss=17.8, 40.7% complete INFO:tensorflow:local_step=17250 global_step=17250 loss=17.9, 
40.8% complete INFO:tensorflow:local_step=17260 global_step=17260 loss=18.1, 40.8% complete INFO:tensorflow:local_step=17270 global_step=17270 loss=18.0, 40.8% complete INFO:tensorflow:local_step=17280 global_step=17280 loss=17.6, 40.8% complete INFO:tensorflow:local_step=17290 global_step=17290 loss=18.3, 40.9% complete INFO:tensorflow:local_step=17300 global_step=17300 loss=19.5, 40.9% complete INFO:tensorflow:local_step=17310 global_step=17310 loss=18.2, 40.9% complete INFO:tensorflow:local_step=17320 global_step=17320 loss=18.5, 40.9% complete INFO:tensorflow:local_step=17330 global_step=17330 loss=18.9, 40.9% complete INFO:tensorflow:local_step=17340 global_step=17340 loss=18.2, 41.0% complete INFO:tensorflow:local_step=17350 global_step=17350 loss=18.4, 41.0% complete INFO:tensorflow:local_step=17360 global_step=17360 loss=18.5, 41.0% complete INFO:tensorflow:local_step=17370 global_step=17370 loss=18.6, 41.0% complete INFO:tensorflow:local_step=17380 global_step=17380 loss=17.8, 41.1% complete INFO:tensorflow:local_step=17390 global_step=17390 loss=18.7, 41.1% complete INFO:tensorflow:local_step=17400 global_step=17400 loss=233.2, 41.1% complete INFO:tensorflow:local_step=17410 global_step=17410 loss=18.1, 41.1% complete INFO:tensorflow:local_step=17420 global_step=17420 loss=19.2, 41.2% complete INFO:tensorflow:local_step=17430 global_step=17430 loss=17.6, 41.2% complete INFO:tensorflow:local_step=17440 global_step=17440 loss=18.1, 41.2% complete INFO:tensorflow:local_step=17450 global_step=17450 loss=17.8, 41.2% complete INFO:tensorflow:local_step=17460 global_step=17460 loss=18.7, 41.3% complete INFO:tensorflow:local_step=17470 global_step=17470 loss=18.8, 41.3% complete INFO:tensorflow:local_step=17480 global_step=17480 loss=18.8, 41.3% complete INFO:tensorflow:local_step=17490 global_step=17490 loss=19.3, 41.3% complete INFO:tensorflow:local_step=17500 global_step=17500 loss=18.1, 41.4% complete INFO:tensorflow:local_step=17510 global_step=17510 
loss=17.6, 41.4% complete INFO:tensorflow:local_step=17520 global_step=17520 loss=18.8, 41.4% complete INFO:tensorflow:local_step=17530 global_step=17530 loss=19.0, 41.4% complete INFO:tensorflow:local_step=17540 global_step=17540 loss=18.5, 41.4% complete INFO:tensorflow:local_step=17550 global_step=17550 loss=18.9, 41.5% complete INFO:tensorflow:local_step=17560 global_step=17560 loss=17.8, 41.5% complete INFO:tensorflow:local_step=17570 global_step=17570 loss=18.5, 41.5% complete INFO:tensorflow:local_step=17580 global_step=17580 loss=17.9, 41.5% complete INFO:tensorflow:local_step=17590 global_step=17590 loss=18.3, 41.6% complete INFO:tensorflow:local_step=17600 global_step=17600 loss=21.5, 41.6% complete INFO:tensorflow:local_step=17610 global_step=17610 loss=18.1, 41.6% complete INFO:tensorflow:local_step=17620 global_step=17620 loss=18.5, 41.6% complete INFO:tensorflow:local_step=17630 global_step=17630 loss=18.1, 41.7% complete INFO:tensorflow:local_step=17640 global_step=17640 loss=18.7, 41.7% complete INFO:tensorflow:local_step=17650 global_step=17650 loss=18.9, 41.7% complete INFO:tensorflow:local_step=17660 global_step=17660 loss=19.1, 41.7% complete INFO:tensorflow:local_step=17670 global_step=17670 loss=18.9, 41.8% complete INFO:tensorflow:local_step=17680 global_step=17680 loss=249.3, 41.8% complete INFO:tensorflow:local_step=17690 global_step=17690 loss=20.1, 41.8% complete INFO:tensorflow:local_step=17700 global_step=17700 loss=18.2, 41.8% complete INFO:tensorflow:local_step=17710 global_step=17710 loss=18.4, 41.8% complete INFO:tensorflow:local_step=17720 global_step=17720 loss=18.9, 41.9% complete INFO:tensorflow:local_step=17730 global_step=17730 loss=18.8, 41.9% complete INFO:tensorflow:local_step=17740 global_step=17740 loss=18.9, 41.9% complete INFO:tensorflow:local_step=17750 global_step=17750 loss=18.2, 41.9% complete INFO:tensorflow:local_step=17760 global_step=17760 loss=18.8, 42.0% complete INFO:tensorflow:local_step=17770 
global_step=17770 loss=18.6, 42.0% complete INFO:tensorflow:local_step=17780 global_step=17780 loss=19.2, 42.0% complete INFO:tensorflow:local_step=17790 global_step=17790 loss=19.0, 42.0% complete INFO:tensorflow:local_step=17800 global_step=17800 loss=18.0, 42.1% complete INFO:tensorflow:local_step=17810 global_step=17810 loss=18.6, 42.1% complete INFO:tensorflow:local_step=17820 global_step=17820 loss=249.7, 42.1% complete INFO:tensorflow:local_step=17830 global_step=17830 loss=277.7, 42.1% complete INFO:tensorflow:local_step=17840 global_step=17840 loss=18.2, 42.2% complete INFO:tensorflow:local_step=17850 global_step=17850 loss=19.6, 42.2% complete INFO:tensorflow:local_step=17860 global_step=17860 loss=15.4, 42.2% complete INFO:tensorflow:local_step=17870 global_step=17870 loss=18.7, 42.2% complete INFO:tensorflow:local_step=17880 global_step=17880 loss=18.3, 42.2% complete INFO:tensorflow:local_step=17890 global_step=17890 loss=18.8, 42.3% complete INFO:tensorflow:local_step=17900 global_step=17900 loss=18.8, 42.3% complete INFO:tensorflow:local_step=17910 global_step=17910 loss=18.7, 42.3% complete INFO:tensorflow:local_step=17920 global_step=17920 loss=19.6, 42.3% complete INFO:tensorflow:local_step=17930 global_step=17930 loss=18.6, 42.4% complete INFO:tensorflow:local_step=17940 global_step=17940 loss=18.6, 42.4% complete INFO:tensorflow:local_step=17950 global_step=17950 loss=18.4, 42.4% complete INFO:tensorflow:local_step=17960 global_step=17960 loss=18.5, 42.4% complete INFO:tensorflow:local_step=17970 global_step=17970 loss=19.0, 42.5% complete INFO:tensorflow:local_step=17980 global_step=17980 loss=18.3, 42.5% complete INFO:tensorflow:local_step=17990 global_step=17990 loss=19.0, 42.5% complete INFO:tensorflow:local_step=18000 global_step=18000 loss=18.4, 42.5% complete INFO:tensorflow:local_step=18010 global_step=18010 loss=18.8, 42.6% complete INFO:tensorflow:local_step=18020 global_step=18020 loss=19.0, 42.6% complete 
INFO:tensorflow:local_step=18030 global_step=18030 loss=17.9, 42.6% complete
INFO:tensorflow:local_step=18040 global_step=18040 loss=17.9, 42.6% complete
[... repetitive training log elided: loss holds near 18 as training progresses from 42.6% to 55.3% complete, with occasional spikes into the 230–343 range (e.g. loss=328.5 at step 18170, loss=342.8 at step 20110); summary recorded at step 22475, global_step/sec: 125.318 ...]
INFO:tensorflow:local_step=23410 global_step=23410 loss=18.7, 55.3% complete
INFO:tensorflow:local_step=23420 global_step=23420 loss=18.1, 55.3% complete
INFO:tensorflow:local_step=23430 global_step=23430 loss=18.5,
55.4% complete INFO:tensorflow:local_step=23440 global_step=23440 loss=18.6, 55.4% complete INFO:tensorflow:local_step=23450 global_step=23450 loss=17.4, 55.4% complete INFO:tensorflow:local_step=23460 global_step=23460 loss=17.9, 55.4% complete INFO:tensorflow:local_step=23470 global_step=23470 loss=17.8, 55.5% complete INFO:tensorflow:local_step=23480 global_step=23480 loss=17.1, 55.5% complete INFO:tensorflow:local_step=23490 global_step=23490 loss=18.6, 55.5% complete INFO:tensorflow:local_step=23500 global_step=23500 loss=18.1, 55.5% complete INFO:tensorflow:local_step=23510 global_step=23510 loss=18.5, 55.6% complete INFO:tensorflow:local_step=23520 global_step=23520 loss=18.2, 55.6% complete INFO:tensorflow:local_step=23530 global_step=23530 loss=17.9, 55.6% complete INFO:tensorflow:local_step=23540 global_step=23540 loss=19.2, 55.6% complete INFO:tensorflow:local_step=23550 global_step=23550 loss=18.1, 55.6% complete INFO:tensorflow:local_step=23560 global_step=23560 loss=19.1, 55.7% complete INFO:tensorflow:local_step=23570 global_step=23570 loss=18.4, 55.7% complete INFO:tensorflow:local_step=23580 global_step=23580 loss=18.4, 55.7% complete INFO:tensorflow:local_step=23590 global_step=23590 loss=18.2, 55.7% complete INFO:tensorflow:local_step=23600 global_step=23600 loss=17.9, 55.8% complete INFO:tensorflow:local_step=23610 global_step=23610 loss=19.2, 55.8% complete INFO:tensorflow:local_step=23620 global_step=23620 loss=19.2, 55.8% complete INFO:tensorflow:local_step=23630 global_step=23630 loss=19.0, 55.8% complete INFO:tensorflow:local_step=23640 global_step=23640 loss=17.7, 55.9% complete INFO:tensorflow:local_step=23650 global_step=23650 loss=18.6, 55.9% complete INFO:tensorflow:local_step=23660 global_step=23660 loss=18.2, 55.9% complete INFO:tensorflow:local_step=23670 global_step=23670 loss=15.1, 55.9% complete INFO:tensorflow:local_step=23680 global_step=23680 loss=18.0, 56.0% complete INFO:tensorflow:local_step=23690 global_step=23690 
loss=18.9, 56.0% complete INFO:tensorflow:local_step=23700 global_step=23700 loss=17.9, 56.0% complete INFO:tensorflow:local_step=23710 global_step=23710 loss=17.6, 56.0% complete INFO:tensorflow:local_step=23720 global_step=23720 loss=17.4, 56.0% complete INFO:tensorflow:local_step=23730 global_step=23730 loss=18.2, 56.1% complete INFO:tensorflow:local_step=23740 global_step=23740 loss=18.7, 56.1% complete INFO:tensorflow:local_step=23750 global_step=23750 loss=18.9, 56.1% complete INFO:tensorflow:local_step=23760 global_step=23760 loss=18.8, 56.1% complete INFO:tensorflow:local_step=23770 global_step=23770 loss=18.1, 56.2% complete INFO:tensorflow:local_step=23780 global_step=23780 loss=17.6, 56.2% complete INFO:tensorflow:local_step=23790 global_step=23790 loss=17.7, 56.2% complete INFO:tensorflow:local_step=23800 global_step=23800 loss=19.1, 56.2% complete INFO:tensorflow:local_step=23810 global_step=23810 loss=18.8, 56.3% complete INFO:tensorflow:local_step=23820 global_step=23820 loss=19.0, 56.3% complete INFO:tensorflow:local_step=23830 global_step=23830 loss=17.9, 56.3% complete INFO:tensorflow:local_step=23840 global_step=23840 loss=17.9, 56.3% complete INFO:tensorflow:local_step=23850 global_step=23850 loss=18.1, 56.4% complete INFO:tensorflow:local_step=23860 global_step=23860 loss=17.5, 56.4% complete INFO:tensorflow:local_step=23870 global_step=23870 loss=18.5, 56.4% complete INFO:tensorflow:local_step=23880 global_step=23880 loss=19.0, 56.4% complete INFO:tensorflow:local_step=23890 global_step=23890 loss=15.1, 56.5% complete INFO:tensorflow:local_step=23900 global_step=23900 loss=17.4, 56.5% complete INFO:tensorflow:local_step=23910 global_step=23910 loss=18.1, 56.5% complete INFO:tensorflow:local_step=23920 global_step=23920 loss=18.3, 56.5% complete INFO:tensorflow:local_step=23930 global_step=23930 loss=20.8, 56.5% complete INFO:tensorflow:local_step=23940 global_step=23940 loss=18.0, 56.6% complete INFO:tensorflow:local_step=23950 
global_step=23950 loss=20.5, 56.6% complete INFO:tensorflow:local_step=23960 global_step=23960 loss=18.4, 56.6% complete INFO:tensorflow:local_step=23970 global_step=23970 loss=18.7, 56.6% complete INFO:tensorflow:local_step=23980 global_step=23980 loss=18.8, 56.7% complete INFO:tensorflow:local_step=23990 global_step=23990 loss=18.7, 56.7% complete INFO:tensorflow:local_step=24000 global_step=24000 loss=18.6, 56.7% complete INFO:tensorflow:local_step=24010 global_step=24010 loss=18.4, 56.7% complete INFO:tensorflow:local_step=24020 global_step=24020 loss=19.2, 56.8% complete INFO:tensorflow:local_step=24030 global_step=24030 loss=18.1, 56.8% complete INFO:tensorflow:local_step=24040 global_step=24040 loss=18.5, 56.8% complete INFO:tensorflow:local_step=24050 global_step=24050 loss=18.0, 56.8% complete INFO:tensorflow:local_step=24060 global_step=24060 loss=18.1, 56.9% complete INFO:tensorflow:local_step=24070 global_step=24070 loss=16.9, 56.9% complete INFO:tensorflow:local_step=24080 global_step=24080 loss=19.0, 56.9% complete INFO:tensorflow:local_step=24090 global_step=24090 loss=18.8, 56.9% complete INFO:tensorflow:local_step=24100 global_step=24100 loss=17.6, 56.9% complete INFO:tensorflow:local_step=24110 global_step=24110 loss=18.2, 57.0% complete INFO:tensorflow:local_step=24120 global_step=24120 loss=17.4, 57.0% complete INFO:tensorflow:local_step=24130 global_step=24130 loss=18.1, 57.0% complete INFO:tensorflow:local_step=24140 global_step=24140 loss=17.2, 57.0% complete INFO:tensorflow:local_step=24150 global_step=24150 loss=18.2, 57.1% complete INFO:tensorflow:local_step=24160 global_step=24160 loss=17.5, 57.1% complete INFO:tensorflow:local_step=24170 global_step=24170 loss=18.8, 57.1% complete INFO:tensorflow:local_step=24180 global_step=24180 loss=18.3, 57.1% complete INFO:tensorflow:local_step=24190 global_step=24190 loss=17.7, 57.2% complete INFO:tensorflow:local_step=24200 global_step=24200 loss=19.3, 57.2% complete 
INFO:tensorflow:local_step=24210 global_step=24210 loss=18.4, 57.2% complete INFO:tensorflow:local_step=24220 global_step=24220 loss=20.1, 57.2% complete INFO:tensorflow:local_step=24230 global_step=24230 loss=17.9, 57.3% complete INFO:tensorflow:local_step=24240 global_step=24240 loss=17.4, 57.3% complete INFO:tensorflow:local_step=24250 global_step=24250 loss=18.2, 57.3% complete INFO:tensorflow:local_step=24260 global_step=24260 loss=18.8, 57.3% complete INFO:tensorflow:local_step=24270 global_step=24270 loss=17.9, 57.3% complete INFO:tensorflow:local_step=24280 global_step=24280 loss=18.9, 57.4% complete INFO:tensorflow:local_step=24290 global_step=24290 loss=295.1, 57.4% complete INFO:tensorflow:local_step=24300 global_step=24300 loss=19.2, 57.4% complete INFO:tensorflow:local_step=24310 global_step=24310 loss=17.7, 57.4% complete INFO:tensorflow:local_step=24320 global_step=24320 loss=18.1, 57.5% complete INFO:tensorflow:local_step=24330 global_step=24330 loss=18.5, 57.5% complete INFO:tensorflow:local_step=24340 global_step=24340 loss=18.6, 57.5% complete INFO:tensorflow:local_step=24350 global_step=24350 loss=15.5, 57.5% complete INFO:tensorflow:local_step=24360 global_step=24360 loss=19.3, 57.6% complete INFO:tensorflow:local_step=24370 global_step=24370 loss=15.5, 57.6% complete INFO:tensorflow:local_step=24380 global_step=24380 loss=17.8, 57.6% complete INFO:tensorflow:local_step=24390 global_step=24390 loss=18.3, 57.6% complete INFO:tensorflow:local_step=24400 global_step=24400 loss=17.5, 57.7% complete INFO:tensorflow:local_step=24410 global_step=24410 loss=17.4, 57.7% complete INFO:tensorflow:local_step=24420 global_step=24420 loss=18.4, 57.7% complete INFO:tensorflow:local_step=24430 global_step=24430 loss=18.8, 57.7% complete INFO:tensorflow:local_step=24440 global_step=24440 loss=18.7, 57.8% complete INFO:tensorflow:local_step=24450 global_step=24450 loss=18.1, 57.8% complete INFO:tensorflow:local_step=24460 global_step=24460 loss=18.9, 57.8% 
complete INFO:tensorflow:local_step=24470 global_step=24470 loss=18.5, 57.8% complete INFO:tensorflow:local_step=24480 global_step=24480 loss=18.0, 57.8% complete INFO:tensorflow:local_step=24490 global_step=24490 loss=19.1, 57.9% complete INFO:tensorflow:local_step=24500 global_step=24500 loss=17.7, 57.9% complete INFO:tensorflow:local_step=24510 global_step=24510 loss=18.5, 57.9% complete INFO:tensorflow:local_step=24520 global_step=24520 loss=18.3, 57.9% complete INFO:tensorflow:local_step=24530 global_step=24530 loss=19.1, 58.0% complete INFO:tensorflow:local_step=24540 global_step=24540 loss=19.1, 58.0% complete INFO:tensorflow:local_step=24550 global_step=24550 loss=18.5, 58.0% complete INFO:tensorflow:local_step=24560 global_step=24560 loss=19.1, 58.0% complete INFO:tensorflow:local_step=24570 global_step=24570 loss=18.0, 58.1% complete INFO:tensorflow:local_step=24580 global_step=24580 loss=19.0, 58.1% complete INFO:tensorflow:local_step=24590 global_step=24590 loss=18.9, 58.1% complete INFO:tensorflow:local_step=24600 global_step=24600 loss=18.6, 58.1% complete INFO:tensorflow:local_step=24610 global_step=24610 loss=18.0, 58.2% complete INFO:tensorflow:local_step=24620 global_step=24620 loss=18.4, 58.2% complete INFO:tensorflow:local_step=24630 global_step=24630 loss=18.1, 58.2% complete INFO:tensorflow:local_step=24640 global_step=24640 loss=17.8, 58.2% complete INFO:tensorflow:local_step=24650 global_step=24650 loss=18.1, 58.2% complete INFO:tensorflow:local_step=24660 global_step=24660 loss=17.6, 58.3% complete INFO:tensorflow:local_step=24670 global_step=24670 loss=19.2, 58.3% complete INFO:tensorflow:local_step=24680 global_step=24680 loss=19.2, 58.3% complete INFO:tensorflow:local_step=24690 global_step=24690 loss=19.0, 58.3% complete INFO:tensorflow:local_step=24700 global_step=24700 loss=18.0, 58.4% complete INFO:tensorflow:local_step=24710 global_step=24710 loss=17.9, 58.4% complete INFO:tensorflow:local_step=24720 global_step=24720 loss=17.7, 
58.4% complete INFO:tensorflow:local_step=24730 global_step=24730 loss=18.7, 58.4% complete INFO:tensorflow:local_step=24740 global_step=24740 loss=18.8, 58.5% complete INFO:tensorflow:local_step=24750 global_step=24750 loss=288.3, 58.5% complete INFO:tensorflow:local_step=24760 global_step=24760 loss=19.0, 58.5% complete INFO:tensorflow:local_step=24770 global_step=24770 loss=17.9, 58.5% complete INFO:tensorflow:local_step=24780 global_step=24780 loss=19.2, 58.6% complete INFO:tensorflow:local_step=24790 global_step=24790 loss=18.0, 58.6% complete INFO:tensorflow:local_step=24800 global_step=24800 loss=18.1, 58.6% complete INFO:tensorflow:local_step=24810 global_step=24810 loss=17.8, 58.6% complete INFO:tensorflow:local_step=24820 global_step=24820 loss=18.6, 58.6% complete INFO:tensorflow:local_step=24830 global_step=24830 loss=18.7, 58.7% complete INFO:tensorflow:local_step=24840 global_step=24840 loss=17.8, 58.7% complete INFO:tensorflow:local_step=24850 global_step=24850 loss=18.3, 58.7% complete INFO:tensorflow:local_step=24860 global_step=24860 loss=18.6, 58.7% complete INFO:tensorflow:local_step=24870 global_step=24870 loss=19.2, 58.8% complete INFO:tensorflow:local_step=24880 global_step=24880 loss=18.2, 58.8% complete INFO:tensorflow:local_step=24890 global_step=24890 loss=17.5, 58.8% complete INFO:tensorflow:local_step=24900 global_step=24900 loss=18.1, 58.8% complete INFO:tensorflow:local_step=24910 global_step=24910 loss=18.1, 58.9% complete INFO:tensorflow:local_step=24920 global_step=24920 loss=18.8, 58.9% complete INFO:tensorflow:local_step=24930 global_step=24930 loss=17.6, 58.9% complete INFO:tensorflow:local_step=24940 global_step=24940 loss=17.5, 58.9% complete INFO:tensorflow:local_step=24950 global_step=24950 loss=18.5, 59.0% complete INFO:tensorflow:local_step=24960 global_step=24960 loss=18.9, 59.0% complete INFO:tensorflow:local_step=24970 global_step=24970 loss=19.4, 59.0% complete INFO:tensorflow:local_step=24980 global_step=24980 
loss=18.7, 59.0% complete INFO:tensorflow:local_step=24990 global_step=24990 loss=17.7, 59.1% complete INFO:tensorflow:local_step=25000 global_step=25000 loss=18.7, 59.1% complete INFO:tensorflow:local_step=25010 global_step=25010 loss=18.4, 59.1% complete INFO:tensorflow:local_step=25020 global_step=25020 loss=17.8, 59.1% complete INFO:tensorflow:local_step=25030 global_step=25030 loss=16.0, 59.1% complete INFO:tensorflow:local_step=25040 global_step=25040 loss=18.6, 59.2% complete INFO:tensorflow:local_step=25050 global_step=25050 loss=17.9, 59.2% complete INFO:tensorflow:local_step=25060 global_step=25060 loss=18.1, 59.2% complete INFO:tensorflow:local_step=25070 global_step=25070 loss=17.5, 59.2% complete INFO:tensorflow:local_step=25080 global_step=25080 loss=18.7, 59.3% complete INFO:tensorflow:local_step=25090 global_step=25090 loss=18.3, 59.3% complete INFO:tensorflow:local_step=25100 global_step=25100 loss=18.1, 59.3% complete INFO:tensorflow:local_step=25110 global_step=25110 loss=18.4, 59.3% complete INFO:tensorflow:local_step=25120 global_step=25120 loss=18.5, 59.4% complete INFO:tensorflow:local_step=25130 global_step=25130 loss=18.1, 59.4% complete INFO:tensorflow:local_step=25140 global_step=25140 loss=18.1, 59.4% complete INFO:tensorflow:local_step=25150 global_step=25150 loss=297.4, 59.4% complete INFO:tensorflow:local_step=25160 global_step=25160 loss=18.4, 59.5% complete INFO:tensorflow:local_step=25170 global_step=25170 loss=18.1, 59.5% complete INFO:tensorflow:local_step=25180 global_step=25180 loss=18.4, 59.5% complete INFO:tensorflow:local_step=25190 global_step=25190 loss=19.0, 59.5% complete INFO:tensorflow:local_step=25200 global_step=25200 loss=17.4, 59.5% complete INFO:tensorflow:local_step=25210 global_step=25210 loss=17.5, 59.6% complete INFO:tensorflow:local_step=25220 global_step=25220 loss=20.0, 59.6% complete INFO:tensorflow:local_step=25230 global_step=25230 loss=17.8, 59.6% complete INFO:tensorflow:local_step=25240 
global_step=25240 loss=19.0, 59.6% complete INFO:tensorflow:local_step=25250 global_step=25250 loss=18.5, 59.7% complete INFO:tensorflow:local_step=25260 global_step=25260 loss=17.8, 59.7% complete INFO:tensorflow:local_step=25270 global_step=25270 loss=18.5, 59.7% complete INFO:tensorflow:local_step=25280 global_step=25280 loss=18.6, 59.7% complete INFO:tensorflow:local_step=25290 global_step=25290 loss=18.8, 59.8% complete INFO:tensorflow:local_step=25300 global_step=25300 loss=18.5, 59.8% complete INFO:tensorflow:local_step=25310 global_step=25310 loss=19.0, 59.8% complete INFO:tensorflow:local_step=25320 global_step=25320 loss=18.1, 59.8% complete INFO:tensorflow:local_step=25330 global_step=25330 loss=17.4, 59.9% complete INFO:tensorflow:local_step=25340 global_step=25340 loss=17.7, 59.9% complete INFO:tensorflow:local_step=25350 global_step=25350 loss=18.8, 59.9% complete INFO:tensorflow:local_step=25360 global_step=25360 loss=15.4, 59.9% complete INFO:tensorflow:local_step=25370 global_step=25370 loss=18.8, 59.9% complete INFO:tensorflow:local_step=25380 global_step=25380 loss=19.1, 60.0% complete INFO:tensorflow:local_step=25390 global_step=25390 loss=18.3, 60.0% complete INFO:tensorflow:local_step=25400 global_step=25400 loss=17.4, 60.0% complete INFO:tensorflow:local_step=25410 global_step=25410 loss=17.3, 60.0% complete INFO:tensorflow:local_step=25420 global_step=25420 loss=17.6, 60.1% complete INFO:tensorflow:local_step=25430 global_step=25430 loss=18.3, 60.1% complete INFO:tensorflow:local_step=25440 global_step=25440 loss=18.1, 60.1% complete INFO:tensorflow:local_step=25450 global_step=25450 loss=18.4, 60.1% complete INFO:tensorflow:local_step=25460 global_step=25460 loss=165.0, 60.2% complete INFO:tensorflow:local_step=25470 global_step=25470 loss=18.0, 60.2% complete INFO:tensorflow:local_step=25480 global_step=25480 loss=18.6, 60.2% complete INFO:tensorflow:local_step=25490 global_step=25490 loss=18.2, 60.2% complete 
INFO:tensorflow:local_step=25500 global_step=25500 loss=18.0, 60.3% complete INFO:tensorflow:local_step=25510 global_step=25510 loss=18.6, 60.3% complete INFO:tensorflow:local_step=25520 global_step=25520 loss=18.0, 60.3% complete INFO:tensorflow:local_step=25530 global_step=25530 loss=18.1, 60.3% complete INFO:tensorflow:local_step=25540 global_step=25540 loss=18.4, 60.3% complete INFO:tensorflow:local_step=25550 global_step=25550 loss=18.1, 60.4% complete INFO:tensorflow:local_step=25560 global_step=25560 loss=18.8, 60.4% complete INFO:tensorflow:local_step=25570 global_step=25570 loss=18.2, 60.4% complete INFO:tensorflow:local_step=25580 global_step=25580 loss=18.2, 60.4% complete INFO:tensorflow:local_step=25590 global_step=25590 loss=17.8, 60.5% complete INFO:tensorflow:local_step=25600 global_step=25600 loss=17.8, 60.5% complete INFO:tensorflow:local_step=25610 global_step=25610 loss=18.5, 60.5% complete INFO:tensorflow:local_step=25620 global_step=25620 loss=18.2, 60.5% complete INFO:tensorflow:local_step=25630 global_step=25630 loss=17.2, 60.6% complete INFO:tensorflow:local_step=25640 global_step=25640 loss=17.7, 60.6% complete INFO:tensorflow:local_step=25650 global_step=25650 loss=18.3, 60.6% complete INFO:tensorflow:local_step=25660 global_step=25660 loss=18.2, 60.6% complete INFO:tensorflow:local_step=25670 global_step=25670 loss=18.2, 60.7% complete INFO:tensorflow:local_step=25680 global_step=25680 loss=18.5, 60.7% complete INFO:tensorflow:local_step=25690 global_step=25690 loss=18.4, 60.7% complete INFO:tensorflow:local_step=25700 global_step=25700 loss=17.7, 60.7% complete INFO:tensorflow:local_step=25710 global_step=25710 loss=17.9, 60.8% complete INFO:tensorflow:local_step=25720 global_step=25720 loss=18.5, 60.8% complete INFO:tensorflow:local_step=25730 global_step=25730 loss=17.1, 60.8% complete INFO:tensorflow:local_step=25740 global_step=25740 loss=17.5, 60.8% complete INFO:tensorflow:local_step=25750 global_step=25750 loss=18.0, 60.8% 
complete INFO:tensorflow:local_step=25760 global_step=25760 loss=17.9, 60.9% complete INFO:tensorflow:local_step=25770 global_step=25770 loss=17.9, 60.9% complete INFO:tensorflow:local_step=25780 global_step=25780 loss=18.7, 60.9% complete INFO:tensorflow:local_step=25790 global_step=25790 loss=18.2, 60.9% complete INFO:tensorflow:local_step=25800 global_step=25800 loss=18.2, 61.0% complete INFO:tensorflow:local_step=25810 global_step=25810 loss=18.0, 61.0% complete INFO:tensorflow:local_step=25820 global_step=25820 loss=18.5, 61.0% complete INFO:tensorflow:local_step=25830 global_step=25830 loss=18.0, 61.0% complete INFO:tensorflow:local_step=25840 global_step=25840 loss=18.0, 61.1% complete INFO:tensorflow:local_step=25850 global_step=25850 loss=18.2, 61.1% complete INFO:tensorflow:local_step=25860 global_step=25860 loss=18.0, 61.1% complete INFO:tensorflow:local_step=25870 global_step=25870 loss=18.6, 61.1% complete INFO:tensorflow:local_step=25880 global_step=25880 loss=18.5, 61.2% complete INFO:tensorflow:local_step=25890 global_step=25890 loss=18.4, 61.2% complete INFO:tensorflow:local_step=25900 global_step=25900 loss=18.8, 61.2% complete INFO:tensorflow:local_step=25910 global_step=25910 loss=18.2, 61.2% complete INFO:tensorflow:local_step=25920 global_step=25920 loss=18.4, 61.2% complete INFO:tensorflow:local_step=25930 global_step=25930 loss=18.6, 61.3% complete INFO:tensorflow:local_step=25940 global_step=25940 loss=18.1, 61.3% complete INFO:tensorflow:local_step=25950 global_step=25950 loss=18.5, 61.3% complete INFO:tensorflow:local_step=25960 global_step=25960 loss=17.9, 61.3% complete INFO:tensorflow:local_step=25970 global_step=25970 loss=18.1, 61.4% complete INFO:tensorflow:local_step=25980 global_step=25980 loss=18.3, 61.4% complete INFO:tensorflow:local_step=25990 global_step=25990 loss=18.8, 61.4% complete INFO:tensorflow:local_step=26000 global_step=26000 loss=18.2, 61.4% complete INFO:tensorflow:local_step=26010 global_step=26010 loss=18.0, 
61.5% complete INFO:tensorflow:local_step=26020 global_step=26020 loss=17.8, 61.5% complete INFO:tensorflow:local_step=26030 global_step=26030 loss=18.3, 61.5% complete INFO:tensorflow:local_step=26040 global_step=26040 loss=18.0, 61.5% complete INFO:tensorflow:local_step=26050 global_step=26050 loss=17.9, 61.6% complete INFO:tensorflow:local_step=26060 global_step=26060 loss=19.1, 61.6% complete INFO:tensorflow:local_step=26070 global_step=26070 loss=18.1, 61.6% complete INFO:tensorflow:local_step=26080 global_step=26080 loss=17.7, 61.6% complete INFO:tensorflow:local_step=26090 global_step=26090 loss=18.6, 61.6% complete INFO:tensorflow:local_step=26100 global_step=26100 loss=18.2, 61.7% complete INFO:tensorflow:local_step=26110 global_step=26110 loss=18.3, 61.7% complete INFO:tensorflow:local_step=26120 global_step=26120 loss=18.3, 61.7% complete INFO:tensorflow:local_step=26130 global_step=26130 loss=19.0, 61.7% complete INFO:tensorflow:local_step=26140 global_step=26140 loss=18.6, 61.8% complete INFO:tensorflow:local_step=26150 global_step=26150 loss=18.7, 61.8% complete INFO:tensorflow:local_step=26160 global_step=26160 loss=18.4, 61.8% complete INFO:tensorflow:local_step=26170 global_step=26170 loss=17.6, 61.8% complete INFO:tensorflow:local_step=26180 global_step=26180 loss=19.2, 61.9% complete INFO:tensorflow:local_step=26190 global_step=26190 loss=17.7, 61.9% complete INFO:tensorflow:local_step=26200 global_step=26200 loss=18.8, 61.9% complete INFO:tensorflow:local_step=26210 global_step=26210 loss=17.7, 61.9% complete INFO:tensorflow:local_step=26220 global_step=26220 loss=18.2, 62.0% complete INFO:tensorflow:local_step=26230 global_step=26230 loss=18.3, 62.0% complete INFO:tensorflow:local_step=26240 global_step=26240 loss=18.9, 62.0% complete INFO:tensorflow:local_step=26250 global_step=26250 loss=17.9, 62.0% complete INFO:tensorflow:local_step=26260 global_step=26260 loss=18.7, 62.1% complete INFO:tensorflow:local_step=26270 global_step=26270 
loss=18.1, 62.1% complete INFO:tensorflow:local_step=26280 global_step=26280 loss=18.3, 62.1% complete INFO:tensorflow:local_step=26290 global_step=26290 loss=18.6, 62.1% complete INFO:tensorflow:local_step=26300 global_step=26300 loss=17.8, 62.1% complete INFO:tensorflow:local_step=26310 global_step=26310 loss=18.0, 62.2% complete INFO:tensorflow:local_step=26320 global_step=26320 loss=16.4, 62.2% complete INFO:tensorflow:local_step=26330 global_step=26330 loss=16.6, 62.2% complete INFO:tensorflow:local_step=26340 global_step=26340 loss=16.9, 62.2% complete INFO:tensorflow:local_step=26350 global_step=26350 loss=19.3, 62.3% complete INFO:tensorflow:local_step=26360 global_step=26360 loss=18.3, 62.3% complete INFO:tensorflow:local_step=26370 global_step=26370 loss=19.5, 62.3% complete INFO:tensorflow:local_step=26380 global_step=26380 loss=19.0, 62.3% complete INFO:tensorflow:local_step=26390 global_step=26390 loss=18.1, 62.4% complete INFO:tensorflow:local_step=26400 global_step=26400 loss=18.2, 62.4% complete INFO:tensorflow:local_step=26410 global_step=26410 loss=18.1, 62.4% complete INFO:tensorflow:local_step=26420 global_step=26420 loss=18.7, 62.4% complete INFO:tensorflow:local_step=26430 global_step=26430 loss=18.7, 62.5% complete INFO:tensorflow:local_step=26440 global_step=26440 loss=17.5, 62.5% complete INFO:tensorflow:local_step=26450 global_step=26450 loss=18.0, 62.5% complete INFO:tensorflow:local_step=26460 global_step=26460 loss=18.6, 62.5% complete INFO:tensorflow:local_step=26470 global_step=26470 loss=18.0, 62.5% complete INFO:tensorflow:local_step=26480 global_step=26480 loss=18.1, 62.6% complete INFO:tensorflow:local_step=26490 global_step=26490 loss=18.1, 62.6% complete INFO:tensorflow:local_step=26500 global_step=26500 loss=17.4, 62.6% complete INFO:tensorflow:local_step=26510 global_step=26510 loss=18.9, 62.6% complete INFO:tensorflow:local_step=26520 global_step=26520 loss=18.4, 62.7% complete INFO:tensorflow:local_step=26530 
global_step=26530 loss=18.0, 62.7% complete INFO:tensorflow:local_step=26540 global_step=26540 loss=18.1, 62.7% complete INFO:tensorflow:local_step=26550 global_step=26550 loss=18.9, 62.7% complete INFO:tensorflow:local_step=26560 global_step=26560 loss=19.0, 62.8% complete INFO:tensorflow:local_step=26570 global_step=26570 loss=18.1, 62.8% complete INFO:tensorflow:local_step=26580 global_step=26580 loss=18.3, 62.8% complete INFO:tensorflow:local_step=26590 global_step=26590 loss=18.8, 62.8% complete INFO:tensorflow:local_step=26600 global_step=26600 loss=17.9, 62.9% complete INFO:tensorflow:local_step=26610 global_step=26610 loss=18.4, 62.9% complete INFO:tensorflow:local_step=26620 global_step=26620 loss=15.6, 62.9% complete INFO:tensorflow:local_step=26630 global_step=26630 loss=18.6, 62.9% complete INFO:tensorflow:local_step=26640 global_step=26640 loss=18.0, 62.9% complete INFO:tensorflow:local_step=26650 global_step=26650 loss=18.3, 63.0% complete INFO:tensorflow:local_step=26660 global_step=26660 loss=18.7, 63.0% complete INFO:tensorflow:local_step=26670 global_step=26670 loss=19.7, 63.0% complete INFO:tensorflow:local_step=26680 global_step=26680 loss=19.2, 63.0% complete INFO:tensorflow:local_step=26690 global_step=26690 loss=17.2, 63.1% complete INFO:tensorflow:local_step=26700 global_step=26700 loss=19.2, 63.1% complete INFO:tensorflow:local_step=26710 global_step=26710 loss=18.6, 63.1% complete INFO:tensorflow:local_step=26720 global_step=26720 loss=19.3, 63.1% complete INFO:tensorflow:local_step=26730 global_step=26730 loss=17.9, 63.2% complete INFO:tensorflow:local_step=26740 global_step=26740 loss=18.4, 63.2% complete INFO:tensorflow:local_step=26750 global_step=26750 loss=18.4, 63.2% complete INFO:tensorflow:local_step=26760 global_step=26760 loss=18.7, 63.2% complete INFO:tensorflow:local_step=26770 global_step=26770 loss=18.0, 63.3% complete INFO:tensorflow:local_step=26780 global_step=26780 loss=18.2, 63.3% complete 
INFO:tensorflow:local_step=26790 global_step=26790 loss=19.1, 63.3% complete
INFO:tensorflow:local_step=26800 global_step=26800 loss=17.7, 63.3% complete
INFO:tensorflow:local_step=26810 global_step=26810 loss=18.4, 63.4% complete
...
INFO:tensorflow:Recording summary at step 30060.
INFO:tensorflow:global_step/sec: 126.417
...
INFO:tensorflow:local_step=32100 global_step=32100 loss=18.4, 75.9% complete
INFO:tensorflow:local_step=32110 global_step=32110 loss=17.7, 75.9% complete
INFO:tensorflow:local_step=32120 global_step=32120 loss=19.0, 75.9%
complete INFO:tensorflow:local_step=32130 global_step=32130 loss=16.9, 75.9% complete INFO:tensorflow:local_step=32140 global_step=32140 loss=18.4, 75.9% complete INFO:tensorflow:local_step=32150 global_step=32150 loss=17.8, 76.0% complete INFO:tensorflow:local_step=32160 global_step=32160 loss=18.4, 76.0% complete INFO:tensorflow:local_step=32170 global_step=32170 loss=18.8, 76.0% complete INFO:tensorflow:local_step=32180 global_step=32180 loss=18.5, 76.0% complete INFO:tensorflow:local_step=32190 global_step=32190 loss=18.2, 76.1% complete INFO:tensorflow:local_step=32200 global_step=32200 loss=17.7, 76.1% complete INFO:tensorflow:local_step=32210 global_step=32210 loss=17.9, 76.1% complete INFO:tensorflow:local_step=32220 global_step=32220 loss=19.1, 76.1% complete INFO:tensorflow:local_step=32230 global_step=32230 loss=18.4, 76.2% complete INFO:tensorflow:local_step=32240 global_step=32240 loss=17.5, 76.2% complete INFO:tensorflow:local_step=32250 global_step=32250 loss=18.1, 76.2% complete INFO:tensorflow:local_step=32260 global_step=32260 loss=17.8, 76.2% complete INFO:tensorflow:local_step=32270 global_step=32270 loss=17.6, 76.3% complete INFO:tensorflow:local_step=32280 global_step=32280 loss=19.1, 76.3% complete INFO:tensorflow:local_step=32290 global_step=32290 loss=17.9, 76.3% complete INFO:tensorflow:local_step=32300 global_step=32300 loss=17.5, 76.3% complete INFO:tensorflow:local_step=32310 global_step=32310 loss=18.3, 76.3% complete INFO:tensorflow:local_step=32320 global_step=32320 loss=18.4, 76.4% complete INFO:tensorflow:local_step=32330 global_step=32330 loss=15.4, 76.4% complete INFO:tensorflow:local_step=32340 global_step=32340 loss=18.7, 76.4% complete INFO:tensorflow:local_step=32350 global_step=32350 loss=17.6, 76.4% complete INFO:tensorflow:local_step=32360 global_step=32360 loss=18.7, 76.5% complete INFO:tensorflow:local_step=32370 global_step=32370 loss=19.2, 76.5% complete INFO:tensorflow:local_step=32380 global_step=32380 loss=18.1, 
76.5% complete INFO:tensorflow:local_step=32390 global_step=32390 loss=18.7, 76.5% complete INFO:tensorflow:local_step=32400 global_step=32400 loss=18.3, 76.6% complete INFO:tensorflow:local_step=32410 global_step=32410 loss=18.0, 76.6% complete INFO:tensorflow:local_step=32420 global_step=32420 loss=19.5, 76.6% complete INFO:tensorflow:local_step=32430 global_step=32430 loss=18.3, 76.6% complete INFO:tensorflow:local_step=32440 global_step=32440 loss=18.9, 76.7% complete INFO:tensorflow:local_step=32450 global_step=32450 loss=17.7, 76.7% complete INFO:tensorflow:local_step=32460 global_step=32460 loss=18.8, 76.7% complete INFO:tensorflow:local_step=32470 global_step=32470 loss=18.5, 76.7% complete INFO:tensorflow:local_step=32480 global_step=32480 loss=18.3, 76.7% complete INFO:tensorflow:local_step=32490 global_step=32490 loss=17.6, 76.8% complete INFO:tensorflow:local_step=32500 global_step=32500 loss=15.9, 76.8% complete INFO:tensorflow:local_step=32510 global_step=32510 loss=17.6, 76.8% complete INFO:tensorflow:local_step=32520 global_step=32520 loss=18.0, 76.8% complete INFO:tensorflow:local_step=32530 global_step=32530 loss=19.3, 76.9% complete INFO:tensorflow:local_step=32540 global_step=32540 loss=18.6, 76.9% complete INFO:tensorflow:local_step=32550 global_step=32550 loss=19.3, 76.9% complete INFO:tensorflow:local_step=32560 global_step=32560 loss=18.2, 76.9% complete INFO:tensorflow:local_step=32570 global_step=32570 loss=15.0, 77.0% complete INFO:tensorflow:local_step=32580 global_step=32580 loss=18.7, 77.0% complete INFO:tensorflow:local_step=32590 global_step=32590 loss=18.8, 77.0% complete INFO:tensorflow:local_step=32600 global_step=32600 loss=18.8, 77.0% complete INFO:tensorflow:local_step=32610 global_step=32610 loss=20.9, 77.1% complete INFO:tensorflow:local_step=32620 global_step=32620 loss=18.3, 77.1% complete INFO:tensorflow:local_step=32630 global_step=32630 loss=19.0, 77.1% complete INFO:tensorflow:local_step=32640 global_step=32640 
loss=18.1, 77.1% complete INFO:tensorflow:local_step=32650 global_step=32650 loss=15.0, 77.2% complete INFO:tensorflow:local_step=32660 global_step=32660 loss=18.6, 77.2% complete INFO:tensorflow:local_step=32670 global_step=32670 loss=18.9, 77.2% complete INFO:tensorflow:local_step=32680 global_step=32680 loss=18.7, 77.2% complete INFO:tensorflow:local_step=32690 global_step=32690 loss=18.9, 77.2% complete INFO:tensorflow:local_step=32700 global_step=32700 loss=17.4, 77.3% complete INFO:tensorflow:local_step=32710 global_step=32710 loss=18.2, 77.3% complete INFO:tensorflow:local_step=32720 global_step=32720 loss=18.9, 77.3% complete INFO:tensorflow:local_step=32730 global_step=32730 loss=18.1, 77.3% complete INFO:tensorflow:local_step=32740 global_step=32740 loss=18.1, 77.4% complete INFO:tensorflow:local_step=32750 global_step=32750 loss=18.8, 77.4% complete INFO:tensorflow:local_step=32760 global_step=32760 loss=18.6, 77.4% complete INFO:tensorflow:local_step=32770 global_step=32770 loss=18.0, 77.4% complete INFO:tensorflow:local_step=32780 global_step=32780 loss=17.6, 77.5% complete INFO:tensorflow:local_step=32790 global_step=32790 loss=17.8, 77.5% complete INFO:tensorflow:local_step=32800 global_step=32800 loss=18.5, 77.5% complete INFO:tensorflow:local_step=32810 global_step=32810 loss=18.8, 77.5% complete INFO:tensorflow:local_step=32820 global_step=32820 loss=17.2, 77.6% complete INFO:tensorflow:local_step=32830 global_step=32830 loss=17.9, 77.6% complete INFO:tensorflow:local_step=32840 global_step=32840 loss=21.3, 77.6% complete INFO:tensorflow:local_step=32850 global_step=32850 loss=19.7, 77.6% complete INFO:tensorflow:local_step=32860 global_step=32860 loss=16.6, 77.6% complete INFO:tensorflow:local_step=32870 global_step=32870 loss=18.7, 77.7% complete INFO:tensorflow:local_step=32880 global_step=32880 loss=15.1, 77.7% complete INFO:tensorflow:local_step=32890 global_step=32890 loss=18.6, 77.7% complete INFO:tensorflow:local_step=32900 
global_step=32900 loss=19.1, 77.7% complete INFO:tensorflow:local_step=32910 global_step=32910 loss=18.8, 77.8% complete INFO:tensorflow:local_step=32920 global_step=32920 loss=17.9, 77.8% complete INFO:tensorflow:local_step=32930 global_step=32930 loss=18.3, 77.8% complete INFO:tensorflow:local_step=32940 global_step=32940 loss=18.0, 77.8% complete INFO:tensorflow:local_step=32950 global_step=32950 loss=298.9, 77.9% complete INFO:tensorflow:local_step=32960 global_step=32960 loss=269.3, 77.9% complete INFO:tensorflow:local_step=32970 global_step=32970 loss=21.4, 77.9% complete INFO:tensorflow:local_step=32980 global_step=32980 loss=19.5, 77.9% complete INFO:tensorflow:local_step=32990 global_step=32990 loss=18.5, 78.0% complete INFO:tensorflow:local_step=33000 global_step=33000 loss=17.7, 78.0% complete INFO:tensorflow:local_step=33010 global_step=33010 loss=19.0, 78.0% complete INFO:tensorflow:local_step=33020 global_step=33020 loss=18.9, 78.0% complete INFO:tensorflow:local_step=33030 global_step=33030 loss=18.2, 78.0% complete INFO:tensorflow:local_step=33040 global_step=33040 loss=19.3, 78.1% complete INFO:tensorflow:local_step=33050 global_step=33050 loss=19.1, 78.1% complete INFO:tensorflow:local_step=33060 global_step=33060 loss=18.1, 78.1% complete INFO:tensorflow:local_step=33070 global_step=33070 loss=18.0, 78.1% complete INFO:tensorflow:local_step=33080 global_step=33080 loss=18.6, 78.2% complete INFO:tensorflow:local_step=33090 global_step=33090 loss=22.3, 78.2% complete INFO:tensorflow:local_step=33100 global_step=33100 loss=19.1, 78.2% complete INFO:tensorflow:local_step=33110 global_step=33110 loss=20.1, 78.2% complete INFO:tensorflow:local_step=33120 global_step=33120 loss=15.3, 78.3% complete INFO:tensorflow:local_step=33130 global_step=33130 loss=18.4, 78.3% complete INFO:tensorflow:local_step=33140 global_step=33140 loss=18.5, 78.3% complete INFO:tensorflow:local_step=33150 global_step=33150 loss=18.1, 78.3% complete 
INFO:tensorflow:local_step=33160 global_step=33160 loss=19.0, 78.4% complete INFO:tensorflow:local_step=33170 global_step=33170 loss=18.9, 78.4% complete INFO:tensorflow:local_step=33180 global_step=33180 loss=18.4, 78.4% complete INFO:tensorflow:local_step=33190 global_step=33190 loss=18.6, 78.4% complete INFO:tensorflow:local_step=33200 global_step=33200 loss=17.9, 78.4% complete INFO:tensorflow:local_step=33210 global_step=33210 loss=19.6, 78.5% complete INFO:tensorflow:local_step=33220 global_step=33220 loss=19.5, 78.5% complete INFO:tensorflow:local_step=33230 global_step=33230 loss=18.8, 78.5% complete INFO:tensorflow:local_step=33240 global_step=33240 loss=18.7, 78.5% complete INFO:tensorflow:local_step=33250 global_step=33250 loss=18.5, 78.6% complete INFO:tensorflow:local_step=33260 global_step=33260 loss=18.0, 78.6% complete INFO:tensorflow:local_step=33270 global_step=33270 loss=18.8, 78.6% complete INFO:tensorflow:local_step=33280 global_step=33280 loss=18.5, 78.6% complete INFO:tensorflow:local_step=33290 global_step=33290 loss=18.4, 78.7% complete INFO:tensorflow:local_step=33300 global_step=33300 loss=18.6, 78.7% complete INFO:tensorflow:local_step=33310 global_step=33310 loss=17.8, 78.7% complete INFO:tensorflow:local_step=33320 global_step=33320 loss=19.1, 78.7% complete INFO:tensorflow:local_step=33330 global_step=33330 loss=18.8, 78.8% complete INFO:tensorflow:local_step=33340 global_step=33340 loss=17.9, 78.8% complete INFO:tensorflow:local_step=33350 global_step=33350 loss=17.9, 78.8% complete INFO:tensorflow:local_step=33360 global_step=33360 loss=18.1, 78.8% complete INFO:tensorflow:local_step=33370 global_step=33370 loss=17.8, 78.9% complete INFO:tensorflow:local_step=33380 global_step=33380 loss=17.7, 78.9% complete INFO:tensorflow:local_step=33390 global_step=33390 loss=18.3, 78.9% complete INFO:tensorflow:local_step=33400 global_step=33400 loss=18.9, 78.9% complete INFO:tensorflow:local_step=33410 global_step=33410 loss=18.4, 78.9% 
complete INFO:tensorflow:local_step=33420 global_step=33420 loss=17.9, 79.0% complete INFO:tensorflow:local_step=33430 global_step=33430 loss=17.8, 79.0% complete INFO:tensorflow:local_step=33440 global_step=33440 loss=15.6, 79.0% complete INFO:tensorflow:local_step=33450 global_step=33450 loss=18.1, 79.0% complete INFO:tensorflow:local_step=33460 global_step=33460 loss=17.6, 79.1% complete INFO:tensorflow:local_step=33470 global_step=33470 loss=18.4, 79.1% complete INFO:tensorflow:local_step=33480 global_step=33480 loss=18.6, 79.1% complete INFO:tensorflow:local_step=33490 global_step=33490 loss=18.2, 79.1% complete INFO:tensorflow:local_step=33500 global_step=33500 loss=17.8, 79.2% complete INFO:tensorflow:local_step=33510 global_step=33510 loss=19.0, 79.2% complete INFO:tensorflow:local_step=33520 global_step=33520 loss=18.5, 79.2% complete INFO:tensorflow:local_step=33530 global_step=33530 loss=18.1, 79.2% complete INFO:tensorflow:local_step=33540 global_step=33540 loss=18.5, 79.3% complete INFO:tensorflow:local_step=33550 global_step=33550 loss=18.0, 79.3% complete INFO:tensorflow:local_step=33560 global_step=33560 loss=19.4, 79.3% complete INFO:tensorflow:local_step=33570 global_step=33570 loss=18.5, 79.3% complete INFO:tensorflow:local_step=33580 global_step=33580 loss=18.4, 79.3% complete INFO:tensorflow:local_step=33590 global_step=33590 loss=18.5, 79.4% complete INFO:tensorflow:local_step=33600 global_step=33600 loss=15.4, 79.4% complete INFO:tensorflow:local_step=33610 global_step=33610 loss=281.1, 79.4% complete INFO:tensorflow:local_step=33620 global_step=33620 loss=14.7, 79.4% complete INFO:tensorflow:local_step=33630 global_step=33630 loss=19.2, 79.5% complete INFO:tensorflow:local_step=33640 global_step=33640 loss=21.0, 79.5% complete INFO:tensorflow:local_step=33650 global_step=33650 loss=18.9, 79.5% complete INFO:tensorflow:local_step=33660 global_step=33660 loss=17.9, 79.5% complete INFO:tensorflow:local_step=33670 global_step=33670 loss=19.1, 
79.6% complete INFO:tensorflow:local_step=33680 global_step=33680 loss=17.8, 79.6% complete INFO:tensorflow:local_step=33690 global_step=33690 loss=17.9, 79.6% complete INFO:tensorflow:local_step=33700 global_step=33700 loss=19.2, 79.6% complete INFO:tensorflow:local_step=33710 global_step=33710 loss=18.0, 79.7% complete INFO:tensorflow:local_step=33720 global_step=33720 loss=18.6, 79.7% complete INFO:tensorflow:local_step=33730 global_step=33730 loss=19.7, 79.7% complete INFO:tensorflow:local_step=33740 global_step=33740 loss=18.5, 79.7% complete INFO:tensorflow:local_step=33750 global_step=33750 loss=18.3, 79.7% complete INFO:tensorflow:local_step=33760 global_step=33760 loss=18.8, 79.8% complete INFO:tensorflow:local_step=33770 global_step=33770 loss=18.8, 79.8% complete INFO:tensorflow:local_step=33780 global_step=33780 loss=18.6, 79.8% complete INFO:tensorflow:local_step=33790 global_step=33790 loss=18.7, 79.8% complete INFO:tensorflow:local_step=33800 global_step=33800 loss=18.6, 79.9% complete INFO:tensorflow:local_step=33810 global_step=33810 loss=18.4, 79.9% complete INFO:tensorflow:local_step=33820 global_step=33820 loss=17.9, 79.9% complete INFO:tensorflow:local_step=33830 global_step=33830 loss=20.9, 79.9% complete INFO:tensorflow:local_step=33840 global_step=33840 loss=18.1, 80.0% complete INFO:tensorflow:local_step=33850 global_step=33850 loss=17.9, 80.0% complete INFO:tensorflow:local_step=33860 global_step=33860 loss=18.1, 80.0% complete INFO:tensorflow:local_step=33870 global_step=33870 loss=17.8, 80.0% complete INFO:tensorflow:local_step=33880 global_step=33880 loss=18.2, 80.1% complete INFO:tensorflow:local_step=33890 global_step=33890 loss=18.4, 80.1% complete INFO:tensorflow:local_step=33900 global_step=33900 loss=17.9, 80.1% complete INFO:tensorflow:local_step=33910 global_step=33910 loss=19.2, 80.1% complete INFO:tensorflow:local_step=33920 global_step=33920 loss=18.2, 80.2% complete INFO:tensorflow:local_step=33930 global_step=33930 
loss=17.8, 80.2% complete INFO:tensorflow:local_step=33940 global_step=33940 loss=18.1, 80.2% complete INFO:tensorflow:local_step=33950 global_step=33950 loss=18.9, 80.2% complete INFO:tensorflow:local_step=33960 global_step=33960 loss=18.3, 80.2% complete INFO:tensorflow:local_step=33970 global_step=33970 loss=18.4, 80.3% complete INFO:tensorflow:local_step=33980 global_step=33980 loss=18.0, 80.3% complete INFO:tensorflow:local_step=33990 global_step=33990 loss=17.9, 80.3% complete INFO:tensorflow:local_step=34000 global_step=34000 loss=18.2, 80.3% complete INFO:tensorflow:local_step=34010 global_step=34010 loss=18.4, 80.4% complete INFO:tensorflow:local_step=34020 global_step=34020 loss=18.8, 80.4% complete INFO:tensorflow:local_step=34030 global_step=34030 loss=17.6, 80.4% complete INFO:tensorflow:local_step=34040 global_step=34040 loss=17.5, 80.4% complete INFO:tensorflow:local_step=34050 global_step=34050 loss=329.6, 80.5% complete INFO:tensorflow:local_step=34060 global_step=34060 loss=18.8, 80.5% complete INFO:tensorflow:local_step=34070 global_step=34070 loss=18.2, 80.5% complete INFO:tensorflow:local_step=34080 global_step=34080 loss=17.9, 80.5% complete INFO:tensorflow:local_step=34090 global_step=34090 loss=17.8, 80.6% complete INFO:tensorflow:local_step=34100 global_step=34100 loss=18.1, 80.6% complete INFO:tensorflow:local_step=34110 global_step=34110 loss=18.7, 80.6% complete INFO:tensorflow:local_step=34120 global_step=34120 loss=18.1, 80.6% complete INFO:tensorflow:local_step=34130 global_step=34130 loss=18.2, 80.6% complete INFO:tensorflow:local_step=34140 global_step=34140 loss=18.4, 80.7% complete INFO:tensorflow:local_step=34150 global_step=34150 loss=18.8, 80.7% complete INFO:tensorflow:local_step=34160 global_step=34160 loss=17.7, 80.7% complete INFO:tensorflow:local_step=34170 global_step=34170 loss=17.5, 80.7% complete INFO:tensorflow:local_step=34180 global_step=34180 loss=17.9, 80.8% complete INFO:tensorflow:local_step=34190 
global_step=34190 loss=17.8, 80.8% complete INFO:tensorflow:local_step=34200 global_step=34200 loss=18.3, 80.8% complete INFO:tensorflow:local_step=34210 global_step=34210 loss=17.5, 80.8% complete INFO:tensorflow:local_step=34220 global_step=34220 loss=17.9, 80.9% complete INFO:tensorflow:local_step=34230 global_step=34230 loss=17.9, 80.9% complete INFO:tensorflow:local_step=34240 global_step=34240 loss=18.5, 80.9% complete INFO:tensorflow:local_step=34250 global_step=34250 loss=18.3, 80.9% complete INFO:tensorflow:local_step=34260 global_step=34260 loss=18.4, 81.0% complete INFO:tensorflow:local_step=34270 global_step=34270 loss=17.8, 81.0% complete INFO:tensorflow:local_step=34280 global_step=34280 loss=18.4, 81.0% complete INFO:tensorflow:local_step=34290 global_step=34290 loss=18.9, 81.0% complete INFO:tensorflow:local_step=34300 global_step=34300 loss=18.4, 81.0% complete INFO:tensorflow:local_step=34310 global_step=34310 loss=18.0, 81.1% complete INFO:tensorflow:local_step=34320 global_step=34320 loss=19.1, 81.1% complete INFO:tensorflow:local_step=34330 global_step=34330 loss=18.5, 81.1% complete INFO:tensorflow:local_step=34340 global_step=34340 loss=18.1, 81.1% complete INFO:tensorflow:local_step=34350 global_step=34350 loss=18.2, 81.2% complete INFO:tensorflow:local_step=34360 global_step=34360 loss=18.1, 81.2% complete INFO:tensorflow:local_step=34370 global_step=34370 loss=17.2, 81.2% complete INFO:tensorflow:local_step=34380 global_step=34380 loss=17.7, 81.2% complete INFO:tensorflow:local_step=34390 global_step=34390 loss=20.6, 81.3% complete INFO:tensorflow:local_step=34400 global_step=34400 loss=18.7, 81.3% complete INFO:tensorflow:local_step=34410 global_step=34410 loss=18.2, 81.3% complete INFO:tensorflow:local_step=34420 global_step=34420 loss=18.3, 81.3% complete INFO:tensorflow:local_step=34430 global_step=34430 loss=18.5, 81.4% complete INFO:tensorflow:local_step=34440 global_step=34440 loss=18.1, 81.4% complete 
INFO:tensorflow:local_step=34450 global_step=34450 loss=17.7, 81.4% complete INFO:tensorflow:local_step=34460 global_step=34460 loss=17.9, 81.4% complete INFO:tensorflow:local_step=34470 global_step=34470 loss=17.9, 81.5% complete INFO:tensorflow:local_step=34480 global_step=34480 loss=18.7, 81.5% complete INFO:tensorflow:local_step=34490 global_step=34490 loss=17.9, 81.5% complete INFO:tensorflow:local_step=34500 global_step=34500 loss=17.3, 81.5% complete INFO:tensorflow:local_step=34510 global_step=34510 loss=18.0, 81.5% complete INFO:tensorflow:local_step=34520 global_step=34520 loss=18.6, 81.6% complete INFO:tensorflow:local_step=34530 global_step=34530 loss=15.8, 81.6% complete INFO:tensorflow:local_step=34540 global_step=34540 loss=19.6, 81.6% complete INFO:tensorflow:local_step=34550 global_step=34550 loss=18.0, 81.6% complete INFO:tensorflow:local_step=34560 global_step=34560 loss=19.0, 81.7% complete INFO:tensorflow:local_step=34570 global_step=34570 loss=17.9, 81.7% complete INFO:tensorflow:local_step=34580 global_step=34580 loss=17.8, 81.7% complete INFO:tensorflow:local_step=34590 global_step=34590 loss=17.7, 81.7% complete INFO:tensorflow:local_step=34600 global_step=34600 loss=18.3, 81.8% complete INFO:tensorflow:local_step=34610 global_step=34610 loss=17.7, 81.8% complete INFO:tensorflow:local_step=34620 global_step=34620 loss=17.2, 81.8% complete INFO:tensorflow:local_step=34630 global_step=34630 loss=18.7, 81.8% complete INFO:tensorflow:local_step=34640 global_step=34640 loss=19.0, 81.9% complete INFO:tensorflow:local_step=34650 global_step=34650 loss=19.3, 81.9% complete INFO:tensorflow:local_step=34660 global_step=34660 loss=18.2, 81.9% complete INFO:tensorflow:local_step=34670 global_step=34670 loss=20.4, 81.9% complete INFO:tensorflow:local_step=34680 global_step=34680 loss=18.2, 81.9% complete INFO:tensorflow:local_step=34690 global_step=34690 loss=19.3, 82.0% complete INFO:tensorflow:local_step=34700 global_step=34700 loss=18.6, 82.0% 
complete INFO:tensorflow:local_step=34710 global_step=34710 loss=18.8, 82.0% complete INFO:tensorflow:local_step=34720 global_step=34720 loss=18.5, 82.0% complete INFO:tensorflow:local_step=34730 global_step=34730 loss=18.3, 82.1% complete INFO:tensorflow:local_step=34740 global_step=34740 loss=17.3, 82.1% complete INFO:tensorflow:local_step=34750 global_step=34750 loss=19.0, 82.1% complete INFO:tensorflow:local_step=34760 global_step=34760 loss=18.8, 82.1% complete INFO:tensorflow:local_step=34770 global_step=34770 loss=19.2, 82.2% complete INFO:tensorflow:local_step=34780 global_step=34780 loss=18.5, 82.2% complete INFO:tensorflow:local_step=34790 global_step=34790 loss=18.2, 82.2% complete INFO:tensorflow:local_step=34800 global_step=34800 loss=18.8, 82.2% complete INFO:tensorflow:local_step=34810 global_step=34810 loss=17.6, 82.3% complete INFO:tensorflow:local_step=34820 global_step=34820 loss=18.4, 82.3% complete INFO:tensorflow:local_step=34830 global_step=34830 loss=17.8, 82.3% complete INFO:tensorflow:local_step=34840 global_step=34840 loss=18.5, 82.3% complete INFO:tensorflow:local_step=34850 global_step=34850 loss=19.3, 82.3% complete INFO:tensorflow:local_step=34860 global_step=34860 loss=18.5, 82.4% complete INFO:tensorflow:local_step=34870 global_step=34870 loss=17.5, 82.4% complete INFO:tensorflow:local_step=34880 global_step=34880 loss=17.8, 82.4% complete INFO:tensorflow:local_step=34890 global_step=34890 loss=299.0, 82.4% complete INFO:tensorflow:local_step=34900 global_step=34900 loss=18.5, 82.5% complete INFO:tensorflow:local_step=34910 global_step=34910 loss=18.9, 82.5% complete INFO:tensorflow:local_step=34920 global_step=34920 loss=17.9, 82.5% complete INFO:tensorflow:local_step=34930 global_step=34930 loss=19.3, 82.5% complete INFO:tensorflow:local_step=34940 global_step=34940 loss=18.4, 82.6% complete INFO:tensorflow:local_step=34950 global_step=34950 loss=18.0, 82.6% complete INFO:tensorflow:local_step=34960 global_step=34960 loss=19.7, 
82.6% complete INFO:tensorflow:local_step=34970 global_step=34970 loss=19.2, 82.6% complete INFO:tensorflow:local_step=34980 global_step=34980 loss=13.1, 82.7% complete INFO:tensorflow:local_step=34990 global_step=34990 loss=18.8, 82.7% complete INFO:tensorflow:local_step=35000 global_step=35000 loss=18.0, 82.7% complete INFO:tensorflow:local_step=35010 global_step=35010 loss=282.9, 82.7% complete INFO:tensorflow:local_step=35020 global_step=35020 loss=18.7, 82.8% complete INFO:tensorflow:local_step=35030 global_step=35030 loss=18.2, 82.8% complete INFO:tensorflow:local_step=35040 global_step=35040 loss=17.9, 82.8% complete INFO:tensorflow:local_step=35050 global_step=35050 loss=18.1, 82.8% complete INFO:tensorflow:local_step=35060 global_step=35060 loss=18.2, 82.8% complete INFO:tensorflow:local_step=35070 global_step=35070 loss=18.7, 82.9% complete INFO:tensorflow:local_step=35080 global_step=35080 loss=17.6, 82.9% complete INFO:tensorflow:local_step=35090 global_step=35090 loss=18.2, 82.9% complete INFO:tensorflow:local_step=35100 global_step=35100 loss=18.3, 82.9% complete INFO:tensorflow:local_step=35110 global_step=35110 loss=15.7, 83.0% complete INFO:tensorflow:local_step=35120 global_step=35120 loss=17.9, 83.0% complete INFO:tensorflow:local_step=35130 global_step=35130 loss=16.5, 83.0% complete INFO:tensorflow:local_step=35140 global_step=35140 loss=21.4, 83.0% complete INFO:tensorflow:local_step=35150 global_step=35150 loss=18.3, 83.1% complete INFO:tensorflow:local_step=35160 global_step=35160 loss=18.0, 83.1% complete INFO:tensorflow:local_step=35170 global_step=35170 loss=18.0, 83.1% complete INFO:tensorflow:local_step=35180 global_step=35180 loss=18.4, 83.1% complete INFO:tensorflow:local_step=35190 global_step=35190 loss=18.1, 83.2% complete INFO:tensorflow:local_step=35200 global_step=35200 loss=21.5, 83.2% complete INFO:tensorflow:local_step=35210 global_step=35210 loss=18.5, 83.2% complete INFO:tensorflow:local_step=35220 global_step=35220 
loss=18.0, 83.2% complete INFO:tensorflow:local_step=35230 global_step=35230 loss=18.2, 83.2% complete INFO:tensorflow:local_step=35240 global_step=35240 loss=16.5, 83.3% complete INFO:tensorflow:local_step=35250 global_step=35250 loss=19.2, 83.3% complete INFO:tensorflow:local_step=35260 global_step=35260 loss=18.8, 83.3% complete INFO:tensorflow:local_step=35270 global_step=35270 loss=18.5, 83.3% complete INFO:tensorflow:local_step=35280 global_step=35280 loss=17.5, 83.4% complete INFO:tensorflow:local_step=35290 global_step=35290 loss=18.1, 83.4% complete INFO:tensorflow:local_step=35300 global_step=35300 loss=18.7, 83.4% complete INFO:tensorflow:local_step=35310 global_step=35310 loss=19.0, 83.4% complete INFO:tensorflow:local_step=35320 global_step=35320 loss=18.2, 83.5% complete INFO:tensorflow:local_step=35330 global_step=35330 loss=18.3, 83.5% complete INFO:tensorflow:local_step=35340 global_step=35340 loss=17.9, 83.5% complete INFO:tensorflow:local_step=35350 global_step=35350 loss=19.1, 83.5% complete INFO:tensorflow:local_step=35360 global_step=35360 loss=17.8, 83.6% complete INFO:tensorflow:local_step=35370 global_step=35370 loss=18.6, 83.6% complete INFO:tensorflow:local_step=35380 global_step=35380 loss=18.7, 83.6% complete INFO:tensorflow:local_step=35390 global_step=35390 loss=18.6, 83.6% complete INFO:tensorflow:local_step=35400 global_step=35400 loss=14.7, 83.6% complete INFO:tensorflow:local_step=35410 global_step=35410 loss=17.5, 83.7% complete INFO:tensorflow:local_step=35420 global_step=35420 loss=18.6, 83.7% complete INFO:tensorflow:local_step=35430 global_step=35430 loss=18.5, 83.7% complete INFO:tensorflow:local_step=35440 global_step=35440 loss=18.5, 83.7% complete INFO:tensorflow:local_step=35450 global_step=35450 loss=18.8, 83.8% complete INFO:tensorflow:local_step=35460 global_step=35460 loss=18.6, 83.8% complete INFO:tensorflow:local_step=35470 global_step=35470 loss=269.9, 83.8% complete INFO:tensorflow:local_step=35480 
[training log condensed: local_step/global_step advanced from 35480 to 40880 (83.8% → 96.6% complete) at ~126.7 global_step/sec, with a summary recorded at step 37662; loss held mostly in the 17–19 range, with occasional outlier batches spiking to ~150–330]
complete INFO:tensorflow:local_step=40890 global_step=40890 loss=19.0, 96.6% complete INFO:tensorflow:local_step=40900 global_step=40900 loss=16.7, 96.6% complete INFO:tensorflow:local_step=40910 global_step=40910 loss=18.9, 96.7% complete INFO:tensorflow:local_step=40920 global_step=40920 loss=18.3, 96.7% complete INFO:tensorflow:local_step=40930 global_step=40930 loss=17.9, 96.7% complete INFO:tensorflow:local_step=40940 global_step=40940 loss=18.4, 96.7% complete INFO:tensorflow:local_step=40950 global_step=40950 loss=18.3, 96.8% complete INFO:tensorflow:local_step=40960 global_step=40960 loss=17.7, 96.8% complete INFO:tensorflow:local_step=40970 global_step=40970 loss=18.0, 96.8% complete INFO:tensorflow:local_step=40980 global_step=40980 loss=18.2, 96.8% complete INFO:tensorflow:local_step=40990 global_step=40990 loss=18.5, 96.9% complete INFO:tensorflow:local_step=41000 global_step=41000 loss=18.8, 96.9% complete INFO:tensorflow:local_step=41010 global_step=41010 loss=18.6, 96.9% complete INFO:tensorflow:local_step=41020 global_step=41020 loss=18.7, 96.9% complete INFO:tensorflow:local_step=41030 global_step=41030 loss=18.2, 97.0% complete INFO:tensorflow:local_step=41040 global_step=41040 loss=17.9, 97.0% complete INFO:tensorflow:local_step=41050 global_step=41050 loss=18.2, 97.0% complete INFO:tensorflow:local_step=41060 global_step=41060 loss=18.2, 97.0% complete INFO:tensorflow:local_step=41070 global_step=41070 loss=17.9, 97.0% complete INFO:tensorflow:local_step=41080 global_step=41080 loss=19.2, 97.1% complete INFO:tensorflow:local_step=41090 global_step=41090 loss=17.7, 97.1% complete INFO:tensorflow:local_step=41100 global_step=41100 loss=18.0, 97.1% complete INFO:tensorflow:local_step=41110 global_step=41110 loss=17.9, 97.1% complete INFO:tensorflow:local_step=41120 global_step=41120 loss=17.8, 97.2% complete INFO:tensorflow:local_step=41130 global_step=41130 loss=19.0, 97.2% complete INFO:tensorflow:local_step=41140 global_step=41140 loss=17.6, 
97.2% complete INFO:tensorflow:local_step=41150 global_step=41150 loss=18.0, 97.2% complete INFO:tensorflow:local_step=41160 global_step=41160 loss=17.9, 97.3% complete INFO:tensorflow:local_step=41170 global_step=41170 loss=18.7, 97.3% complete INFO:tensorflow:local_step=41180 global_step=41180 loss=15.4, 97.3% complete INFO:tensorflow:local_step=41190 global_step=41190 loss=18.1, 97.3% complete INFO:tensorflow:local_step=41200 global_step=41200 loss=18.1, 97.4% complete INFO:tensorflow:local_step=41210 global_step=41210 loss=19.3, 97.4% complete INFO:tensorflow:local_step=41220 global_step=41220 loss=19.0, 97.4% complete INFO:tensorflow:local_step=41230 global_step=41230 loss=260.4, 97.4% complete INFO:tensorflow:local_step=41240 global_step=41240 loss=18.8, 97.4% complete INFO:tensorflow:local_step=41250 global_step=41250 loss=17.8, 97.5% complete INFO:tensorflow:local_step=41260 global_step=41260 loss=18.3, 97.5% complete INFO:tensorflow:local_step=41270 global_step=41270 loss=17.3, 97.5% complete INFO:tensorflow:local_step=41280 global_step=41280 loss=18.2, 97.5% complete INFO:tensorflow:local_step=41290 global_step=41290 loss=19.4, 97.6% complete INFO:tensorflow:local_step=41300 global_step=41300 loss=19.2, 97.6% complete INFO:tensorflow:local_step=41310 global_step=41310 loss=18.5, 97.6% complete INFO:tensorflow:local_step=41320 global_step=41320 loss=247.6, 97.6% complete INFO:tensorflow:local_step=41330 global_step=41330 loss=19.8, 97.7% complete INFO:tensorflow:local_step=41340 global_step=41340 loss=18.0, 97.7% complete INFO:tensorflow:local_step=41350 global_step=41350 loss=17.8, 97.7% complete INFO:tensorflow:local_step=41360 global_step=41360 loss=17.8, 97.7% complete INFO:tensorflow:local_step=41370 global_step=41370 loss=17.6, 97.8% complete INFO:tensorflow:local_step=41380 global_step=41380 loss=18.7, 97.8% complete INFO:tensorflow:local_step=41390 global_step=41390 loss=17.5, 97.8% complete INFO:tensorflow:local_step=41400 global_step=41400 
loss=18.0, 97.8% complete INFO:tensorflow:local_step=41410 global_step=41410 loss=19.0, 97.8% complete INFO:tensorflow:local_step=41420 global_step=41420 loss=18.6, 97.9% complete INFO:tensorflow:local_step=41430 global_step=41430 loss=18.2, 97.9% complete INFO:tensorflow:local_step=41440 global_step=41440 loss=18.9, 97.9% complete INFO:tensorflow:local_step=41450 global_step=41450 loss=18.4, 97.9% complete INFO:tensorflow:local_step=41460 global_step=41460 loss=18.3, 98.0% complete INFO:tensorflow:local_step=41470 global_step=41470 loss=18.7, 98.0% complete INFO:tensorflow:local_step=41480 global_step=41480 loss=19.0, 98.0% complete INFO:tensorflow:local_step=41490 global_step=41490 loss=18.1, 98.0% complete INFO:tensorflow:local_step=41500 global_step=41500 loss=19.1, 98.1% complete INFO:tensorflow:local_step=41510 global_step=41510 loss=18.5, 98.1% complete INFO:tensorflow:local_step=41520 global_step=41520 loss=18.3, 98.1% complete INFO:tensorflow:local_step=41530 global_step=41530 loss=18.9, 98.1% complete INFO:tensorflow:local_step=41540 global_step=41540 loss=18.3, 98.2% complete INFO:tensorflow:local_step=41550 global_step=41550 loss=18.0, 98.2% complete INFO:tensorflow:local_step=41560 global_step=41560 loss=18.9, 98.2% complete INFO:tensorflow:local_step=41570 global_step=41570 loss=18.1, 98.2% complete INFO:tensorflow:local_step=41580 global_step=41580 loss=18.3, 98.3% complete INFO:tensorflow:local_step=41590 global_step=41590 loss=273.2, 98.3% complete INFO:tensorflow:local_step=41600 global_step=41600 loss=18.5, 98.3% complete INFO:tensorflow:local_step=41610 global_step=41610 loss=15.3, 98.3% complete INFO:tensorflow:local_step=41620 global_step=41620 loss=19.3, 98.3% complete INFO:tensorflow:local_step=41630 global_step=41630 loss=18.0, 98.4% complete INFO:tensorflow:local_step=41640 global_step=41640 loss=19.5, 98.4% complete INFO:tensorflow:local_step=41650 global_step=41650 loss=18.3, 98.4% complete INFO:tensorflow:local_step=41660 
global_step=41660 loss=17.3, 98.4% complete INFO:tensorflow:local_step=41670 global_step=41670 loss=19.3, 98.5% complete INFO:tensorflow:local_step=41680 global_step=41680 loss=17.9, 98.5% complete INFO:tensorflow:local_step=41690 global_step=41690 loss=18.4, 98.5% complete INFO:tensorflow:local_step=41700 global_step=41700 loss=18.0, 98.5% complete INFO:tensorflow:local_step=41710 global_step=41710 loss=18.9, 98.6% complete INFO:tensorflow:local_step=41720 global_step=41720 loss=17.4, 98.6% complete INFO:tensorflow:local_step=41730 global_step=41730 loss=17.9, 98.6% complete INFO:tensorflow:local_step=41740 global_step=41740 loss=18.7, 98.6% complete INFO:tensorflow:local_step=41750 global_step=41750 loss=18.2, 98.7% complete INFO:tensorflow:local_step=41760 global_step=41760 loss=19.3, 98.7% complete INFO:tensorflow:local_step=41770 global_step=41770 loss=18.2, 98.7% complete INFO:tensorflow:local_step=41780 global_step=41780 loss=18.4, 98.7% complete INFO:tensorflow:local_step=41790 global_step=41790 loss=18.5, 98.7% complete INFO:tensorflow:local_step=41800 global_step=41800 loss=18.6, 98.8% complete INFO:tensorflow:local_step=41810 global_step=41810 loss=18.5, 98.8% complete INFO:tensorflow:local_step=41820 global_step=41820 loss=18.5, 98.8% complete INFO:tensorflow:local_step=41830 global_step=41830 loss=270.7, 98.8% complete INFO:tensorflow:local_step=41840 global_step=41840 loss=18.0, 98.9% complete INFO:tensorflow:local_step=41850 global_step=41850 loss=19.1, 98.9% complete INFO:tensorflow:local_step=41860 global_step=41860 loss=19.1, 98.9% complete INFO:tensorflow:local_step=41870 global_step=41870 loss=240.9, 98.9% complete INFO:tensorflow:local_step=41880 global_step=41880 loss=19.4, 99.0% complete INFO:tensorflow:local_step=41890 global_step=41890 loss=18.6, 99.0% complete INFO:tensorflow:local_step=41900 global_step=41900 loss=17.2, 99.0% complete INFO:tensorflow:local_step=41910 global_step=41910 loss=18.1, 99.0% complete 
INFO:tensorflow:local_step=41920 global_step=41920 loss=17.7, 99.1% complete INFO:tensorflow:local_step=41930 global_step=41930 loss=19.9, 99.1% complete INFO:tensorflow:local_step=41940 global_step=41940 loss=17.6, 99.1% complete INFO:tensorflow:local_step=41950 global_step=41950 loss=21.5, 99.1% complete INFO:tensorflow:local_step=41960 global_step=41960 loss=18.5, 99.1% complete INFO:tensorflow:local_step=41970 global_step=41970 loss=19.3, 99.2% complete INFO:tensorflow:local_step=41980 global_step=41980 loss=18.4, 99.2% complete INFO:tensorflow:local_step=41990 global_step=41990 loss=17.8, 99.2% complete INFO:tensorflow:local_step=42000 global_step=42000 loss=18.1, 99.2% complete INFO:tensorflow:local_step=42010 global_step=42010 loss=18.6, 99.3% complete INFO:tensorflow:local_step=42020 global_step=42020 loss=17.9, 99.3% complete INFO:tensorflow:local_step=42030 global_step=42030 loss=18.3, 99.3% complete INFO:tensorflow:local_step=42040 global_step=42040 loss=18.1, 99.3% complete INFO:tensorflow:local_step=42050 global_step=42050 loss=17.6, 99.4% complete INFO:tensorflow:local_step=42060 global_step=42060 loss=19.1, 99.4% complete INFO:tensorflow:local_step=42070 global_step=42070 loss=17.6, 99.4% complete INFO:tensorflow:local_step=42080 global_step=42080 loss=18.6, 99.4% complete INFO:tensorflow:local_step=42090 global_step=42090 loss=19.0, 99.5% complete INFO:tensorflow:local_step=42100 global_step=42100 loss=18.5, 99.5% complete INFO:tensorflow:local_step=42110 global_step=42110 loss=19.0, 99.5% complete INFO:tensorflow:local_step=42120 global_step=42120 loss=17.8, 99.5% complete INFO:tensorflow:local_step=42130 global_step=42130 loss=18.7, 99.6% complete INFO:tensorflow:local_step=42140 global_step=42140 loss=18.7, 99.6% complete INFO:tensorflow:local_step=42150 global_step=42150 loss=17.8, 99.6% complete INFO:tensorflow:local_step=42160 global_step=42160 loss=15.4, 99.6% complete INFO:tensorflow:local_step=42170 global_step=42170 loss=18.5, 99.6% 
complete INFO:tensorflow:local_step=42180 global_step=42180 loss=18.8, 99.7% complete INFO:tensorflow:local_step=42190 global_step=42190 loss=18.5, 99.7% complete INFO:tensorflow:local_step=42200 global_step=42200 loss=18.0, 99.7% complete INFO:tensorflow:local_step=42210 global_step=42210 loss=18.9, 99.7% complete INFO:tensorflow:local_step=42220 global_step=42220 loss=18.9, 99.8% complete INFO:tensorflow:local_step=42230 global_step=42230 loss=17.8, 99.8% complete INFO:tensorflow:local_step=42240 global_step=42240 loss=18.8, 99.8% complete INFO:tensorflow:local_step=42250 global_step=42250 loss=17.9, 99.8% complete INFO:tensorflow:local_step=42260 global_step=42260 loss=17.9, 99.9% complete INFO:tensorflow:local_step=42270 global_step=42270 loss=18.0, 99.9% complete INFO:tensorflow:local_step=42280 global_step=42280 loss=21.3, 99.9% complete INFO:tensorflow:local_step=42290 global_step=42290 loss=17.8, 99.9% complete INFO:tensorflow:local_step=42300 global_step=42300 loss=18.9, 100.0% complete INFO:tensorflow:local_step=42310 global_step=42310 loss=18.2, 100.0% complete INFO:tensorflow:local_step=42320 global_step=42320 loss=18.7, 100.0% complete WARNING:tensorflow:Issue encountered when serializing global_step. Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore. 'Tensor' object has no attribute 'to_proto'
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Checking the contents of the 'vec' directory, which should contain checkpoints of the model plus tsv files for the row and column embeddings.
os.listdir(vec_path)
_____no_output_____
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Converting tsv to bin:
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={vec_path}vocab.txt --output={vec_path}vecs.bin \
  {vec_path}row_embedding.tsv \
  {vec_path}col_embedding.tsv

%ls {vec_path}
checkpoint col_embedding.tsv events.out.tfevents.1539004459.46972dad0a54 graph.pbtxt model.ckpt-0.data-00000-of-00001 model.ckpt-0.index model.ckpt-0.meta model.ckpt-42320.data-00000-of-00001 model.ckpt-42320.index model.ckpt-42320.meta row_embedding.tsv vecs.bin vocab.txt
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Read stored binary embeddings and inspect them
import importlib.util

# Load the Vecs helper module directly from the tutorial's swivel scripts
spec = importlib.util.spec_from_file_location("vecs", "/content/tutorial/scripts/swivel/vecs.py")
m = importlib.util.module_from_spec(spec)
spec.loader.exec_module(m)

shakespeare_vecs = m.Vecs(vec_path + 'vocab.txt', vec_path + 'vecs.bin')
Opening vector with expected size 23552 from file /content/tutorial/lit/vec/vocab.txt vocab size 23552 (unique 23552) read rows
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Basic method to print the k nearest neighbors for a given word
def k_neighbors(vec, word, k=10):
    res = vec.neighbors(word)
    if not res:
        print('%s is not in the vocabulary, try e.g. %s' % (word, vec.random_word_in_vocab()))
    else:
        for word, sim in res[:k]:
            print('%0.4f: %s' % (sim, word))

k_neighbors(shakespeare_vecs, 'strife')
k_neighbors(shakespeare_vecs, 'youth')
1.0000: youth
0.3436: tall,
0.3350: vanity,
0.2945: idleness.
0.2929: womb;
0.2847: tall
0.2823: suffering
0.2742: stillness
0.2671: flow'ring
0.2671: observation
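The `neighbors` lookup used above is essentially a cosine-similarity ranking over the embedding matrix. A minimal NumPy sketch of that idea (the tiny vocabulary and vectors below are made up for illustration; the real `Vecs` class loads them from `vocab.txt`/`vecs.bin`):

```python
import numpy as np

# Toy vocabulary and embedding matrix, made up for illustration only;
# the real Vecs class loads these from vocab.txt / vecs.bin.
vocab = ['youth', 'age', 'strife', 'peace']
emb = np.array([[1.0, 0.0],
                [0.9, 0.1],
                [0.0, 1.0],
                [0.1, 0.9]])
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize rows

def neighbors(word, k=3):
    # Rank the whole vocabulary by cosine similarity to the query word
    q = emb[vocab.index(word)]
    sims = emb @ q  # dot product of unit vectors = cosine similarity
    order = np.argsort(-sims)
    return [(vocab[i], float(sims[i])) for i in order[:k]]

print(neighbors('youth'))
```

As in the real output, the query word itself comes back first with similarity 1.0.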
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Load vecsigrafo from UMBC over WordNet
%ls
!wget https://zenodo.org/record/1446214/files/vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
%ls
!tar -xvzf vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
!rm vecsigrafo_umbc_tlgs_ls_f_6e_160d_row_embedding.tar.gz
umbc_wn_vec_path = '/content/tutorial/lit/vecsi_tlgs_wnscd_ls_f_6e_160d/'
_____no_output_____
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Extracting the vocabulary from the .tsv file:
with open(umbc_wn_vec_path + 'vocab.txt', 'w', encoding='utf_8') as f:
    with open(umbc_wn_vec_path + 'row_embedding.tsv', 'r', encoding='utf_8') as vec_lines:
        # The vocabulary is the first tab-separated field of each embedding row
        vocab = [line.split('\t')[0].strip() for line in vec_lines]
    for word in vocab:
        print(word, file=f)
_____no_output_____
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
Converting tsv to bin:
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={umbc_wn_vec_path}vocab.txt --output={umbc_wn_vec_path}vecs.bin \
  {umbc_wn_vec_path}row_embedding.tsv

%ls
umbc_wn_vecs = m.Vecs(umbc_wn_vec_path + 'vocab.txt', umbc_wn_vec_path + 'vecs.bin')
k_neighbors(umbc_wn_vecs, 'lem_California')
1.0000: lem_California
0.6301: lem_Central Valley
0.5959: lem_University of California
0.5542: lem_Southern California
0.5254: lem_Santa Cruz
0.5241: lem_Astro Aerospace
0.5168: lem_San Francisco Bay
0.5092: lem_San Diego County
0.5074: lem_Santa Barbara
0.5069: lem_Santa Rosa
MIT
06_shakespeare_exercise.ipynb
flaviomerenda/tutorial
# T81-558: Applications of Deep Neural Networks

**Module 4: Training for Tabular Data**

* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).

# Module 4 Material

* Part 4.1: Encoding a Feature Vector for Keras Deep Learning [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_1_feature_encode.ipynb)
* Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_2_multi_class.ipynb)
* **Part 4.3: Keras Regression for Deep Neural Networks with RMSE** [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_3_regression.ipynb)
* Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_4_backprop.ipynb)
* Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_04_5_rmse_logloss.ipynb)

# Google CoLab Instructions

The following code ensures that Google CoLab is running the correct version of TensorFlow.
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False
Note: not using Google CoLab
Apache-2.0
t81_558_class_04_3_regression.ipynb
akramsystems/t81_558_deep_learning
# Part 4.3: Keras Regression for Deep Neural Networks with RMSE

Regression results are evaluated differently than classification. Consider the following code that trains a neural network for regression on the data set **jh-simple-dataset.csv**.
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt

# Read the data set
df = pd.read_csv(
    "https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
    na_values=['NA', '?'])

# Generate dummies for job
df = pd.concat([df, pd.get_dummies(df['job'], prefix="job")], axis=1)
df.drop('job', axis=1, inplace=True)

# Generate dummies for area
df = pd.concat([df, pd.get_dummies(df['area'], prefix="area")], axis=1)
df.drop('area', axis=1, inplace=True)

# Generate dummies for product
df = pd.concat([df, pd.get_dummies(df['product'], prefix="product")], axis=1)
df.drop('product', axis=1, inplace=True)

# Fill missing values for income with the median
med = df['income'].median()
df['income'] = df['income'].fillna(med)

# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])

# Convert to numpy - regression target is age
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values

# Create train/test split
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.25, random_state=42)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping

# Build the neural network
model = Sequential()
model.add(Dense(25, input_dim=x.shape[1], activation='relu'))  # Hidden 1
model.add(Dense(10, activation='relu'))  # Hidden 2
model.add(Dense(1))  # Output
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
                        verbose=1, mode='auto', restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=[monitor], verbose=2, epochs=1000)
Train on 1500 samples, validate on 500 samples
Epoch 1/1000
1500/1500 - 1s - loss: 1905.4454 - val_loss: 1628.1341
Epoch 2/1000
1500/1500 - 0s - loss: 1331.4213 - val_loss: 889.0575
Epoch 3/1000
1500/1500 - 0s - loss: 554.8426 - val_loss: 303.7261
...
Epoch 122/1000
1500/1500 - 0s - loss: 0.4456 - val_loss: 0.5509
Epoch 123/1000
1500/1500 - 0s - loss: 0.4081 - val_loss: 0.5540
Epoch 124/1000
Restoring model weights from the end of the best epoch.
1500/1500 - 0s - loss: 0.4353 - val_loss: 0.5538
Epoch 00124: early stopping
Apache-2.0
t81_558_class_04_3_regression.ipynb
akramsystems/t81_558_deep_learning
### Mean Square Error

The mean square error (MSE) is the mean of the squared differences between the prediction ($\hat{y}$) and the expected ($y$). MSE values are not of a particular unit. If an MSE value has decreased for a model, that is good. However, beyond this, there is not much more you can determine. Low MSE values are desired.

$ \mbox{MSE} = \frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2 $
from sklearn import metrics # Predict pred = model.predict(x_test) # Measure MSE error. score = metrics.mean_squared_error(pred,y_test) print("Final score (MSE): {}".format(score))
Final score (MSE): 0.5463447829677607
Apache-2.0
t81_558_class_04_3_regression.ipynb
akramsystems/t81_558_deep_learning
### Root Mean Square Error

The root mean square error (RMSE) is essentially the square root of the MSE. Because of this, the RMSE error is in the same units as the training data outcome. Low RMSE values are desired.

$ \mbox{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^n \left(\hat{y}_i - y_i\right)^2} $
import numpy as np

# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred, y_test))
print("Final score (RMSE): {}".format(score))
Final score (RMSE): 0.7391513938076291
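As a sanity check, RMSE can also be computed straight from the formula above; applied to the notebook's `pred`/`y_test` arrays this reproduces the sklearn-based value. The small arrays below are illustrative stand-ins, not the notebook's actual data:

```python
import numpy as np

# Illustrative stand-ins for pred and y_test (not the notebook's actual data)
pred = np.array([39.2, 41.0, 27.5, 50.1])
y_true = np.array([40.0, 41.5, 28.0, 49.0])

# RMSE from its definition: square root of the mean squared difference
mse = np.mean((pred - y_true) ** 2)
rmse = np.sqrt(mse)
print("MSE:", mse, "RMSE:", rmse)
```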
Apache-2.0
t81_558_class_04_3_regression.ipynb
akramsystems/t81_558_deep_learning
Lift Chart
To generate a lift chart, perform the following activities:
* Sort the data by the expected output and plot it (the "expected" line).
* For every point on the x-axis, plot the predicted value for that same data point (the "prediction" line).
* The x-axis is just 0 to 100% of the dataset. The expected line always starts low and ends high.
* The y-axis is ranged according to the values predicted.
Reading a lift chart:
* The expected and prediction lines should be close. Notice where one is above the other.
* The chart below is most accurate at lower ages.
# Regression chart.
def chart_regression(pred, y, sort=True):
    t = pd.DataFrame({'pred': pred, 'y': y.flatten()})
    if sort:
        t.sort_values(by=['y'], inplace=True)
    plt.plot(t['y'].tolist(), label='expected')
    plt.plot(t['pred'].tolist(), label='prediction')
    plt.ylabel('output')
    plt.legend()
    plt.show()

# Plot the chart
chart_regression(pred.flatten(), y_test)
_____no_output_____
Apache-2.0
t81_558_class_04_3_regression.ipynb
akramsystems/t81_558_deep_learning
Test zplot
zplot()
zplot(area=0.80, two_tailed=False)
zplot(area=0.80, two_tailed=False, align_right=True)
_____no_output_____
MIT
notebooks/test_plot.ipynb
rajvpatil5/ab-framework
Test abplot
abplot(n=4000, bcr=0.11, d_hat=0.03, show_alpha=True)
_____no_output_____
MIT
notebooks/test_plot.ipynb
rajvpatil5/ab-framework
About this Notebook
In this notebook, we provide a tensor factorization implementation based on iterative Alternating Least Squares (ALS), which is a good starting point for understanding tensor factorization.
import numpy as np
from numpy.linalg import inv as inv
_____no_output_____
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
Part 1: Matrix Computation Concepts

1) Kronecker product

- **Definition**: Given two matrices $A\in\mathbb{R}^{m_1\times n_1}$ and $B\in\mathbb{R}^{m_2\times n_2}$, the **Kronecker product** between these two matrices is defined as$$A\otimes B=\left[ \begin{array}{cccc} a_{11}B & a_{12}B & \cdots & a_{1n_1}B \\ a_{21}B & a_{22}B & \cdots & a_{2n_1}B \\ \vdots & \vdots & \ddots & \vdots \\ a_{m_11}B & a_{m_12}B & \cdots & a_{m_1n_1}B \\ \end{array} \right]$$where the symbol $\otimes$ denotes the Kronecker product, and the size of the resulting $A\otimes B$ is $(m_1m_2)\times (n_1n_2)$ (i.e., $m_1m_2$ rows and $n_1n_2$ columns).
- **Example**: If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]$ and $B=\left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10 \\ \end{array} \right]$, then we have$$A\otimes B=\left[ \begin{array}{cc} 1\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 2\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ 3\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] & 4\times \left[ \begin{array}{ccc} 5 & 6 & 7\\ 8 & 9 & 10\\ \end{array} \right] \\ \end{array} \right]$$$$=\left[ \begin{array}{cccccc} 5 & 6 & 7 & 10 & 12 & 14 \\ 8 & 9 & 10 & 16 & 18 & 20 \\ 15 & 18 & 21 & 20 & 24 & 28 \\ 24 & 27 & 30 & 32 & 36 & 40 \\ \end{array} \right]\in\mathbb{R}^{4\times 6}.$$

2) Khatri-Rao product (`kr_prod`)

- **Definition**: Given two matrices $A=\left( \boldsymbol{a}_1,\boldsymbol{a}_2,...,\boldsymbol{a}_r \right)\in\mathbb{R}^{m\times r}$ and $B=\left( \boldsymbol{b}_1,\boldsymbol{b}_2,...,\boldsymbol{b}_r \right)\in\mathbb{R}^{n\times r}$ with the same number of columns, the **Khatri-Rao product** (or **column-wise Kronecker product**) between $A$ and $B$ is given as follows,$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2,...,\boldsymbol{a}_r\otimes \boldsymbol{b}_r \right)\in\mathbb{R}^{(mn)\times r},$$where the symbol $\odot$ denotes the Khatri-Rao product, and $\otimes$ denotes the Kronecker product.
- **Example**: If $A=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]=\left( \boldsymbol{a}_1,\boldsymbol{a}_2 \right) $ and $B=\left[ \begin{array}{cc} 5 & 6 \\ 7 & 8 \\ 9 & 10 \\ \end{array} \right]=\left( \boldsymbol{b}_1,\boldsymbol{b}_2 \right) $, then we have$$A\odot B=\left( \boldsymbol{a}_1\otimes \boldsymbol{b}_1,\boldsymbol{a}_2\otimes \boldsymbol{b}_2 \right) $$$$=\left[ \begin{array}{cc} \left[ \begin{array}{c} 1 \\ 3 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 5 \\ 7 \\ 9 \\ \end{array} \right] & \left[ \begin{array}{c} 2 \\ 4 \\ \end{array} \right]\otimes \left[ \begin{array}{c} 6 \\ 8 \\ 10 \\ \end{array} \right] \\ \end{array} \right]$$$$=\left[ \begin{array}{cc} 5 & 12 \\ 7 & 16 \\ 9 & 20 \\ 15 & 24 \\ 21 & 32 \\ 27 & 40 \\ \end{array} \right]\in\mathbb{R}^{6\times 2}.$$
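NumPy also implements the Kronecker product directly as `np.kron`; a quick sanity check of the worked $4\times 6$ example above:

```python
import numpy as np

# Verify the 4x6 Kronecker product computed by hand above.
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6, 7], [8, 9, 10]])
C = np.kron(A, B)
print(C.shape)  # (4, 6)
print(C)
```

The printed matrix matches the hand-computed result row for row.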
def kr_prod(a, b):
    return np.einsum('ir, jr -> ijr', a, b).reshape(a.shape[0] * b.shape[0], -1)

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8], [9, 10]])
print(kr_prod(A, B))
[[ 5 12] [ 7 16] [ 9 20] [15 24] [21 32] [27 40]]
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
3) CP decomposition

CP Combination (`cp_combine`)

- **Definition**: The CP decomposition factorizes a tensor into a sum of outer products of vectors. For example, for a third-order tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, the CP decomposition can be written as$$\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s},$$or element-wise,$$\hat{y}_{ijt}=\sum_{s=1}^{r}u_{is}v_{js}x_{ts},\forall (i,j,t),$$where vectors $\boldsymbol{u}_{s}\in\mathbb{R}^{m},\boldsymbol{v}_{s}\in\mathbb{R}^{n},\boldsymbol{x}_{s}\in\mathbb{R}^{f}$ are columns of the factor matrices $U\in\mathbb{R}^{m\times r},V\in\mathbb{R}^{n\times r},X\in\mathbb{R}^{f\times r}$, respectively. The symbol $\circ$ denotes the vector outer product.
- **Example**: Given matrices $U=\left[ \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right]\in\mathbb{R}^{2\times 2}$, $V=\left[ \begin{array}{cc} 1 & 3 \\ 2 & 4 \\ 5 & 6 \\ \end{array} \right]\in\mathbb{R}^{3\times 2}$ and $X=\left[ \begin{array}{cc} 1 & 5 \\ 2 & 6 \\ 3 & 7 \\ 4 & 8 \\ \end{array} \right]\in\mathbb{R}^{4\times 2}$, and $\hat{\mathcal{Y}}=\sum_{s=1}^{r}\boldsymbol{u}_{s}\circ\boldsymbol{v}_{s}\circ\boldsymbol{x}_{s}$, then we have$$\hat{Y}_1=\hat{\mathcal{Y}}(:,:,1)=\left[ \begin{array}{ccc} 31 & 42 & 65 \\ 63 & 86 & 135 \\ \end{array} \right],$$$$\hat{Y}_2=\hat{\mathcal{Y}}(:,:,2)=\left[ \begin{array}{ccc} 38 & 52 & 82 \\ 78 & 108 & 174 \\ \end{array} \right],$$$$\hat{Y}_3=\hat{\mathcal{Y}}(:,:,3)=\left[ \begin{array}{ccc} 45 & 62 & 99 \\ 93 & 130 & 213 \\ \end{array} \right],$$$$\hat{Y}_4=\hat{\mathcal{Y}}(:,:,4)=\left[ \begin{array}{ccc} 52 & 72 & 116 \\ 108 & 152 & 252 \\ \end{array} \right].$$
def cp_combine(U, V, X):
    return np.einsum('is, js, ts -> ijt', U, V, X)

U = np.array([[1, 2], [3, 4]])
V = np.array([[1, 3], [2, 4], [5, 6]])
X = np.array([[1, 5], [2, 6], [3, 7], [4, 8]])
print(cp_combine(U, V, X))
print()
print('tensor size:')
print(cp_combine(U, V, X).shape)
[[[ 31 38 45 52] [ 42 52 62 72] [ 65 82 99 116]] [[ 63 78 93 108] [ 86 108 130 152] [135 174 213 252]]] tensor size: (2, 3, 4)
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
4) Tensor Unfolding (`ten2mat`)

Using numpy `reshape` to perform a 3rd-order tensor unfold operation. [[**link**](https://stackoverflow.com/questions/49970141/using-numpy-reshape-to-perform-3rd-rank-tensor-unfold-operation)]
def ten2mat(tensor, mode):
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order = 'F')

X = np.array([[[1, 2, 3, 4], [3, 4, 5, 6]],
              [[5, 6, 7, 8], [7, 8, 9, 10]],
              [[9, 10, 11, 12], [11, 12, 13, 14]]])
print('tensor size:')
print(X.shape)
print('original tensor:')
print(X)
print()
print('(1) mode-1 tensor unfolding:')
print(ten2mat(X, 0))
print()
print('(2) mode-2 tensor unfolding:')
print(ten2mat(X, 1))
print()
print('(3) mode-3 tensor unfolding:')
print(ten2mat(X, 2))
tensor size: (3, 2, 4) original tensor: [[[ 1 2 3 4] [ 3 4 5 6]] [[ 5 6 7 8] [ 7 8 9 10]] [[ 9 10 11 12] [11 12 13 14]]] (1) mode-1 tensor unfolding: [[ 1 3 2 4 3 5 4 6] [ 5 7 6 8 7 9 8 10] [ 9 11 10 12 11 13 12 14]] (2) mode-2 tensor unfolding: [[ 1 5 9 2 6 10 3 7 11 4 8 12] [ 3 7 11 4 8 12 5 9 13 6 10 14]] (3) mode-3 tensor unfolding: [[ 1 5 9 3 7 11] [ 2 6 10 4 8 12] [ 3 7 11 5 9 13] [ 4 8 12 6 10 14]]
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
Part 2: Tensor CP Factorization using ALS (TF-ALS)

Regarding CP factorization as a machine learning problem, we could perform the learning task by minimizing a loss function over the factor matrices, that is,$$\min _{U, V, X} \sum_{(i, j, t) \in \Omega}\left(y_{i j t}-\sum_{r=1}^{R}u_{ir}v_{jr}x_{tr}\right)^{2}.$$Within this optimization problem, the multiplication among the three factor matrices (which act as parameters) makes the problem difficult to solve directly. Alternatively, we apply the ALS algorithm for CP factorization.

In particular, the optimization problem for each row $\boldsymbol{u}_{i}\in\mathbb{R}^{R},\forall i\in\left\{1,2,...,M\right\}$ of the factor matrix $U\in\mathbb{R}^{M\times R}$ is given by$$\min _{\boldsymbol{u}_{i}} \sum_{j,t:(i, j, t) \in \Omega}\left[y_{i j t}-\boldsymbol{u}_{i}^\top\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right]\left[y_{i j t}-\boldsymbol{u}_{i}^\top\left(\boldsymbol{x}_{t}\odot\boldsymbol{v}_{j}\right)\right]^\top.$$The least-squares solution to this problem is$$\boldsymbol{u}_{i} \Leftarrow\left(\sum_{j,t:(i, j, t) \in \Omega} \left(\boldsymbol{x}_{t} \odot \boldsymbol{v}_{j}\right)\left(\boldsymbol{x}_{t} \odot \boldsymbol{v}_{j}\right)^{\top}\right)^{-1}\left(\sum_{j,t:(i, j, t) \in \Omega} y_{i j t} \left(\boldsymbol{x}_{t} \odot \boldsymbol{v}_{j}\right)\right), \forall i \in\{1,2, \ldots, M\}.$$The alternating least squares updates for $V\in\mathbb{R}^{N\times R}$ and $X\in\mathbb{R}^{T\times R}$ are$$\boldsymbol{v}_{j}\Leftarrow\left(\sum_{i,t:(i,j,t)\in\Omega}\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)^\top\right)^{-1}\left(\sum_{i,t:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{x}_{t}\odot\boldsymbol{u}_{i}\right)\right),\forall j\in\left\{1,2,...,N\right\},$$$$\boldsymbol{x}_{t}\Leftarrow\left(\sum_{i,j:(i,j,t)\in\Omega}\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)^\top\right)^{-1}\left(\sum_{i,j:(i,j,t)\in\Omega}y_{ijt}\left(\boldsymbol{v}_{j}\odot\boldsymbol{u}_{i}\right)\right),\forall t\in\left\{1,2,...,T\right\}.$$
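The row-wise least-squares update above can be sanity-checked on a tiny fully observed tensor (the sizes and names here are illustrative): with $V$ and $X$ fixed at the true factors, solving the normal equations recovers each $\boldsymbol{u}_i$ exactly.

```python
import numpy as np

# Toy check of the row-wise ALS update for U on a small fully observed tensor.
rng = np.random.default_rng(0)
M, N, T, R = 4, 3, 5, 2
U_true = rng.random((M, R))
V = rng.random((N, R))
X = rng.random((T, R))
Y = np.einsum('ir,jr,tr->ijt', U_true, V, X)  # fully observed CP tensor

U_est = np.zeros((M, R))
for i in range(M):
    A = np.zeros((R, R))
    b = np.zeros(R)
    for j in range(N):
        for t in range(T):
            w = X[t] * V[j]          # the Khatri-Rao row x_t ⊙ v_j
            A += np.outer(w, w)      # left-hand sum of the update
            b += Y[i, j, t] * w      # right-hand sum of the update
    U_est[i] = np.linalg.solve(A, b)  # least-squares update for u_i

print(np.allclose(U_est, U_true))  # True: the update recovers U exactly
```

The full `CP_ALS` implementation below performs the same computation in vectorized form, with a binary mask restricting the sums to observed entries.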
def CP_ALS(sparse_tensor, rank, maxiter):
    dim1, dim2, dim3 = sparse_tensor.shape
    dim = np.array([dim1, dim2, dim3])
    U = 0.1 * np.random.rand(dim1, rank)
    V = 0.1 * np.random.rand(dim2, rank)
    X = 0.1 * np.random.rand(dim3, rank)
    pos = np.where(sparse_tensor != 0)
    binary_tensor = np.zeros((dim1, dim2, dim3))
    binary_tensor[pos] = 1
    tensor_hat = np.zeros((dim1, dim2, dim3))
    for iters in range(maxiter):
        for order in range(dim.shape[0]):
            if order == 0:
                var1 = kr_prod(X, V).T
            elif order == 1:
                var1 = kr_prod(X, U).T
            else:
                var1 = kr_prod(V, U).T
            var2 = kr_prod(var1, var1)
            var3 = np.matmul(var2, ten2mat(binary_tensor, order).T).reshape([rank, rank, dim[order]])
            var4 = np.matmul(var1, ten2mat(sparse_tensor, order).T)
            for i in range(dim[order]):
                var_Lambda = var3[:, :, i]
                inv_var_Lambda = inv((var_Lambda + var_Lambda.T) / 2 + 10e-12 * np.eye(rank))
                vec = np.matmul(inv_var_Lambda, var4[:, i])
                if order == 0:
                    U[i, :] = vec.copy()
                elif order == 1:
                    V[i, :] = vec.copy()
                else:
                    X[i, :] = vec.copy()
        tensor_hat = cp_combine(U, V, X)
        mape = np.sum(np.abs(sparse_tensor[pos] - tensor_hat[pos]) / sparse_tensor[pos]) / sparse_tensor[pos].shape[0]
        rmse = np.sqrt(np.sum((sparse_tensor[pos] - tensor_hat[pos]) ** 2) / sparse_tensor[pos].shape[0])
        if (iters + 1) % 100 == 0:
            print('Iter: {}'.format(iters + 1))
            print('Training MAPE: {:.6}'.format(mape))
            print('Training RMSE: {:.6}'.format(rmse))
            print()
    return tensor_hat, U, V, X
_____no_output_____
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
Part 3: Data Organization

1) Matrix Structure

We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{f},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We express the spatio-temporal dataset as a matrix $Y\in\mathbb{R}^{m\times f}$ with $m$ rows (e.g., locations) and $f$ columns (e.g., discrete time intervals),$$Y=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{m1} & y_{m2} & \cdots & y_{mf} \\ \end{array} \right]\in\mathbb{R}^{m\times f}.$$

2) Tensor Structure

We consider a dataset of $m$ discrete time series $\boldsymbol{y}_{i}\in\mathbb{R}^{nf},i\in\left\{1,2,...,m\right\}$. The time series may have missing elements. We partition each time series into intervals of predefined length $f$. We express each partitioned time series as a matrix $Y_{i}$ with $n$ rows (e.g., days) and $f$ columns (e.g., discrete time intervals per day),$$Y_{i}=\left[ \begin{array}{cccc} y_{11} & y_{12} & \cdots & y_{1f} \\ y_{21} & y_{22} & \cdots & y_{2f} \\ \vdots & \vdots & \ddots & \vdots \\ y_{n1} & y_{n2} & \cdots & y_{nf} \\ \end{array} \right]\in\mathbb{R}^{n\times f},i=1,2,...,m,$$therefore, the resulting structure is a tensor $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$.

**How can we transform a data set into something we can use for time series imputation?**

Part 4: Experiments on Guangzhou Data Set
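Before loading the real data, here is a toy sketch of the matrix-to-tensor organization described above (sizes are illustrative): a location-by-time matrix with $nf$ columns reshapes into an $m\times n\times f$ tensor, and missing entries are encoded as zeros, matching the sparse-tensor convention used throughout.

```python
import numpy as np

# Illustrative sizes: m = 2 locations, n = 3 days, f = 4 intervals per day.
m, n, f = 2, 3, 4
Y = np.arange(1.0, m * n * f + 1).reshape(m, n * f)  # matrix form, shape (2, 12)

# Tensor form: each row becomes an (n, f) matrix of days x intervals.
tensor = Y.reshape(m, n, f)
print(tensor.shape)  # (2, 3, 4)

# Mark some entries as missing (zeros), as the sparse tensors below do.
mask = np.ones_like(tensor)
mask[0, 1, 2] = 0                     # drop one observation
sparse_tensor = tensor * mask
print(sparse_tensor[0, 1])  # [5. 6. 0. 8.]
```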
import scipy.io

tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Guangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

missing_rate = 0.2

# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================

# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
#     for i2 in range(dense_tensor.shape[1]):
#         binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# =============================================================================

sparse_tensor = np.multiply(dense_tensor, binary_tensor)
_____no_output_____
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
**Question**: Given only the partially observed data $\mathcal{Y}\in\mathbb{R}^{m\times n\times f}$, how can we impute the unknown missing values?

The main factors influencing such an imputation model are:
- `rank`.
- `maxiter`.
import time

start = time.time()
rank = 80
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos]) / dense_tensor[pos]) / dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2) / dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds' % (end - start))
Iter: 100 Training MAPE: 0.0809251 Training RMSE: 3.47736 Iter: 200 Training MAPE: 0.0805399 Training RMSE: 3.46261 Iter: 300 Training MAPE: 0.0803688 Training RMSE: 3.45631 Iter: 400 Training MAPE: 0.0802661 Training RMSE: 3.45266 Iter: 500 Training MAPE: 0.0801768 Training RMSE: 3.44986 Iter: 600 Training MAPE: 0.0800948 Training RMSE: 3.44755 Iter: 700 Training MAPE: 0.0800266 Training RMSE: 3.4456 Iter: 800 Training MAPE: 0.0799675 Training RMSE: 3.44365 Iter: 900 Training MAPE: 0.07992 Training RMSE: 3.4419 Iter: 1000 Training MAPE: 0.079885 Training RMSE: 3.44058 Final Imputation MAPE: 0.0833307 Final Imputation RMSE: 3.59283 Running time: 2908 seconds
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
**Experiment results** of missing data imputation using TF-ALS:

| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**20%, RM**| 80 | 1000 | **0.0833** | **3.5928**|
|**40%, RM**| 80 | 1000 | **0.0837** | **3.6190**|
|**20%, NM**| 10 | 1000 | **0.1027** | **4.2960**|
|**40%, NM**| 10 | 1000 | **0.1028** | **4.3274**|

Part 5: Experiments on Birmingham Data Set
import scipy.io

tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Birmingham-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Birmingham-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

missing_rate = 0.3

# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================

# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
#     for i2 in range(dense_tensor.shape[1]):
#         binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# =============================================================================

sparse_tensor = np.multiply(dense_tensor, binary_tensor)

import time

start = time.time()
rank = 30
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos]) / dense_tensor[pos]) / dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2) / dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds' % (end - start))
Iter: 100 Training MAPE: 0.0509401 Training RMSE: 15.3163 Iter: 200 Training MAPE: 0.0498774 Training RMSE: 14.9599 Iter: 300 Training MAPE: 0.0490062 Training RMSE: 14.768 Iter: 400 Training MAPE: 0.0481006 Training RMSE: 14.6343 Iter: 500 Training MAPE: 0.0474233 Training RMSE: 14.5365 Iter: 600 Training MAPE: 0.0470442 Training RMSE: 14.4642 Iter: 700 Training MAPE: 0.0469617 Training RMSE: 14.4082 Iter: 800 Training MAPE: 0.0470459 Training RMSE: 14.3623 Iter: 900 Training MAPE: 0.0472333 Training RMSE: 14.3235 Iter: 1000 Training MAPE: 0.047408 Training RMSE: 14.2898 Final Imputation MAPE: 0.0583358 Final Imputation RMSE: 18.9148 Running time: 38 seconds
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
**Experiment results** of missing data imputation using TF-ALS:

| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|-----------:|
|**10%, RM**| 30 | 1000 | **0.0615** | **18.5005**|
|**30%, RM**| 30 | 1000 | **0.0583** | **18.9148**|
|**10%, NM**| 10 | 1000 | **0.1447** | **41.6710**|
|**30%, NM**| 10 | 1000 | **0.1765** | **63.8465**|

Part 6: Experiments on Hangzhou Data Set
import scipy.io

tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/tensor.mat')
dense_tensor = tensor['tensor']
random_matrix = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_matrix.mat')
random_matrix = random_matrix['random_matrix']
random_tensor = scipy.io.loadmat('../datasets/Hangzhou-data-set/random_tensor.mat')
random_tensor = random_tensor['random_tensor']

missing_rate = 0.4

# =============================================================================
### Random missing (RM) scenario:
binary_tensor = np.round(random_tensor + 0.5 - missing_rate)
# =============================================================================

# =============================================================================
### Non-random missing (NM) scenario:
# binary_tensor = np.zeros(dense_tensor.shape)
# for i1 in range(dense_tensor.shape[0]):
#     for i2 in range(dense_tensor.shape[1]):
#         binary_tensor[i1, i2, :] = np.round(random_matrix[i1, i2] + 0.5 - missing_rate)
# =============================================================================

sparse_tensor = np.multiply(dense_tensor, binary_tensor)

import time

start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos]) / dense_tensor[pos]) / dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2) / dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds' % (end - start))
Iter: 100 Training MAPE: 0.176548 Training RMSE: 17.0263 Iter: 200 Training MAPE: 0.174888 Training RMSE: 16.8609 Iter: 300 Training MAPE: 0.175056 Training RMSE: 16.7835 Iter: 400 Training MAPE: 0.174988 Training RMSE: 16.7323 Iter: 500 Training MAPE: 0.175013 Training RMSE: 16.6942 Iter: 600 Training MAPE: 0.174928 Training RMSE: 16.6654 Iter: 700 Training MAPE: 0.174722 Training RMSE: 16.6441 Iter: 800 Training MAPE: 0.174565 Training RMSE: 16.6284 Iter: 900 Training MAPE: 0.174454 Training RMSE: 16.6159 Iter: 1000 Training MAPE: 0.174409 Training RMSE: 16.6054 Final Imputation MAPE: 0.209776 Final Imputation RMSE: 100.315 Running time: 279 seconds
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
**Experiment results** of missing data imputation using TF-ALS:

| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**20%, RM**| 50 | 1000 | **0.1991** |**111.303**|
|**40%, RM**| 50 | 1000 | **0.2098** |**100.315**|
|**20%, NM**| 5 | 1000 | **0.2837** |**42.6136**|
|**40%, NM**| 5 | 1000 | **0.2811** |**38.4201**|

Part 7: Experiments on New York Data Set
import scipy.io

tensor = scipy.io.loadmat('../datasets/NYC-data-set/tensor.mat')
dense_tensor = tensor['tensor']
rm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/rm_tensor.mat')
rm_tensor = rm_tensor['rm_tensor']
nm_tensor = scipy.io.loadmat('../datasets/NYC-data-set/nm_tensor.mat')
nm_tensor = nm_tensor['nm_tensor']

missing_rate = 0.1

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
# binary_tensor = np.round(rm_tensor + 0.5 - missing_rate)
# =============================================================================

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros(dense_tensor.shape)
for i1 in range(dense_tensor.shape[0]):
    for i2 in range(dense_tensor.shape[1]):
        for i3 in range(61):
            binary_tensor[i1, i2, i3 * 24 : (i3 + 1) * 24] = np.round(nm_tensor[i1, i2, i3] + 0.5 - missing_rate)
# =============================================================================

sparse_tensor = np.multiply(dense_tensor, binary_tensor)

import time

start = time.time()
rank = 30
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos]) / dense_tensor[pos]) / dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2) / dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds' % (end - start))
Iter: 100 Training MAPE: 0.511739 Training RMSE: 4.07981 Iter: 200 Training MAPE: 0.501094 Training RMSE: 4.0612 Iter: 300 Training MAPE: 0.504264 Training RMSE: 4.05578 Iter: 400 Training MAPE: 0.507211 Training RMSE: 4.05119 Iter: 500 Training MAPE: 0.509956 Training RMSE: 4.04623 Iter: 600 Training MAPE: 0.51046 Training RMSE: 4.04129 Iter: 700 Training MAPE: 0.509797 Training RMSE: 4.03294 Iter: 800 Training MAPE: 0.509531 Training RMSE: 4.02976 Iter: 900 Training MAPE: 0.509265 Training RMSE: 4.02861 Iter: 1000 Training MAPE: 0.508873 Training RMSE: 4.02796 Final Imputation MAPE: 0.540363 Final Imputation RMSE: 5.66633 Running time: 742 seconds
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
**Experiment results** of missing data imputation using TF-ALS:

| scenario |`rank`| `maxiter`| mape | rmse |
|:----------|-----:|---------:|-----------:|----------:|
|**10%, RM**| 30 | 1000 | **0.5262** | **6.2444**|
|**30%, RM**| 30 | 1000 | **0.5488** | **6.8968**|
|**10%, NM**| 30 | 1000 | **0.5170** | **5.9863**|
|**30%, NM**| 30 | 100 | **-** | **-**|

Part 8: Experiments on Seattle Data Set
import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
RM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288])

missing_rate = 0.2

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(RM_tensor + 0.5 - missing_rate)
# =============================================================================

sparse_tensor = np.multiply(dense_tensor, binary_tensor)

import time

start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos]) / dense_tensor[pos]) / dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2) / dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds' % (end - start))

import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
RM_mat = pd.read_csv('../datasets/Seattle-data-set/RM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
RM_mat = RM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])
RM_tensor = RM_mat.reshape([RM_mat.shape[0], 28, 288])

missing_rate = 0.4

# =============================================================================
### Random missing (RM) scenario
### Set the RM scenario by:
binary_tensor = np.round(RM_tensor + 0.5 - missing_rate)
# =============================================================================

sparse_tensor = np.multiply(dense_tensor, binary_tensor)

import time

start = time.time()
rank = 50
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos]) / dense_tensor[pos]) / dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2) / dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds' % (end - start))

import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])

missing_rate = 0.2

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
    for i2 in range(binary_tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================

sparse_tensor = np.multiply(dense_tensor, binary_tensor)

import time

start = time.time()
rank = 10
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos]) / dense_tensor[pos]) / dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2) / dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds' % (end - start))

import pandas as pd

dense_mat = pd.read_csv('../datasets/Seattle-data-set/mat.csv', index_col = 0)
NM_mat = pd.read_csv('../datasets/Seattle-data-set/NM_mat.csv', index_col = 0)
dense_mat = dense_mat.values
NM_mat = NM_mat.values
dense_tensor = dense_mat.reshape([dense_mat.shape[0], 28, 288])

missing_rate = 0.4

# =============================================================================
### Non-random missing (NM) scenario
### Set the NM scenario by:
binary_tensor = np.zeros((dense_mat.shape[0], 28, 288))
for i1 in range(binary_tensor.shape[0]):
    for i2 in range(binary_tensor.shape[1]):
        binary_tensor[i1, i2, :] = np.round(NM_mat[i1, i2] + 0.5 - missing_rate)
# =============================================================================

sparse_tensor = np.multiply(dense_tensor, binary_tensor)

import time

start = time.time()
rank = 10
maxiter = 1000
tensor_hat, U, V, X = CP_ALS(sparse_tensor, rank, maxiter)
pos = np.where((dense_tensor != 0) & (sparse_tensor == 0))
final_mape = np.sum(np.abs(dense_tensor[pos] - tensor_hat[pos]) / dense_tensor[pos]) / dense_tensor[pos].shape[0]
final_rmse = np.sqrt(np.sum((dense_tensor[pos] - tensor_hat[pos]) ** 2) / dense_tensor[pos].shape[0])
print('Final Imputation MAPE: {:.6}'.format(final_mape))
print('Final Imputation RMSE: {:.6}'.format(final_rmse))
print()
end = time.time()
print('Running time: %d seconds' % (end - start))
Iter: 100 Training MAPE: 0.0996282 Training RMSE: 5.55963 Iter: 200 Training MAPE: 0.0992568 Training RMSE: 5.53825 Iter: 300 Training MAPE: 0.0986723 Training RMSE: 5.51806 Iter: 400 Training MAPE: 0.0967838 Training RMSE: 5.46447 Iter: 500 Training MAPE: 0.0962312 Training RMSE: 5.44762 Iter: 600 Training MAPE: 0.0961017 Training RMSE: 5.44322 Iter: 700 Training MAPE: 0.0959531 Training RMSE: 5.43927 Iter: 800 Training MAPE: 0.0958815 Training RMSE: 5.43619 Iter: 900 Training MAPE: 0.0958781 Training RMSE: 5.4344 Iter: 1000 Training MAPE: 0.0958921 Training RMSE: 5.43266 Final Imputation MAPE: 0.10038 Final Imputation RMSE: 5.7034 Running time: 304 seconds
MIT
experiments/Imputation-TF-ALS.ipynb
shawnwang-tech/transdim
Let's look at:
* Number of labels per image (histogram)
* Quality score per image for images with multiple labels (sigmoid?)
import csv from itertools import islice from collections import defaultdict import pandas as pd import matplotlib.pyplot as plt import torch import torchvision import numpy as np CSV_PATH = 'wgangp_data.csv' realness = {} # real_votes = defaultdict(int) # fake_votes = defaultdict(int) total_votes = defaultdict(int) correct_votes = defaultdict(int) with open(CSV_PATH) as f: dictreader = csv.DictReader(f) for line in dictreader: img_name = line['img'] assert(line['realness'] in ('True', 'False')) assert(line['correctness'] in ('True', 'False')) realness[img_name] = line['realness'] == 'True' if line['correctness'] == 'True': correct_votes[img_name] += 1 total_votes[img_name] += 1 pdx = pd.read_csv(CSV_PATH) pdx pdx[pdx.groupby('img').count() > 50] pdx #df.img # print(df.columns) # print(df['img']) # How much of the time do people guess "fake"? Slightly more than half! pdx[pdx.correctness != pdx.realness].count()/pdx.count() # How much of the time do people guess right? 94.4% pdx[pdx.correctness].count()/pdx.count() #90.3% of the time, real images are correctly labeled as real pdx[pdx.realness][pdx.correctness].count()/pdx[pdx.realness].count() #98.5% of the time, fake images are correctly labeled as fake pdx[~pdx.realness][pdx.correctness].count()/pdx[~pdx.realness].count() len(total_votes.values()) img_dict = {img: [realness[img], correct_votes[img], total_votes[img], correct_votes[img]/total_votes[img]] for img in realness } # print(img_dict.keys()) #img_dict['celeba500/005077_crop.jpg'] plt.hist([v[3] for k,v in img_dict.items() if 'celeb' in k]) def getVotesDict(img_dict): votes_dict = defaultdict(int) for img in total_votes: votes_dict[img_dict[img][2]] += 1 return votes_dict votes_dict = getVotesDict(img_dict) for i in sorted(votes_dict.keys()): print(i, votes_dict[i]) selected_img_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] > 10} less_than_50_dict = {img:value for img, value in img_dict.items() if img_dict[img][2] < 10} 
imgs_over_50 = list(selected_img_dict.keys())
# print(len(selected_img_dict))
# print(imgs_over_50)
pdx_50 = pdx[pdx.img.apply(lambda x: x in imgs_over_50)]
len(pdx_50)

pdx_under_50 = pdx[pdx.img.apply(lambda x: x not in imgs_over_50)]
len(pdx_under_50)
len(pdx_under_50[pdx_under_50.img.apply(lambda x: 'wgan' not in x)])

correctness = sorted([value[3] for key, value in selected_img_dict.items()])
print(correctness)
plt.hist(correctness)
plt.show()

correctness = sorted([value[3] for key, value in less_than_50_dict.items()])
# print(correctness)
plt.hist(correctness)
plt.show()

ct = []
# selected_img = [img in total_votes.keys() if total_votes[img] > 1 ]

discriminator = torch.load('discriminator.pt', map_location='cpu')
# torch.load_state_dict('discriminator.pt')
discriminator(torch.zeros(64, 64, 3))
_____no_output_____
MIT
wgan_experiment/WGAN_experiment.ipynb
kolchinski/humanception-score
Naive Bayes Classifier

Predicting the positivity/negativity of movie reviews using the Naive Bayes algorithm

1. Import Dataset

Labels:
* 0 : Negative review
* 1 : Positive review
import pandas as pd
import warnings
warnings.filterwarnings('ignore')

reviews = pd.read_csv('ratings_train.txt', delimiter='\t')
reviews.head(10)

# divide between negative and positive reviews at least 30 characters in length
neg = reviews[(reviews.document.str.len() >= 30) & (reviews.label == 0)].sample(3000, random_state=43)
pos = reviews[(reviews.document.str.len() >= 30) & (reviews.label == 1)].sample(3000, random_state=43)
pos.head()

# NLP method
import re
import konlpy
from konlpy.tag import Twitter

okt = Twitter()

def parse(s):
    s = re.sub(r'[?$.!,-_\'\"(){}~]+', '', s)
    try:
        return okt.nouns(s)
    except:
        return []
# okt.morphs is another option

neg['parsed_doc'] = neg.document.apply(parse)
pos['parsed_doc'] = pos.document.apply(parse)
neg.head()
pos.head()

# create 5800 training data / 200 test data
neg_train = neg[:2900]
pos_train = pos[:2900]
neg_test = neg[2900:]
pos_test = pos[2900:]
_____no_output_____
MIT
algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb
jbaeckn/learning_projects
2. Create Corpus
neg_corpus = set(neg_train.parsed_doc.sum())
pos_corpus = set(pos_train.parsed_doc.sum())
corpus = list(neg_corpus.union(pos_corpus))
print('corpus length : ', len(corpus))
corpus[:10]
_____no_output_____
MIT
algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb
jbaeckn/learning_projects
3. Create Bag of Words
from collections import OrderedDict

neg_bow_vecs = []
for _, doc in neg.parsed_doc.items():
    bow_vecs = OrderedDict()
    for w in corpus:
        if w in doc:
            bow_vecs[w] = 1
        else:
            bow_vecs[w] = 0
    neg_bow_vecs.append(bow_vecs)

pos_bow_vecs = []
for _, doc in pos.parsed_doc.items():
    bow_vecs = OrderedDict()
    for w in corpus:
        if w in doc:
            bow_vecs[w] = 1
        else:
            bow_vecs[w] = 0
    pos_bow_vecs.append(bow_vecs)

# bag-of-words vector example
# this length is equal to the length of the corpus
neg_bow_vecs[0].values()
_____no_output_____
MIT
algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb
jbaeckn/learning_projects
4. Model Training

$n$ is the dimension of each document, in other words, the length of the corpus.

$$\large p(pos|doc) = \Large \frac{p(doc|pos) \cdot p(pos)}{p(doc)}$$

$$\large p(neg|doc) = \Large \frac{p(doc|neg) \cdot p(neg)}{p(doc)}$$

**Likelihood functions:**

$p(word_{i}|pos) = \large \frac{\text{the number of positive documents that contain the word}}{\text{the number of positive documents}}$

$p(word_{i}|neg) = \large \frac{\text{the number of negative documents that contain the word}}{\text{the number of negative documents}}$
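Before applying these likelihoods at scale, it helps to check the Laplace-smoothed estimate on tiny made-up counts (the numbers below are purely illustrative, not from the real corpus):

```python
# Toy numbers (hypothetical): 4 positive training documents, corpus of 6 words.
n_pos_docs = 4
corpus_size = 6

# Laplace-smoothed likelihood of a word that appears in 2 of the 4 positive documents
count = 2
p_word_given_pos = (count + 1) / (n_pos_docs + corpus_size)  # (2 + 1) / (4 + 6) = 0.3

# A word never seen in positive documents still gets non-zero probability
p_unseen_given_pos = (0 + 1) / (n_pos_docs + corpus_size)    # 0.1, never exactly zero

print(p_word_given_pos, p_unseen_given_pos)
```

The "+1" in the numerator and the corpus size in the denominator are what keep unseen words from zeroing out the whole product of likelihoods.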
import numpy as np

corpus[:5]
list(neg_train.parsed_doc.items())[0]

# count how many neg_train documents contain each corpus word: used for p(word|neg)
neg_words_likelihood_cnts = {}
for w in corpus:
    cnt = 0
    for _, doc in neg_train.parsed_doc.items():
        if w in doc:
            cnt += 1
    neg_words_likelihood_cnts[w] = cnt

# count how many pos_train documents contain each corpus word: used for p(word|pos)
pos_words_likelihood_cnts = {}
for w in corpus:
    cnt = 0
    for _, doc in pos_train.parsed_doc.items():
        if w in doc:
            cnt += 1
    pos_words_likelihood_cnts[w] = cnt

import operator
sorted(neg_words_likelihood_cnts.items(), key=operator.itemgetter(1), reverse=True)[:10]
sorted(pos_words_likelihood_cnts.items(), key=operator.itemgetter(1), reverse=True)[:10]
_____no_output_____
MIT
algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb
jbaeckn/learning_projects
5. Classifier

* We represent each document in terms of bag of words. If the size of the corpus is $n$, each document's bag-of-words vector is $n$-dimensional.
* When a word has never appeared in the training documents of a class, we use **Laplacian smoothing** to avoid zero probabilities.
test_data = pd.concat([neg_test, pos_test], axis=0)
test_data.head()

def predict(doc):
    pos_prior, neg_prior = 1/2, 1/2  # because we have equal numbers of pos and neg training documents

    # Posterior of pos
    pos_prob = np.log(1)
    for word in corpus:
        if word in doc:
            # the word is in the current document and has appeared in pos documents
            if word in pos_words_likelihood_cnts:
                pos_prob += np.log((pos_words_likelihood_cnts[word] + 1) / (len(pos_train) + len(corpus)))
            else:
                # the word is in the current document, but has never appeared in pos documents: Laplacian smoothing
                pos_prob += np.log(1 / (len(pos_train) + len(corpus)))
        else:
            # the word is not in the current document, but has appeared in pos documents;
            # we can use the probability that the word is absent from a pos document
            if word in pos_words_likelihood_cnts:
                pos_prob += \
                    np.log((len(pos_train) - pos_words_likelihood_cnts[word] + 1) / (len(pos_train) + len(corpus)))
            else:
                # the word is not in the current document, and has never appeared in pos documents: Laplacian smoothing
                pos_prob += np.log((len(pos_train) + 1) / (len(pos_train) + len(corpus)))
    pos_prob += np.log(pos_prior)

    # Posterior of neg
    neg_prob = np.log(1)  # start from log(1) = 0 as for pos (the original started from 1, which biased neg)
    for word in corpus:
        if word in doc:
            # the word is in the current document and has appeared in neg documents
            if word in neg_words_likelihood_cnts:
                neg_prob += np.log((neg_words_likelihood_cnts[word] + 1) / (len(neg_train) + len(corpus)))
            else:
                # the word is in the current document, but has never appeared in neg documents: Laplacian smoothing
                neg_prob += np.log(1 / (len(neg_train) + len(corpus)))
        else:
            # the word is not in the current document, but has appeared in neg documents;
            # we can use the probability that the word is absent from a neg document
            if word in neg_words_likelihood_cnts:
                neg_prob += \
                    np.log((len(neg_train) - neg_words_likelihood_cnts[word] + 1) / (len(neg_train) + len(corpus)))
            else:
                # the word is not in the current document, and has never appeared in neg documents: Laplacian smoothing
                neg_prob += np.log((len(neg_train) + 1) / (len(neg_train) + len(corpus)))
    neg_prob += np.log(neg_prior)

    if pos_prob >= neg_prob:
        return 1
    else:
        return 0

test_data['pred'] = test_data.parsed_doc.apply(predict)
test_data.head()
test_data.shape
sum(test_data.label ^ test_data.pred)
_____no_output_____
MIT
algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb
jbaeckn/learning_projects
There are a total of 200 test documents, and of these 200 only 46 predictions differed from the true labels
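The mismatch count above relies on XOR-ing the 0/1 label and prediction columns: a mismatched pair contributes 1, a matched pair 0. A minimal standalone illustration with made-up labels:

```python
import pandas as pd

# Toy labels and predictions (hypothetical, for illustration only)
toy = pd.DataFrame({'label': [1, 0, 1, 1], 'pred': [1, 1, 1, 0]})

errors = sum(toy.label ^ toy.pred)   # XOR is 1 exactly where label != pred
accuracy = 1 - errors / len(toy)
print(errors, accuracy)  # 2 mismatches out of 4 rows -> accuracy 0.5
```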
1 - sum(test_data.label ^ test_data.pred) / len(test_data)
_____no_output_____
MIT
algorithm_exercise/semantic_analysis/semantic_analysis_naive_bayes_algorithm.ipynb
jbaeckn/learning_projects
Auditing a dataframe

In this notebook, we shall demonstrate how to use `privacypanda` to _audit_ the privacy of your data. `privacypanda` provides a simple function which prints the names of any columns which break privacy. Currently, these are:

- Addresses
  - E.g. "10 Downing Street"; "221b Baker St"; "EC2R 8AH"
- Phonenumbers (UK mobile)
  - E.g. "+447123456789"
- Email addresses
  - Ending in ".com", ".co.uk", ".org", ".edu" (to be expanded soon)
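`privacypanda`'s internal detection logic is not shown here, but a regex-based check in the same spirit might look like the sketch below. The patterns are illustrative stand-ins, not the library's actual rules:

```python
import re

# Illustrative patterns only -- real detection needs far more care.
PATTERNS = {
    "address": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),   # rough UK postcode shape
    "phonenumber": re.compile(r"\+447\d{9}\b"),                        # UK mobile in +44 form
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.(?:com|co\.uk|org|edu)\b"),
}

def audit_column(values):
    """Return the sorted privacy categories detected in an iterable of strings."""
    found = set()
    for v in values:
        for name, pat in PATTERNS.items():
            if pat.search(str(v)):
                found.add(name)
    return sorted(found)

print(audit_column(["EC2R 8AH", "hello", "someone@example.com"]))  # ['address', 'email']
```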
%load_ext watermark %watermark -n -p pandas,privacypanda -g import pandas as pd import privacypanda as pp
_____no_output_____
Apache-2.0
examples/01_auditing_a_dataframe.ipynb
TTitcombe/PrivacyPanda
--- Firstly, we need data
data = pd.DataFrame(
    {
        "user ID": [
            1665,
            1,
            5287,
            42,
        ],
        "User email": [
            "xxxxxxxxxxxxx",
            "xxxxxxxx",
            "I'm not giving you that",
            "[email protected]",
        ],
        "User address": [
            "AB1 1AB",
            "",
            "XXX XXX",
            "EC2R 8AH",
        ],
        "Likes raclette": [
            1,
            0,
            1,
            1,
        ],
    }
)
_____no_output_____
Apache-2.0
examples/01_auditing_a_dataframe.ipynb
TTitcombe/PrivacyPanda
You will notice two things about this dataframe:

1. _Some_ of the data has already been anonymized, for example by replacing characters with "x"s. However, the person who collected this data has not been fastidious with its cleaning, as there is still some raw, potentially problematic private information. As the dataset grows, it becomes easier to miss entries with private information.
2. Not all columns expose privacy: "Likes raclette" is pretty benign information (but be careful, lots of benign information can be combined to form a unique fingerprint identifying an individual - let's not worry about this at the moment, though), and "user ID" is already an anonymized labelling of an individual.
report = pp.report_privacy(data)
print(report)
User address: ['address'] User email: ['email']
Apache-2.0
examples/01_auditing_a_dataframe.ipynb
TTitcombe/PrivacyPanda
read datafiles

- C-18 for language population
- C-13 for particular age-range population from a state
c18 = pd.read_excel('datasets/C-18.xlsx', skiprows=6, header=None, engine='openpyxl')
c13 = pd.read_excel('datasets/C-13.xls', skiprows=7, header=None)
_____no_output_____
MIT
Q8_asgn2.ipynb
sunil-dhaka/census-language-analysis
particular age groups are
- 5-9
- 10-14
- 15-19
- 20-24
- 25-29
- 30-49
- 50-69
- 70+
- Age not stated

obtain useful data from C-13 and C-18 for age-groups
- first get particular state names for identifying specific states
- get particular age-groups from C-18 file
- make list of particular age group row/col for a particular state
- now just simply iterate through each state to get relevant data and store it into a csv file
  - to get total pop of a particular age-range I have used the C-13 file
  - to get total pop that speaks more than 3 languages from a state in a particular age-range I have used the C-18 file
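The age-range selection step described above boils down to a boolean mask over single-year ages. A toy sketch (the column names here are hypothetical stand-ins for the positional indices used against the real C-13 table):

```python
import pandas as pd

# Toy stand-in for the C-13 table: one row per single-year age for one state.
c13_toy = pd.DataFrame({
    'age': [5, 6, 9, 10, 14, 15],
    'male_pop': [10, 20, 30, 40, 50, 60],
})

age_group_range = list(range(5, 10))  # the '5-9' group
mask = c13_toy['age'].isin(age_group_range)
group_total = c13_toy.loc[mask, 'male_pop'].sum()
print(group_total)  # 10 + 20 + 30 = 60
```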
# STATE_NAMES = [list(np.unique(c18.iloc[:, 2].values))]
STATE_NAMES = []
for state in c18.iloc[:, 2].values:
    if not (state in STATE_NAMES):
        STATE_NAMES.append(state)

AGE_GROUPS = list(c18.iloc[1:10, 4].values)

# although it is a bit of manual work, it is worth the effort
AGE_GROUP_RANGES = [list(range(5, 10)), list(range(10, 15)), list(range(15, 20)),
                    list(range(20, 25)), list(range(25, 30)), list(range(30, 50)),
                    list(range(50, 70)), list(range(70, 100)) + ['100+'], ['Age not stated']]

useful_data = []
for i, state in enumerate(STATE_NAMES):
    for j, age_grp in enumerate(AGE_GROUPS):
        # this list keeps only the single-year ages that fall in the current age-group
        true_false_list = []
        for single_year_age in c13.iloc[:, 4].values:
            if single_year_age in AGE_GROUP_RANGES[j]:
                true_false_list.append(True)
            else:
                true_false_list.append(False)
        # here i is the state code
        male_pop = c13[(c13.loc[:, 1] == i) & (true_false_list)].iloc[:, 6].values.sum()
        female_pop = c13[(c13.loc[:, 1] == i) & (true_false_list)].iloc[:, 7].values.sum()
        # tri
        tri_male = c18[(c18.iloc[:, 0] == i) & (c18.iloc[:, 4] == age_grp) & (c18.iloc[:, 3] == 'Total')].iloc[0, 9]
        tri_female = c18[(c18.iloc[:, 0] == i) & (c18.iloc[:, 4] == age_grp) & (c18.iloc[:, 3] == 'Total')].iloc[0, 10]
        # bi
        bi_male = c18[(c18.iloc[:, 0] == i) & (c18.iloc[:, 4] == age_grp) & (c18.iloc[:, 3] == 'Total')].iloc[0, 6] - tri_male
        bi_female = c18[(c18.iloc[:, 0] == i) & (c18.iloc[:, 4] == age_grp) & (c18.iloc[:, 3] == 'Total')].iloc[0, 7] - tri_female
        # uni
        uni_male = male_pop - bi_male - tri_male
        uni_female = female_pop - bi_female - tri_female
        item = {
            'state-code': i,
            'state-name': state,
            'age-group': age_grp,
            'age-group-male-pop': male_pop,
            'age-group-female-pop': female_pop,
            'tri-male-ratio': tri_male / male_pop,
            'tri-female-ratio': tri_female / female_pop,
            'bi-male-ratio': bi_male / male_pop,
            'bi-female-ratio': bi_female / female_pop,
            'uni-male-ratio': uni_male / male_pop,
            'uni-female-ratio': uni_female / female_pop,
        }
        useful_data.append(item)

df = pd.DataFrame(useful_data)
_____no_output_____
MIT
Q8_asgn2.ipynb
sunil-dhaka/census-language-analysis
age-analysis

- get highest ratio age-group for a state and store it into a csv file
- the above process can be repeated for all parts of the question
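The "highest ratio age-group" lookup is a sort-then-take-first pattern; a toy sketch (column names hypothetical, not the real dataframe's):

```python
import pandas as pd

# Toy ratios for one state (made-up numbers)
toy = pd.DataFrame({
    'age-group': ['5-9', '10-14', '15-19'],
    'ratio': [0.2, 0.5, 0.3],
})

# Sort descending by the ratio and take the top row
top = toy.sort_values(by='ratio', ascending=False).iloc[0]
print(top['age-group'])  # '10-14'
```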
tri_list = []
bi_list = []
uni_list = []
for i in range(36):
    male_values = df[df['state-code'] == i].sort_values(by='tri-male-ratio', ascending=False).iloc[0, [2, 5]].values
    female_values = df[df['state-code'] == i].sort_values(by='tri-male-ratio', ascending=False).iloc[0, [2, 6]].values
    tri_item = {
        'state/ut': i,
        'age-group-males': male_values[0],
        'ratio-males': male_values[1],
        'age-group-females': female_values[0],
        'ratio-females': female_values[1],
    }
    tri_list.append(tri_item)

    male_values = df[df['state-code'] == i].sort_values(by='bi-male-ratio', ascending=False).iloc[0, [2, 7]].values
    female_values = df[df['state-code'] == i].sort_values(by='bi-male-ratio', ascending=False).iloc[0, [2, 8]].values
    bi_item = {
        'state/ut': i,
        'age-group-males': male_values[0],
        'ratio-males': male_values[1],
        'age-group-females': female_values[0],
        'ratio-females': female_values[1],
    }
    bi_list.append(bi_item)

    male_values = df[df['state-code'] == i].sort_values(by='uni-male-ratio', ascending=False).iloc[0, [2, 9]].values
    female_values = df[df['state-code'] == i].sort_values(by='uni-male-ratio', ascending=False).iloc[0, [2, 10]].values
    uni_item = {
        'state/ut': i,
        'age-group-males': male_values[0],
        'ratio-males': male_values[1],
        'age-group-females': female_values[0],
        'ratio-females': female_values[1],
    }
    uni_list.append(uni_item)
_____no_output_____
MIT
Q8_asgn2.ipynb
sunil-dhaka/census-language-analysis
- convert into pandas dataframes and store into CSVs
tri_df = pd.DataFrame(tri_list)
bi_df = pd.DataFrame(bi_list)
uni_df = pd.DataFrame(uni_list)

tri_df.to_csv('outputs/age-gender-a.csv', index=False)
bi_df.to_csv('outputs/age-gender-b.csv', index=False)
uni_df.to_csv('outputs/age-gender-c.csv', index=False)
_____no_output_____
MIT
Q8_asgn2.ipynb
sunil-dhaka/census-language-analysis
observations

- in almost all states (and all cases) the highest-ratio female and male age-groups are the same.
- interestingly, in the one-language case the '5-9' age group dominates for all states, which is quite intuitive: at that early stage of life, children speak only their mother tongue
uni_df
_____no_output_____
MIT
Q8_asgn2.ipynb
sunil-dhaka/census-language-analysis
Copyright 2018 The TensorFlow Authors.

Licensed under the Apache License, Version 2.0 (the "License");
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
_____no_output_____
Apache-2.0
site/en/guide/data.ipynb
zyberg2091/docs
tf.data: Build TensorFlow input pipelines

The `tf.data` API enables you to build complex input pipelines from simple, reusable pieces. For example, the pipeline for an image model might aggregate data from files in a distributed file system, apply random perturbations to each image, and merge randomly selected images into a batch for training. The pipeline for a text model might involve extracting symbols from raw text data, converting them to embedding identifiers with a lookup table, and batching together sequences of different lengths. The `tf.data` API makes it possible to handle large amounts of data, read from different data formats, and perform complex transformations.

The `tf.data` API introduces a `tf.data.Dataset` abstraction that represents a sequence of elements, in which each element consists of one or more components. For example, in an image pipeline, an element might be a single training example, with a pair of tensor components representing the image and its label.

There are two distinct ways to create a dataset:

* A data **source** constructs a `Dataset` from data stored in memory or in one or more files.
* A data **transformation** constructs a dataset from one or more `tf.data.Dataset` objects.
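A minimal sketch of the two notions — a source built from in-memory data, followed by chained transformations:

```python
import tensorflow as tf

# Source: a Dataset built from an in-memory list.
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4])

# Transformations: chain a per-element map with batching.
ds = ds.map(lambda x: x * 2).batch(2)

batches = [b.numpy().tolist() for b in ds]
print(batches)  # [[2, 4], [6, 8]]
```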
import tensorflow as tf

import pathlib
import os
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

np.set_printoptions(precision=4)
_____no_output_____
Apache-2.0
site/en/guide/data.ipynb
zyberg2091/docs
Basic mechanics

To create an input pipeline, you must start with a data *source*. For example, to construct a `Dataset` from data in memory, you can use `tf.data.Dataset.from_tensors()` or `tf.data.Dataset.from_tensor_slices()`. Alternatively, if your input data is stored in a file in the recommended TFRecord format, you can use `tf.data.TFRecordDataset()`.

Once you have a `Dataset` object, you can *transform* it into a new `Dataset` by chaining method calls on the `tf.data.Dataset` object. For example, you can apply per-element transformations such as `Dataset.map()`, and multi-element transformations such as `Dataset.batch()`. See the documentation for `tf.data.Dataset` for a complete list of transformations.

The `Dataset` object is a Python iterable. This makes it possible to consume its elements using a for loop:
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
dataset

for elem in dataset:
    print(elem.numpy())
_____no_output_____
Apache-2.0
site/en/guide/data.ipynb
zyberg2091/docs
Or by explicitly creating a Python iterator using `iter` and consuming its elements using `next`:
it = iter(dataset)

print(next(it).numpy())
_____no_output_____
Apache-2.0
site/en/guide/data.ipynb
zyberg2091/docs
Alternatively, dataset elements can be consumed using the `reduce` transformation, which reduces all elements to produce a single result. The following example illustrates how to use the `reduce` transformation to compute the sum of a dataset of integers.
print(dataset.reduce(0, lambda state, value: state + value).numpy())
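`reduce` is not limited to sums: any associative combiner over a running state works. A small sketch computing the running maximum of the same dataset:

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])

# reduce with tf.maximum keeps a running maximum instead of a sum
mx = ds.reduce(tf.constant(0), lambda state, value: tf.maximum(state, value))
print(mx.numpy())  # 8
```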
_____no_output_____
Apache-2.0
site/en/guide/data.ipynb
zyberg2091/docs