global_01_local_1_shard_00001926_processed.jsonl/61157
OpenEd17: The 14th Annual Open Education Conference, October 11 – 13, 2017 :: Anaheim, CA
Thursday, October 12 • 11:00am - 11:25am, Royal Ballroom
Table 20 - Go Open with Google Apps
Learn about the new G Suite for Education, formerly Google Apps for Education, and how it can be integrated with Open Educational Resources. We will explore the free resources provided by Google and how to harness them for use with OERs in both classroom and professional settings. With uses ranging from collaborative documents to live hangouts, the G Suite provides a number of efficient, effective, and free tools to allow students to interact locally or remotely, synchronously or asynchronously.
Edie Erickson, Instructional Designer, Bay College
Edie Erickson is an instructional designer at Bay de Noc Community College in Michigan's Upper Peninsula. She is the recipient of the 2016 MACUL Technology Using Teacher of the Year Award for her work in K-12 education and is also an instructor for Michigan State University where...
global_01_local_1_shard_00001926_processed.jsonl/61183
News tagged with water vapour Related topics: Why steam burns are particularly vicious dateMay 08, 2018 in Other shares3 comments 0 The cosmic water trail uncovered by Herschel During almost four years of observing the cosmos, the Herschel Space Observatory traced out the presence of water. With its unprecedented sensitivity and spectral resolution at key wavelengths, Herschel revealed this crucial ... dateSep 19, 2017 in Astronomy shares628 comments 3 Video: Earth as a planet dateApr 24, 2017 in Space Exploration shares4 comments 0
global_01_local_1_shard_00001926_processed.jsonl/61190
philip hallstrom 1. a Senior Software Engineer for Instructure. 2. one who works with the Internet, esp. that of websites, databases, and content management solutions: a ruby on rails developer. 3. an avid golfer (see addicted) • late 20th Century, from the Seattle area (see Olympia, WA) usage & examples 1. Philip is helping Instructure take Bridge to the next level. 2. Philip helped Supreme Golf make it easy for golfers to compare prices and tee times from thousands of golf courses, online tee time retailers and deal sites all at once. 3. Philip wrote the Rubygem Slackistrano to send Capistrano deployment notifications to Slack. 4. Philip developed a Safari extension to easily check font properties via right click.
global_01_local_1_shard_00001926_processed.jsonl/61205
Link in Protected Content Membership Descriptions I'd like to add a link to the description for my "Member" membership level, but it doesn't show up as a clickable link on the registration page when I add one. Screenshot of link added to description: Screenshot of membership description on registration page with no link:
global_01_local_1_shard_00001926_processed.jsonl/61208
Copper Etching & Photopolymer – Printing Posted on: September 27th, 2014 Intaglio printing is the process of etching or manually scratching lines into copper (in our case) and then filling those lines with ink. The print is made when a damp piece of paper is pressed into those lines, lifting the ink out of the plate and onto the paper. VOILA! You’ve just made a print! With all intaglio processes, you will need to wet your paper by soaking it in the water tray. Shortly before printing, remove your paper from the water and pat it dry with towels. The paper should be slightly damp with no sheen. Place the damp paper on top of your inked plate (which is positioned on the press) and print. Categorised under: Copper, Etching
global_01_local_1_shard_00001926_processed.jsonl/61246
Document Type Grand Teton National Park Report First Page Last Page Atmospheric nitrogen (N) deposition rates show increasing N loadings since the 1980s in the western U.S. associated with increasing N emissions from industrial, urban, and agricultural sources (Fenn et al., 2003). Compared to the eastern U.S., the number of NADP/NTN (National Atmospheric Deposition Program/ National Trends Network) and CASTNet (Clean Air Status Network) monitoring sites is much more limited in the west, and they are rarely located in the highest-elevations, where ecosystems are likely to be more sensitive (Bums 2004). Although N deposition tends to increase with elevation in this region (Williams and Tonnesen 2000), there are considerable uncertainties about the actual N deposition levels in the Rocky Mountains. Model-simulations indicate a "hotspot" of N deposition near Grand Teton National Park (GRTE) (Fenn et al., 2003; Nanus et al. 2003), with feedlot and fertilizer N emissions in Southern Idaho, as potential sources impacting the alpine communities in GRTE. However, little data is currently available on the actual atmospheric N inputs to alpine ecosystems in GRTE, either as snow during winter or wet and dry deposition during the short snow-free period.
global_01_local_1_shard_00001926_processed.jsonl/61275
List of Abbreviations - Latex
This is a short post about generating a list of abbreviations for your document with Latex. While there are many packages available for this, I'm going to use the glossaries package. You can find the user manual of the glossaries package here. Here is the basic example.

\documentclass[a4paper,12pt]{report} %We are creating a report
\usepackage[automake]{glossaries} %Load glossaries package
\makeglossaries %Enable glossary generation
%Here we define a set of example acronyms
\newglossaryentry{Aaa}{name={AAA},description={First abbreviation}}
\newglossaryentry{Bbb}{name={BBB},description={Second abbreviation}}
\newglossaryentry{Ccc}{name={CCC},description={Third abbreviation}}
\begin{document}
\printglossary[title={List of Abbreviations}] %Generate List of Abbreviations
\chapter{Sample Chapter}
Here we are using the first abbreviation \gls{Aaa}. This is our second abbreviation \gls{Bbb}. The last one is \gls{Ccc}.
\end{document}

The above code snippet produces the following output: a "List of Abbreviations" page followed by the "Sample Chapter".
Note that the "automake" parameter is not necessary to generate the List of Abbreviations if you are using MikTex to compile Latex. You can pass different parameters to the glossaries package load command to achieve various customizations. A few of them are listed below; you can find more details in the user manual.
1. Remove the dot at the end of each abbreviation - nopostdot
2. Remove the page number at the end of each abbreviation - nonumberlist
3. Prevent abbreviation grouping - nogroupskip
4. Add "List of Abbreviations" to the Table of Contents - toc
Further, if you want to change the line spacing between elements in the abbreviation list, you can use \singlespacing, \onehalfspacing or \doublespacing according to your requirement. Here is the complete example; only the package load command changes, and the rest of the document is the same as the basic example above.

\usepackage[nopostdot,nogroupskip,style=super,nonumberlist,toc,automake]{glossaries} %Load glossaries package with extra options

Here is the final output: a Table of Contents, the List of Abbreviations and the Sample Chapter.
Hope this quick guide will help you. Happy writing!

Grad-CAM Implementation in pycaffe
You can find the code discussed in this post in this git repository. This post discusses how to implement the Gradient-weighted Class Activation Mapping (Grad-CAM) approach discussed in the paper Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization. Grad-CAM is a technique that makes Convolutional Neural Network (CNN) based models more interpretable by visualizing the input regions the model looks at while making a prediction.
Grad-CAM model architecture
I'm not going to go deeper into the paper; for a more detailed explanation please refer to the paper. You can find different implementations of this technique in Keras, Torch+Caffe, and Tensorflow. However, I was not able to find a pycaffe implementation of Grad-CAM on the web. As pycaffe is a commonly used deep learning framework for CNN based classification model development, it would be useful to have a pycaffe implementation as well. If you are looking for a quick solution to interpret your Caffe classification model, this post is for you!
If you are completely new to Caffe, refer to the official Caffe page for installation instructions and some tutorials. As we are going to use the python interface to Caffe (pycaffe), make sure you install pycaffe as well. All the required instructions are given on the Caffe web site. For this implementation I'm using a pretrained image classification model downloaded from the community Caffe Model Zoo.
For this example, I will use the BVLC reference CaffeNet model, which is trained to classify images into 1000 classes. To download the model, go to the folder where you installed Caffe, e.g. C:\Caffe, and run

./scripts/download_model_binary.py models/bvlc_reference_caffenet

Then let's write the gradCAM.py script.

import numpy as np
import cv2
import caffe

#load the model (in test/inference mode)
net = caffe.Net('---path to caffe installation folder---/models/bvlc_reference_caffenet/deploy.prototxt',
                '---path to caffe installation folder---/models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                caffe.TEST)

# load input and preprocess it
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_mean('data', np.load('--path to caffe installation folder--/python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1))
transformer.set_transpose('data', (2,0,1))
transformer.set_channel_swap('data', (2,1,0))
transformer.set_raw_scale('data', 255.0)

#We reshape the data blob as we classify only one image
net.blobs['data'].reshape(1, 3, 227, 227)

#load the image to the data layer of the model
im = caffe.io.load_image('--path to caffe installation folder--/examples/images/cat.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', im)

#classify the image
out = net.forward()

#predicted class
print(out['prob'].argmax())

Next we have to calculate the gradient of the predicted class score w.r.t. the convolution layer of interest. This is the tricky part. The Caffe framework provides an inbuilt function to calculate gradients of the network. However, if you study the documentation of the backward() function you will understand that this method calculates gradients of the loss w.r.t. the input layer (or, as it is commonly called in Caffe, the 'data' layer). To implement Grad-CAM we need gradients of the layer just before the softmax layer with respect to a convolution layer, preferably the last convolution layer. To achieve this you have to modify the deploy.prototxt file. You just have to remove the softmax layer and add the following line just after the model name.
force_backward: true

Then, using the following code snippet, we can derive the Grad-CAM.

final_layer = "fc8" #output layer whose gradients are being calculated
image_size = (227, 227) #input image size
feature_map_shape = (13, 13) #size of the feature map generated by 'conv5'
layer_name = 'conv5' #convolution layer of interest
category_index = out['fc8'].argmax() #saliency map of the predicted class; you can get the saliency map for any class of interest by specifying it here

#Make the loss value class specific
label = np.zeros(net.blobs[final_layer].shape)
label[0, category_index] = 1

imdiff = net.backward(diffs=['data', layer_name], **{net.outputs[0]: label})
gradients = imdiff[layer_name] #gradients of the loss value/predicted class score w.r.t. the conv5 layer

#Normalizing gradients for better visualization
gradients = gradients / (np.sqrt(np.mean(np.square(gradients))) + 1e-5)
gradients = gradients[0, :, :, :]
print("Gradients Calculated")

activations = net.blobs[layer_name].data[0, :, :, :]

#Calculating importance of each activation map
weights = np.mean(gradients, axis=(1, 2))
cam = np.ones(feature_map_shape, dtype=np.float32)
for i, w in enumerate(weights):
    cam += w * activations[i, :, :]

#Let's visualize Grad-CAM
cam = cv2.resize(cam, image_size)
cam = np.maximum(cam, 0)
heatmap = cam / np.max(cam)
cam = cv2.applyColorMap(np.uint8(255 * heatmap), cv2.COLORMAP_JET)

#We are going to overlay the saliency map on the image
new_image = cv2.imread('--path to caffe installation folder--/examples/images/cat.jpg')
new_image = cv2.resize(new_image, image_size)
cam = np.float32(cam) + np.float32(new_image)
cam = 255 * cam / np.max(cam)
cam = np.uint8(cam)

#Finally saving the result
cv2.imwrite("gradcam.jpg", cam)

That's it. If everything goes smoothly you will get the following result (figures: the input image and the Grad-CAM overlay). Hope this will be helpful. If you need any clarification please feel free to comment below, I'm happy to help you.

Data Pre-Processing with R
In this post I hope to discuss how we can pre-process data using the R language. I'm using R Studio for the data analysis. For this analysis I'm using the freely available Ta-Feng Supermarket data set. You can download the data set here. A description of the data set can be found here.
First of all we have to set our working directory. Let's assume that we are going to use the folder named "R_Work_Space" on the Desktop as our working directory. Then we can set the working directory as:

setwd("{Path to Desktop}/Desktop/R_Work_Space")

Then let's load our data set as follows.

suppermarket_dataset <- read.csv(file="SupperMarketData.csv", header=TRUE, sep=",")

You can view the loaded data set with the "View" command. You may see the data set as follows in R Studio.
Let's start pre-processing. First of all let's see the type of each attribute in the data set. This will be useful for our future analysis. Use the "str()" function in R for this purpose. The output will be a description as follows:
Here, Customer ID and Product Subclass being integer values doesn't make sense, as those fields hold distinct identifier values. We should convert those fields to factor values. The following code segment does the job.

suppermarket_dataset$Customer.ID <- as.factor(suppermarket_dataset$Customer.ID)
suppermarket_dataset$Product.Subclass <- as.factor(suppermarket_dataset$Product.Subclass)

Again use the "str()" function to verify your conversion. Next, for our analysis we need the day of the week information.
Using R we can add a new column named "Day" to our data set with the day of the week related to value in the "Date" column. suppermarket_dataset$Day <- as.factor(weekdays(as.Date(suppermarket_dataset$Date, "%m/%d/%Y"))) Use "View" command to view the data set and you can now see that a new column called "Day" has been added to the data set. If we had the "Time" information and if we want to analysis data hour wise, we can extract hour information from time using "hour" in "lubridate" package. For that first we have to install "lubridate" package using install.packages( ). Next step is loading the installed package. Now we can call "hour" function as below to derive hour from the "Time" information. suppermarket_dataset$Hour <- hour(strptime(suppermarket_dataset$Time, "%H:%M:%S")) Please note that to use the above command, we should have our time in the international standard notation. Then, we have to calculate total amount spend by each customer in each transaction. We can calculate that by, suppermarket_dataset$Total_Amount <- suppermarket_dataset$Amount*suppermarket_dataset$Sales.price In our data set we have a column called "Assest" which would not be used for our analysis. Therefore let's remove that column suppermarket_dataset <- suppermarket_dataset[,-c(8)] Let's assume that in our analysis we want to exclude the purchases done by people belong "Below 25" age category. "Below 25" age category is represented by the letter "A" in "Age" column. suppermarket_dataset <- suppermarket_dataset[-(suppermarket_dataset$Age == "A"),] We can use the above code segment to remove records that belong to "Below 25" age category. Now we are done with pre-processing our data set. We should save our new data set for future use. We can write the data set to a .csv file using following command. write.csv(file="Final_Suppermarket_Dataset.csv", x=suppermarket_dataset) Done for the day! Let's meet with the next post which will discuss how to do a descriptive analysis of this data set. Implementing a sever fail over feature Suppose you have a client-server application and you want to automatically switch to a standby/backup server when the primary server is unavailable due to either failure or scheduled shut down. I present you how to achieve that by this post. We can have the primary and secondary urls in a configuration file and load them to a list at the beginning of the system. For the demonstration purpose I have hard coded list of urls. We can achieve it easily by checking the response code sent by the server. I believe the code it self-explanatory. public class Envision { HttpURLConnection httpURLConnection = null; InputStream inputStream = null; BufferedReader bufferedReader = null; String result = ""; String inStr = null; urls.add(new URL("http://example_primary.com")); urls.add(new URL("http://example_secondary.com")); try { for (int i = 0; i < urls.size(); i++) { try { httpURLConnection = (HttpURLConnection) urls.get(i).openConnection(); inputStream = httpURLConnection.getInputStream(); bufferedReader = new BufferedReader(new InputStreamReader(inputStream)); int responseCode = httpURLConnection.getResponseCode(); if (responseCode == 200) { while ((inStr = bufferedReader.readLine()) != null) { result = result + inStr; } catch (Exception e) { System.out.println("Error: While requesting data from server "+urls.get(i)+" " + e.getMessage() ); System.out.println(">>>>Res: " + result); } catch (Exception e) { System.out.println("Error: While requesting data from server" + e.getMessage()); Happy Coding! 
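Putting the pieces above together, the complete start.bat for this example could look as follows. This is only a sketch: the jar, library and class names are the ones used in this sample project, and the @echo off and pause lines are optional additions.

@echo off
title XML Reader
REM -Xmx256m caps the heap size and -DAlert=true sets an example system property
REM add further libraries to the classpath as a series separated by semicolons
java -Xmx256m -DAlert=true -classpath .;Blog-1.0-SNAPSHOT.jar;.\lib\jdom-2.0.5.jar Envision
REM keep the console window open so any output or errors stay visible
pause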
How to create a batch file to run a Java program? I demonstrated how to create an .exe using maven in my previous post. Another way of distributing a software is by a run time which includes a batch file. What is a batch file?  A batch file is a type of script file which contains a series of instructions to be executed in turn. These are used to automate frequently performed tasks.   You can write a batch file to compile, to create the JAR file and run the program. But in this post I mostly focus on creating run time and I assume that you have the .jar file already with you. You can use a tool like Maven or Ant to build your .jar file. The following image shows the folder structure of my run time. Folder Structure of the runtime Here Blog-1.0-SNAPSHOT.jar is the JAR file of my program. My program is the same one I used in the previous post. So the program requires jdom2 library. One thing you should notice is that if you are using maven to build the program and unless you use a maven plugin to add dependencies into your JAR file, it does not include the other used .jars. Therefore in creating the run time you have to add those libraries in to the /lib folder. Also if your program requires any config files you can place them in the /config folder. Like wise if you have any log files write the program so that log files are placed in the /log folder. Then let's create the start.bat file. Actually what you have to do is very easy. Just place following lines in a text file and save it with the extension of .bat. title=XML Reader java -Xmx256m -DAlert=true -classpath .;Blog-1.0-SNAPSHOT.jar;.\lib\jdom-2.0.5.jar Envision As you can see I have given the name of the JAR file, libraries to be used in the program (if you have many libraries give them as a series separated by semi colon ) and at the end the name of the main class. It's really easy to make a run time, isn't it? Happy coding!  How to create an exe for a java program using maven? Usually a software is distributed as an executable file. Therefore as programmers we would need to create an .exe file for our programs. Through this post I will present you how to achieve it easily with Maven (a build automation tool used primarily for Java projects) . A windows executable can be created by using a combination of two maven plug-ins , Maven Shade plugin and launch4j plugin.  Here is my program. It is to read an .xml file and write its content to the standard out put. In order to read the .xml file we have to use a library. Here I have used jdom2. Like that in developing software we have to depend on many libraries in order to prevent reinventing the wheel. If we are building our projects using maven we can include those dependencies in the pom.xml file import org.jdom2.Document; import org.jdom2.Element; import org.jdom2.JDOMException; import org.jdom2.input.SAXBuilder; import java.io.File; import java.io.IOException; import java.util.List; public class Envision { public static void main(String[] args){ File xmlFile = new File("D:\\example.xml"); SAXBuilder builder = new SAXBuilder(); try { Document document = (Document) builder.build(xmlFile); Element rootNode = document.getRootElement(); List books = rootNode.getChildren("book"); System.out.println("This is my book store"); for (int l = 0; l < books.size(); l++) { Element book = (Element) books.get(l); System.out.println("Name :"+book.getChildText("name")+" Author :"+book.getChildText("author")); } catch (IOException io) { } catch (JDOMException jdomex) { Here is my pom.xml file. 
Here Maven Shade plugin is used to add all the dependencies in the program into the runnable jar file. The launch4j creates the .exe with vender information and a nice icon too.  I have added the exe.ico in src/main/resources.  <!-- Command-line exe --> <errTitle>App Err</errTitle> <copyright>2014 envision.com</copyright> After configuring the pom.xml file just execute maven install to get the .exe file.  That's it. Check the target folder in you project folder to find the .exe.  Hope this would help you :)  Happy Coding!  "Please let me be myself....!" Have you ever heard your soul is yelling like this? After being fed up of pretending : presenting yourself as someone else to the world. Everyone of us has played this game, may be when you are with your crush, with your boss or with whom you want to impress. You feel like it's not real you that they see. Yet you don't want to change it, because you are afraid of loosing their attention. You hide yourself behind a "MASK". The reason may be different : to impress some one, not to be neglected or to hide your own emotions, but most of the time you use a mask. So do I. Sometimes we have a collection of masks to be put on based on our immediate environment. Rather to meet different masks out there. Now if you feel "this is not my story", well! that is fantastic! But to be honest it is not the case. Life is almost a masquerade party where everyone is wearing a mask and doing pretty crazy stuff. They know they are secured behind the MASK. We need this feeling of security because most of us always bother about others more than ourselves. We always hesitate what others would think about oneself, how would they react or would they laugh at us. Ultimately, your heart is getting heavy with the fear of being rejected, ignored  and criticized. That fear forces you to put on a MASK. You try to make sure that you present yourself as it suits to the society. Sometimes the society itself forces to hide our true inner selves. Expressing your true opinions on something may be destructive. In other case your views might not be compatible with others'. May be things are going wrong with you, yet you have to put a smile in your face, because no one is there to care. In each of these cases we have to be with a mask. Sometimes we use masks neither because of the fear being rejected nor the expectations of the society, but for our benefits. It's an open secret that politicians are pretending in front of the public to retain their power. Not only them but also us, in our day to day life, hide our true selves in order to have benefits. Whatever the reasons we are wearing masks, we add layers and accessories that add more credibility to the costume. Gradually the costume become heavier and heavier until it becomes to a point where you are living in someone else's life. We end up with mental and physical exhaustion. We get burnt out from the effort of trying to maintain a facade. We breakup with ourselves - most valuable relationship in this planet. At the end, we will have a life full of suffering, or at least a life not enjoyable. We  lose our authenticity. Our own thoughts, own views and ideas would be buried with our dead bodies which could have changed the world or at least our own lives. So get real. Let the world identify you as YOU. You are another master piece on this earth.
global_01_local_1_shard_00001926_processed.jsonl/61281
George Mason University George Mason University Mason George Mason University CONF 665: Special Topics in Conflict Analysis and Resolution Course Information from University Catalog Repeatable within Degree In-depth study of contemporary areas of conflict resolution practice. Fulfills elective requirement for certificate program.  Topics vary. Schedule Type: LEC Hours of Lecture or Seminar per week: 3 Hours of Lab or Studio per week: 0 Credits: 3 CONF 502 or permission of the instructor Centers and Publications
global_01_local_1_shard_00001926_processed.jsonl/61295
Artificial Intelligence is a phrase that often provokes a strong reaction in a lot of people who hear it. There are the gloom and doom prognosticators who tell us that 'Judgement Day', the day the intelligent machines take over, decide we are more trouble than we are worth and wipe us out, is near. There are also the overly optimistic prognosticators who tell us that the day AI will take over and we will enter a golden age of humanity beyond our wildest dreams is near.
Kasparov charts a course in between these two extremes using the extremely compelling example of his two matches against IBM's Deep Blue. He won the first and lost the second, which was the first time a World Chess Champion was defeated by a chess engine. Kasparov uses these matches, his preparation and the preparation the IBM team employed, to paint an interesting picture of both machine and human intelligence. The conclusion he draws is that machine and human intelligence are complementary to each other, and machine intelligence enhances human intelligence to the point where mediocre chess players using chess engines can easily defeat an International Grand Master, something that has, in fact, been done. Our future, according to Kasparov, is to embrace what the machines offer us and use them to augment our human intelligence.
Throughout the book Kasparov makes the point that research into AI has shown that machines are good at the types of things humans are not, and vice versa. Machines can analyze millions of positions per second, for instance, while humans can only look 4-5 moves into the future. On the flip side, the human mind can see which tactics are worthwhile and which are not, which makes the human mind's search far more effective. The future, according to Kasparov and backed up by real-world results in the chess world, is a synthesis of the two, the end result of which is the enhancement of the human mind.
If you are interested in AI, chess and the future of expert systems, this is one book you'll want to read.
global_01_local_1_shard_00001926_processed.jsonl/61303
In stock Product Description Root is a medium complexity, euro/wargame game for 2-4 players.  It takes place in a fantasy setting where factions of animals battle for control of a great woodland. The factions of Root are highly asymmetrical, so you must play to your faction’s strengths • The cats are the current rulers of the wilderness.  They must police their domain while harvesting the wood they need produce workshops, lumber mills, and barracks.  They win by building new buildings and crafts. • The hawks of Eyrie were the rulers of old . They must capture as much territory as possible and build roosts before they collapse back into squabbling. • The mouse Alliance hides in the shadows, recruiting forces and hatching conspiracies. They begin slowly and build towards a dramatic late-game presence, but only if they can manage to keep the other players in check. • The Vagabond racoon plays all sides of the conflict for its own gain, while hiding a mysterious quest. Players drive the narrative, and the differences between each role is what gives Root its high level of interaction and replayability. Designer: Cole Wehrle Publisher: Leder Games Player numbers: 2-4 players Recommended Age: 10+ Game Time: 60–90 Min Board Game Geek Listing Publisher Website Root: The Riverfolk Expansion (2018) Product Contents: 2 Dice 4 Player Boards for each faction (Marquise, Eyrie, Alliance, Vagabond) 2 Counter Sheets 1 Mounted Board (6-panel) Meeples (25 Marquise, 20 Eyrie, 10 Alliance, 1 Vagabond). 107 cards • 54 game deck cards • 4 eyrie leader cards • 2 eyrie viziers cards • 15 vagabond quest cards • 3 vagabond variant cards • 4x 4 faction overview cards • 4x 1 walk-through cards • 9 expansion cards
global_01_local_1_shard_00001926_processed.jsonl/61320
I’m obsessed with finding the right tool for the job: my main source of stress is witnessing other people determinedly using inappropriate tools or processes, usually because they (incorrectly) think they don’t have time to fix it. When it comes to writing, my go-to tool for years has been Scrivener. I usually talk to people about how Scrivener is really useful, focusing on the usual key features: It keeps all your research and notes in one place! It makes it easier to edit a completed draft! It’s got a good no-distractions writing mode! It’s true that Scrivener has many excellent features for making a writer’s life easier. What I’ve become aware of only recently is that it has also made a significant contribution to making my writing better. A traditional word processor, such as Word or Google Docs, presents your text in a linear, start-to-finish fashion. It prioritises simulating pages, showing how your document will print out. This is very useful when writing letters or work-related reports. For the novel writer, or even the short story writer, the number of pages is largely irrelevant, because how your manuscript fits onto A4 paper has absolutely nothing to do with how it will appear in an ebook, paperback or hardback. The useful metric for writers is word count, which is how you know whether you’re writing a short story, a novella, a novel or a fantasy novel. Scrivener eschews visual representation of pages in favour of emphasising story structure. The somewhat abstract notion of ‘pages’ fades away, because this is your manuscript: pages only matter when it’s finished and has gone through the lengthy process of becoming a book. In Scrivener, the focus is on the structural composition of your entire story. What this means is largely up to each individual writer. I tend to split my novels into distinct parts, each of which contain multiple chapters — a fairly common design. In Scrivener this is easily represented with each part having its own folder, inside of which are individual chapters. I can jump between these chapters with a single click, make new ones or even view them all as a single, flowing manuscript. Over time this has influenced my writing, making it easier to structure the narrative. Rather than having a single, gigantic, monolithic manuscript in Word, or an unknown blank page, instead I can begin to shape the book before I’ve written the words. As I progress through the novel, I can easily see the shape of it by glancing at the contents list: I’m always anchored clearly in the story, and the chapter-based structure of Scrivener helps me to remember what I’ve already written, rather than my own text becoming increasingly alien to me as I go beyond the 100k mark. This in turn actively improves the pacing of the novel, even while creating the first draft. Judging pacing in a linear Word-style document is hard, because it’s difficult to visualise the overall shape of the story. In Scrivener, even while I’m writing a specific chapter I still have an eye on the overall pace and what’s come before. I’m continually shifting the story, based on what’s come before: it becomes very simple at a glance to know how long it’s been since a particular character last appeared, or when the next big plot beat is approaching. This is all stuff that would normally get finessed and fixed in the second draft edit, but Scrivener makes it possible to zone in on the best version of the book right from the start. 
Which isn’t to say that editing and multiple drafts aren’t needed — it’s just that Scrivener puts you ahead of the curve. Research, ideas and development Before I used Scrivener, my research would be scattered across various notebooks and digital files and folders. If I had an idea I’d jot it down, but usually in a place that was disconnected from the main manuscript, which then made it very easy to forget. Scrivener keeps all of your research and notes in the same project file, alongside your main manuscript. It seems like a small convenience but it’s had a big impact on my writing: by removing the inconvenience of having to remember where I’ve put all my notes, the result is that I reference them far more often. I’ll even have pertinent notes open in one Scrivener panel while I’m writing the text in another. This makes the writing process faster but it also helps me be more accurate and dig deeper into themes and character, because all my scattered thoughts are contained and presented to me as and when I need them. I write serialised, online fiction so this is especially important as it keeps me on track and avoids plot holes and inconsistencies. Ultimately, it has served to remove the inconveniences and practical irritations inherent to writing long form stories, allowing me to focus on the storytelling itself. Scrivener 3 is already out on Mac and should be appearing on PC very soon. I can’t wait to get my hands on it and dig into its new features. If you’re not already using it, that’ll be the perfect time. You can read my serialised novel, The Mechanical Crown, which is written in Scrivener, over on Wattpad. Photo by Aaron Burden on Unsplash Leave a Reply
global_01_local_1_shard_00001926_processed.jsonl/61345
What is New in CSS3?
Cascading Style Sheets, or CSS, adds style to a webpage: for example, the fonts, colours, sizing and spacing. It describes how the HTML elements are rendered both on screen and on paper. CSS3 is the latest version of CSS.
CSS3 introduced new or extended features like animation, gradients, media queries, flexbox, shadows and transforms. Some of these features were introduced for the benefit of mobile device programming. For example, using media queries the screen size, resolution and orientation of the device can be determined. This helps in delivering tailored styling to tablets and smart phones. Similarly, flexbox is a new layout mode in CSS3. Using flexbox ensures that elements behave predictably when the page layout must accommodate different screen sizes and different display devices.
CSS3 Sample
Further on, I have experimented with some of the new "techniques" in my sample CSS code below, like rounded corners, shadows, a button with a hover effect and even some simple animation. Click on Run Pen to view. You can view the HTML and CSS code by clicking on the HTML or CSS tabs below, and you can view the output in the Result tab.
Why use CSS3?
I am using CSS3 for various things during my web development:
1. For prototyping: I usually create my first prototype using pen and paper. But then I straight away move onto a simple prototype using HTML and CSS
2. For template/theme design: Although most of WordPress is written in PHP, it is HTML and CSS that defines the finer details (like how I want the article date to appear)
3. For simple animations and features: I will be using CSS3 to introduce some simple features and animations. More about this in my future articles.
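To make the media query and flexbox features mentioned above concrete, here is a small illustrative snippet (the class name is made up for this example): a row of cards laid out with flexbox that stacks vertically on narrow screens.

/* Flexbox: lay the cards out side by side and let them share the row equally */
.card-row {
  display: flex;
}
.card-row > div {
  flex: 1;
  margin: 8px;
}

/* Media query: on screens narrower than 600px, stack the cards vertically */
@media (max-width: 600px) {
  .card-row {
    flex-direction: column;
  }
}

The same markup then adapts to both desktop and phone widths without any JavaScript.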
global_01_local_1_shard_00001926_processed.jsonl/61352
How to define intelligence? Well its been a very long time since my last blog. I have been finding the answers on how do we define intelligence? And here is what I have learnt. Its been a very long time on which the debate is going on the necessities of intelligence, but sadly, there is little sign of consensus. Lets have a brief talk on it : “AI is concerned with methods of achieving goals in situations in which the information available has a certain complex character. The methods that have to be used are related to the problem presented by the situation and are similar whether the problem solver is human, a Martian, or a computer program.” Intelligence usually means “the ability to solve hard problems”. “By ‘general intelligent action’ it seems to be the same sort of intelligence as we see in human action.In any real situation behavior appropriate to the ends of the system and adaptive to the demands of the environment can occur, within some limits of speed and complexity.” And after all these years of study, we still don’t know very much about it. There are a lot more queries than answers. We all know that a well-founded definition is usually the result, rather than the starting point, of scientific research. However, there are still reasons for us to be concerned about the definition of intelligence at the current time. Though clarifying the meaning of a concept always helps communication, this problem is especially important for AI. Without a clear idea of what intelligence is, it is very hard to say why AI is different from computer science or psychology. More importantly, the researcher in this field needs to justify his/her research plan according to such a definition. Anyone who wants to work on artificial intelligence has to face a two-phase assignment: • to choose a working definition of intelligence • to produce it in a computer. A working definition is a definition concrete enough that you can directly work with it. By accepting a working definition of intelligence, it does not mean that you really believe that it fully captures the concept “intelligence”, but that you will take it as a goal for your current research project. Therefore, the lack of a consensus on what intelligence is does not prevent each researcher from picking up a working definition of intelligence. Actually, the thing is, unless we keep one definition, we wont be able to claim that we are working on artificial intelligence. By accepting a working definition of intelligence, the most important commitments a researcher makes are on the acceptable assumptions and desired results, that helps us binding all the concrete work that follows. Before studying concrete working definitions of intelligence, we need to set up a general standard for what makes a definition better than others. Carnap meets the same problem when he tried to clarify the concept of “probability”. The task “consists in transforming a given more or less inexact concept into an exact one or, rather, in replacing the first by the second”, where the first may belong to everyday language or to a previous stage in the scientific language, and the second must be given by explicit rules for its use. According to him, the working definition, must fulfill the following requirements: 1. It is similar to the concept to be defined, as the latter’s vagueness permits. 2. It is defined in an exact form. 3. It is fruitful in the study. 4. It is simple, as the other requirements permit. 
All these requirements are very much reasonable and suitable for the current purpose. And let us have a look what they mean concretely to the working definition of intelligence: • Similarity: Though “intelligence” has no exact meaning in everyday language, it does have some common usages with which the working definition should agree. If we consider that a normal human beings are intelligent, but most animals and machines are either not intelligent at all or much less intelligent than human beings. • Exactness: Given the working definition, whether a system is intelligent should be clearly decidable. For this reason, intelligence cannot be defined in terms of other ill-defined concepts, such as mind, thinking, cognition, intentionality, rationality, wisdom, consciousness, and so on, though these concepts do have close relationships with intelligence. • Fruitfulness: The working definition should provide concrete point for the research based on it, for instance, what assumptions can be accepted, what phenomena can be ignored, what properties are desired, and so on. Most importantly, the working definition of intelligence should contribute to the solving of fundamental problems in AI. • Simplicity: As intelligence is surely a complex mechanism, the working definition should be simple. Theoretically, a simple definition makes it possible to explore a theory in detail; and practically a simple definition is easy to implement. For our current purpose, there is no exactly “right” or “wrong” working definition for intelligence, but there are comparative ones. When comparing proposed definitions, the four requirements may conflict. For example, one definition is more fruitful, while another is simpler, other may be exact and may be the other one is similar. In such a condition, some weighting and trade-off becomes necessary factor. However, there is no evidence showing that in general the requirements cannot be satisfied at the same time.
global_01_local_1_shard_00001926_processed.jsonl/61356
I have two SVN projects in use from another SVN repository using svn:externals. How can I have the same repository layout structure in Git? • 7 Anyone have a new answer to this in the last 4 years, or is the world of git the same today? – DougW Apr 19 '13 at 23:37 • 3 @DougW Yes, I have a new answer below: git submodule can now emulate svn:external (since March 2013). – VonC Aug 6 '13 at 18:58 up vote 122 down vote accepted Git has two approaches similar to, but not exactly equivalent to svn:externals: • Subtree merges insert the external project's code into a separate sub-directory within your repo. This has a detailed process to set up and then is very easy for other users, because it is automatically included when the repository is checked out or cloned. This can be a convenient way to include a dependency in your project. It is easy to pull changes from the other project, but complicated to submit changes back. And if the other project have to merge from your code, the project histories get merged and the two projects effectively become one. • Git submodules (manual) link to a particular commit in another project's repository, much like svn:externals with an -r argument. Submodules are easy to set up, but all users have to manage the submodules, which are not automatically included in checkouts (or clones). Although it is easy to submit changes back to the other project, doing so may cause problems if the repo has changed. Therefore it is generally not appropriate to submit changes back to a project that is under active development. • 17 FYI, it is now possible to specify specific revisions with svn:externals now (since 1.5 or 1.6 I believe?) – Nate Parsons Sep 22 '10 at 21:14 • 5 Since the links in the answer are outdated, here are some fresh ones: Subtree Merge @ Github:help; Working with submodules @ Github:help; Submodules @ Git user manual – Bart Nov 11 '11 at 10:54 • 6 FYI, git submodules can be automatically managed and commited. git creates a .gitmodules file that can/should be commited just like the .gitignore file. See [git-scm.com/book/en/Git-Tools-Submodules] for more information. – mikijov May 30 '12 at 14:47 • 4 @NateParsons It has always been possible to specify exact revision numbers with svn:externals. With revision 1.5, the syntax was changed to a more flexible format. What was added was relative URL addressing. – David W. Aug 6 '13 at 19:52 • @NateParsons but is it possible to omit revisions with git submodules... >_> – Trejkaz Sep 28 '16 at 3:22 As I mention in "Git submodule new version update", you can achieve the same SVN external feature with Git 1.8.2 submodules: This is enough for a submodule to follow a branch (as in the LATEST commit of a remote branch of a submodule upstream repo). All you need to do is a: git submodule update --remote That will update the submodule. More details are in "git submodule tracking latest". To convert an existing submodule into one tracking a branch: see all the steps in "Git submodules: Specify a branch/tag". • Can you do partial checkout like with svn:externals? – nowox Aug 2 '17 at 17:03 • @nowox Yes, you can have sparse checkout (git 1.7+ stackoverflow.com/a/2372044/6309) associated to submodules (stackoverflow.com/a/17693008/6309) – VonC Aug 2 '17 at 17:58 • unfortunately all the sparse checkout related answers never give any example :( I'll try to write a Gist example for this... – nowox Aug 3 '17 at 6:17 • There is still an issue with this. 
You still have to get the whole history of a repository when you only need one small part. In my case it is 100 kB out of 2 GB. I can of course use --depth but it doesn't really address the problem. – nowox Aug 3 '17 at 6:19
• @nowox It is best to ask a new question explaining exactly what your use case is: I have no idea if your 2GB repo is a submodule, or a main repo with submodule, and what exactly you need to extract from it. – VonC Aug 3 '17 at 6:24
You should look into Git submodules. It should allow almost exactly what you're looking for.
• @JonathonReinhart - I made the edit – Sonny Aug 21 '14 at 17:31
For the latest version of Git I'd suggest reading about Git submodules in the official Git documentation.
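For reference, a minimal sketch of the branch-tracking setup described in the answers above; the repository URL, branch name and path here are placeholders rather than anything from the original question.

# Add a submodule that follows a branch, much like svn:externals without a pinned revision
git submodule add -b master https://example.com/lib-repo.git libs/lib-repo
# Later, bring every branch-tracking submodule up to the latest commit on its branch
git submodule update --remote

The branch to follow is recorded in .gitmodules, so collaborators who clone the repository get the same tracking behaviour once they initialize the submodules.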
global_01_local_1_shard_00001926_processed.jsonl/61371
Göran Hongell Born in Helsinki, Göran Hongell (1902-1973) was one of the pioneers of the Finnish glass tradition. Having started out as a decorative painter, Hongell became one of Iittala’s first designers, responsible for readying existing models for serial production. Before long he was designing his own pieces, including various stackable glassware sets in the 1930s, and the now iconic streamlined ‘Aarne’ glasses in 1949. Filter products • Change view: • Two • Four Refine Your Results By: Add to Wishlist 'Aarne' aquavit glass, set of two Iittala
global_01_local_1_shard_00001926_processed.jsonl/61374
Month: February 2018 On their own growth <Today was Signing Day across the US, and one of my babies from my first year at high school (4 years ago) participated.> “Look at you, my little baby-child, all grown and mature…” “Ah, come on, Miss! I haven’t changed that much, have I?” “Oh, you’ve still got that same dimpled smile, but you have definitely matured since I taught you and actually needed to call you that full, middle-name-included-name so you understood how much trouble you were in.” “You were wrong for that.” “You deserved it – and don’t tell me it didn’t work when I needed it to…” “Name one time that you ‘needed to’ because I was too immature.” “Uh, how about the time that you performed a flying tackle on your friend who walked in 15 minutes late, slammed him to the ground and completely disrupted my classroom?” <Thinks.> “Yeah, but that was a good tackle, though, Miss, you gotta admit.”
global_01_local_1_shard_00001926_processed.jsonl/61386
Referral Links are the most popular way to track affiliate orders and credit the affiliates. Once you approve an affiliate, a unique link will be created for the affiliate consisting of your store's domain followed by tracking parameters (e.g. Refersion relies on first party tracking which means you should see SEO benefits as affiliates promote your links. Creating Page-Specific Referral Links (Deep Linking) You or your affiliates can easily create page specific links. To do this, go to Dashboard > Generate Referral Link and input the destination page. Note that this address needs to begin with the same domain address you created when signing up to Refersion, also displayed on this pop-up. After entering the destination page, select an affiliate and click Generate. Refersion tracking links also support SubIDs. SubIDs are additional custom tags that allow for additional tracking of affiliate links, and they can be any string of text. For example, an affiliate might be interested in tracking if a conversion came through a Facebook post, a Twitter post, or the header/footer/specific page on their blog. SubIDs are great for more detailed tracking. For more info on SubIDs, check out this article. Changing Referral Links Referral links can be changed in two ways, affiliates can use our integration directly from their dashboard (or a different third party URL shortener) and you can create custom Vanity URL redirects for the affiliates, with a new URL redirecting to the affiliate link. Vanity URLs exist outside Refersion, so you'd have to let your affiliates know what they are (their actual link would still show up on their affiliate portal). Please note: Unfortunately, you cannot create vanity links inside Refersion. Please use one of the three options below.  Option 1: Using our integration, affiliates can automatically convert their referral links into links right from the affiliate dashboard: Option 2: Domain Redirect You can setup a redirect from your shop to an affiliate's referral link. For example, you can create: And have that automatically redirect to: Option 3: For Shopify Users:  Learn how to create vanity affiliate links in Shopify. We also have another app, called Traffic Control, that allows you to create redirects in bulk. What's Next? Learn about the different statues for a conversion. Did this answer your question?
global_01_local_1_shard_00001926_processed.jsonl/61401
I’ve done it again folks. As always, I drank from my hot drink way too quickly. and I’ve burned my tongue. It happens without fail. Especially if I get a latte with whipped cream on top. The drink lulls me into a false sense of security as I joyfully eat some of the whipped cream, and then I get a splash of hot liquid on my tongue. I also consistently burn my tongue and/or the roof of my mouth on pizza. Why can’t I just learn to wait? So my questions are: Do you burn your mouth on food all the time like me? Are there other actions that end up hurting/hindering you that you continue to do, without thinking about it? If you don’t want to talk about those, then feel free to talk amongst yourselves!
global_01_local_1_shard_00001926_processed.jsonl/61403
Monthly Archives: February 2016 Open Ears The ability to hear God is vital to the Christian. Isaiah gives us an insight into this essential skill when he says, “He wakens me morning by morning, wakens my ear to listen like one being instructed. The Sovereign Lord has opened my ears.” The importance of hearing God in the morning is hereby identified. Hearing from God is probably the most important thing that will happen in your life today. DAILY PRAYER: Lord, I am desperate to hear Your voice. Speak to me and grace me to be obedient. DAILY READING: Isaiah 50-52 No Disappointment Through Isaiah, God declares that when He ultimately turns the fortunes of His people to show His blessing, they will not be disappointed. The operative word used is “hope.” Jesus is our Blessed Hope. Paul said if we didn’t have the hope Christ brings, we would probably be miserable.  But, we DO have this hope. DAILY PRAYER: Lord Jesus, our hope is in You. Let this hope be our anchor when life blows us around. DAILY READING: Isaiah 46-49 No Other God A strong, recurring theme of Isaiah … God declares that He is the ONLY GOD. Simply put, “There is NO OTHER!”Everything originated with God, therefore, all else is created. How are we to respond to His greatness and power? “Turn to me and be saved,” He lovingly implores. DAILY PRAYER: Lord, You are my God. You are my Savior. You are my All in All. DAILY READING: Isaiah 44,45 Fear Not! “I have redeemed you.” Isaiah has the privilege of stating the Lord’s protective declaration over His people. It’s comforting to know that in our life’s journey, regardless of the challenges we experience, we have a determined destination for eternity. God is not a man that He should lie. He said we’re safe. WE’RE SAFE in His arms. DAILY PRAYER: Lord, though the storms of life rage around us, we will not fear.  You are with us! You have promised us protection. DAILY READING: Isaiah 41-43 God’s Glory Revealed The prophet Isaiah foretold the day when, “The glory of the Lord will be revealed, and all people will see it together.” [Is. 40:5]  In that this entire passage expresses one of the Bible’s clearest revelations of the Messiah, we know the incarnation of God’s glory was Jesus, His Son. In Christ dwelt the fullness of the Godhead bodily. Look no further, nor for any other method to realize God’s power and glory.  It’s all in Jesus. DAILY PRAYER: Lord, Whom have I in heaven but You, and there is none upon earth that I desire besides You. DAILY READING: Isaiah 38-40 The Lord, Our Confidence “They that trust in the Lord shall be as Mt. Zion, which abides forever.” The elements of this life will always disappoint, but the Lord will always fulfill His promise to give His best to His people. Confidence, hope, faith … all terms that God’s people can live in and prosper. Look up; your redemption is getting close. DAILY PRAYER: Lord, we trust You today for every challenge that is placed before us. DAILY READING: Isaiah 35-37 God’s Warnings “That day” is a recurring expression in Isaiah’s prophecy. It makes sense to assume that it refers to a future divine visitation from Yahweh. Regardless, God watches the events of this world from His heavenly vantage point, keeps records, and will eventually right all wrongs, punish those who have violated His ways, and reward those whose lives pleased Him. He faithfully warns offenders through His prophets and preachers. We are wise to heed His warnings and live in the protective shelter of His Son, Christ Jesus. 
“In Him we [can] live, and move, and have our being.” DAILY PRAYER: Lord Jesus, You are my hope and refuge. I rest in You and will live out Your teachings with my life. DAILY READING: Isaiah 31-34
global_01_local_1_shard_00001926_processed.jsonl/61428
Vonnegut Trivia – Week of January 21, 2018 For this week’s question, here’s another opening line from a Vonnegut novel. Q: Which novel begins with the sentence: ‘Everyone now knows how to find the meaning of life within himself.”    a) The Sirens of Titan b) Cat’s Cradle C) Hocus Pocus D) Timequake Check back next week for the correct response.  The answer to the January 7th question was B–Breakfast of Champions ends with the single word, “Etc.” For more Vonnegut, here’s a 1989 interview featuring Kurt Vonnegut on The Dick Cavett Show.  Vonnegut discusses the then-current humanitarian crisis in Mozambique. Leave a Reply WordPress.com Logo Google+ photo Twitter picture Facebook photo Connecting to %s
global_01_local_1_shard_00001926_processed.jsonl/61439
Lost Causes and Signs Another Return I always go MIA for long periods of time, then come back suddenly.  I guess you’re all used to that by now.  I guess a lot has happened, and I really do need to “restaple” my heart to Christ.  It’s been a hard year, but I’m ready to stop living in fear that I’m somehow beyond God’s reach.  Depression, eating disorder, anxiety, Dad leaving me—my God is greater than the sum of my fears.  Just a stream of consciousness prayer I wrote to God today: I ask You constantly for things I can’t even comprehend, so I’m going to humble my prayers to layman’s terms.  I’m looking for Your peace, Lord, first and foremost.  I pray that You will help me to quiet the intrusive, fear-stricken voices in my head so that I may listen to You instead and come to hear the quiet words You whisper into my heart.  I trust that You’ll bring me back to Your arms, Jesus, and I promise You that I will, by the help of Your grace, let Your gentle, loving voice lead me back home.  I’m a little lost right now, Jesus, but I will not give up.  I love You more than I can understand, and, if nothing else, I will always stretch my arms toward You as You pull me back to shore.  Again, I’ll say it, Jesus: I love You.  I’ll give You my nothing if it’s all I have. Out of context, I’m sure this prayer is pretty incoherent, but in light of my faith life as of late, I feel like it says all that I’ve been wanting to say to God. After a Long Hiatus… – thelightiswhite ♥ Stream of Consciousness Prayer You taught the sun to shine so bright, to hide beneath the hills and trees at night. You made all things with a purpose, You gave us each a soul, and, Jesus, only You can make me whole. I spent a long time struggling in vain to make sense of existence based on chance, a universe without a God to tell the skies to rain, where chromosomes and DNA declare us all the same, and love is the effect of raw endorphins to the brain, and I am no creation of God’s hands. Those hands that hold and love and give, hands that curled in pain as He, by His children, was tortured and slain, so that sinners like me could live, and choose to misuse and hate and abuse and refuse His very name. We hurt Him and aggrieve Him, but He loves us all the same. Jesus, I’ll know You’ll be with me for whatever comes. Like sentinels, my soul waits for the day You’ll call me Home. Cardboard Testimonies We do this at my parish’s Confirmation retreat, and it’s one of the most powerful experiences I’ve ever known. I did it last year as a peer, and it was so overwhelmingly beautiful. The front of my piece of cardboard said: “I hated myself. Felt worthless and didn’t want to live.” When I flipped it over, it read: “He told me I was worth dying for.” St. Maria Goretti, pray for us Agnes in Agony Today is the saint’s day of one of my dearest saints, St. Maria Goretti, who along with St. Agnes, I invoke every day. She is a modern virgin martyr, a patron of chastity, teenage girls, and crime victims, and a witness and model of purity and forgiveness. Maria was eleven years old, a poor Italian farm girl, when in 1902 Alessandro Serenelli, a nineteen-year-old farm hand and neighbor, tried to rape her. Alessandro had approached Maria a number of times before seeking sexual favors, but she had always refused; he had tried to rape her at least once before. This time when she refused him, he became enraged. She fought him, imploring him not to do what he wanted to do, a mortal sin, insisting she would rather die than submit. 
In the end, Alessandro stabbed her eleven times. Before she died some twenty hours later, Maria forgave her…
global_01_local_1_shard_00001926_processed.jsonl/61451
My encounter with the agent from Sacha Baron Cohen's masterpiece 'Bruno'

Let's talk about Lloyd of AB Management, apparently an agent for actors and writers based in Los Angeles. When I was pitching my tits off at the Pitchmart, Lloyd also happened to be at the event. Once I had finished pitching to my choice of producers, I noticed he was sitting alone. He looked miserable. So I walked over to his table and introduced myself. He asked me what I had for him. I mentioned the title, genre and logline, and gave a very brief outline of the story. He asked me what happened to the Pope in my story. I told him that I didn't know, because he's not a character that I had included in my story, and was therefore left to the reader's imagination. Since the villain had taken over the Vatican, it was obvious he had obliterated everyone including the Pope, I explained. But agent Lloyd would not accept this. He began to raise his voice and get angry, and insisted that I explain what had happened to the Pope. Then I politely made it clear that the Pope and his posse had been exterminated by the villain and his clan. By now Lloyd was furious at me for not including the Pope in the script, and said that my script (which he hadn't read) did not make sense. I asked him to calm down and not take it too seriously, at which point he responded by demeaning my work and saying that he wouldn't pay a dime for my script. I replied that I didn't give a toss about his opinion, and left. To cut an epic short, it turned out that Lloyd had a part in Sacha Baron Cohen's 'Bruno', in which he was cast as himself – the agent. If you have seen the film you will definitely remember the scene in which Lloyd sets up a panel of casting directors only to watch Bruno do a nude weenie-spinning dance act. How traumatic for Lloyd… and that's all I have to say about that…
global_01_local_1_shard_00001926_processed.jsonl/61460
Heavy Armored Defender Knight
Sold by: Tokegameart

Heavy Armored Defender Knight. A hero guy, perfect for your next turn-based RPG game.

Product Description

Heavy Armored Defender Knight – 2D Character Sprite

The Heavy Armored Defender Knight character sprite is a set of character animations that is suitable for 2D side-scrolling and top-down RPG games. A cartoon-styled character, suitable for all ages. Perfect as an enemy or hero in your next 2D turn-based medieval RPG game.

This includes 17 cool animations:
1. Dying
2. Falling Down
3. Hurt
4. Idle
5. Idle Blinking
6. Jump Loop
7. Jump Start
8. Kicking
9. Run Slashing
10. Run Throwing
11. Running
12. Slashing
13. Slashing in The Air
14. Sliding
15. Throwing
16. Throwing in The Air
17. Walking

The main file includes:
• .AI ("Heavy Armored Defender Knight.ai" Illustrator file)
• .EPS (14 files, one for each character vector part)

Features:
• 100% vector
• Customizable vector art
• Editable, or add your own animation using Spriter
• Unity ready
global_01_local_1_shard_00001926_processed.jsonl/61474
Essential Tremor Causes
May 24, 2018

Essential Tremor is separate from other health issues and does not cause other health problems. It is mostly a genetic condition: over 50 percent of sufferers develop it through a genetic mutation, which is why it is also known as familial tremor. It still isn't clear exactly what causes Essential Tremor, but we have a general understanding of why it happens. ET is a nervous system disorder that causes rhythmic shaking in almost any part of your body.

There are a few known risk factors for ET, such as genetic mutation and age. The genetic form is caused by one parent passing a single defective gene to their child. If you have a parent with the mutated gene, you have a 50 percent chance of developing the disorder yourself. If a genetic form of Essential Tremor is present, it is usually evident at a young age, though it may never manifest itself with the commonly known symptoms. In most families, Essential Tremor appears to be inherited in an autosomal dominant pattern, which means one copy of an altered gene in each cell is enough to cause the disorder.

Essential Tremor may appear at any age, but it is more common in people over 40, most common in the elderly, and it can worsen over time. Some studies have suggested that people with Essential Tremor have a higher than average risk of developing neurological conditions, including Parkinson's disease, or sensory problems such as hearing loss, especially in individuals whose tremor appears after age 65. Tremor frequency depends on the person, but it can change throughout a person's lifespan.

Tremor is usually caused by a problem in the deep parts of the brain that control movement. Like most tremors, ET does not have a single pinpointed and direct cause, and, like ET, some other types of tremor also appear to be inherited and run in family lines. Tremors can happen on their own or be a symptom of other neurological problems, including strokes, multiple sclerosis, traumatic brain injury and diseases like Parkinson's. Other potential causes that have been suggested include mercury poisoning and anxiety.

Researchers are currently working to identify the particular chromosomes that may be linked to Essential Tremor, but no specific genes have been solidly connected. Several genes, as well as environmental factors, likely contribute to an individual's likelihood of developing this complex condition.
global_01_local_1_shard_00001926_processed.jsonl/61493
22nd Sunday After Trinity – Church Back Home – November 12, 2017

Following a question by Peter as to how often he is supposed to forgive someone, Jesus goes into a parable of a king who called his servants to account. We'd call it an audit today. One servant was found to owe ten thousand talents, a lot of money. But he begged the king for forgiveness, and the king forgave him. The servant, however, then pressed severely for payment from a fellow servant whose debt amounted to pocket change. What the king does as a result shows that we will receive the same measure of mercy that we give to others.
global_01_local_1_shard_00001926_processed.jsonl/61498
English to Urdu Meaning :: spite

Spite : باوجود

Noun
(1) feeling a need to see others suffer
(2) malevolence by virtue of being malicious or spiteful or nasty

Verb
(1) hurt the feelings of

Examples
(1) Cheating, boasting, malice and spite - my sons are blessedly free from all of these.
(2) She couldn't care less for Charles Hamilton and did it only to spite Ashley.
(3) Smoking is much more dangerous than eating genetically modified organisms, therefore they must just be doing it to spite the Americans.
(4) Luke has never done anything to hurt me or spite me, to anger me or make me regret myself.
(5) Yeah, because Henry wonders whether people would like his dad to spite him.
(6) I imagine Andrew Sullivan's mailbox is full of just such spite as much for his Catholicism and for being gay.
(7) I am going to be incredibly self-indulgent that day and light one hundred candles just to spite you.
(8) It would have been easier if she left him with harsh words and eyes full of spite and loathing.
(9) Clara said the last word with as much spite and disgust as she could conjure.
(10) The answer appears to be that he hates Frank Lautenburg so much that he will cost his party the election to spite him.
(11) Keating deserves every bit of spite and venom directed his way.
(12) he'd think I was saying it out of spite
(13) he put the house up for sale to spite his family
(14) It encourages spite and malice, and suggests that the Church of England has sex on the brain.
(15) The banality of grey, prison like walls high-rising above their heads was a spite to their very faces.
(16) Unless people are petty enough to not vote for Shayne to spite Louis, he'll be safe.

Related Words
(1) in spite of :: کے باوجود

Synonyms
malice, upset, wound

Different Forms
spite, spited, spiteful, spites, spiting
global_01_local_1_shard_00001926_processed.jsonl/61502
Have Your Project Endorsed!

The Valletta 2018 Foundation is responsible for Valletta's journey towards the title of the European Capital of Culture in Malta in 2018. The Foundation was responsible for the bidding process, starting with the pre-selection phase in January 2012, the final selection in October 2012 and the official declaration by the Council of Ministers in May 2013.

The Valletta 2018 Foundation has strong support from a range of important stakeholders and a sound, transparent governance structure: both aspects are essential to seeing Valletta 2018 become a successful and sustainable reality. In 2014 the Valletta 2018 Foundation continued its preparations for the title of European Capital of Culture 2018 by expanding its cultural programme, strengthening its administrative structures, increasing its efforts in communicating and engaging with the public, and developing its research programme.

Through the years, a number of third party projects have featured in the Cultural Programme alongside other bidbook projects. This document provides guidance about the endorsement process and its terms and conditions.

What is a Valletta 2018 Foundation Endorsement?

Endorsement is a process whereby third party producers/collaborators can gain a mark of quality from the Valletta 2018 Foundation for their artistic/entrepreneurial/cultural projects. Endorsement is for the shared benefit of the Foundation, the service providers and stakeholders, with no financial or legal obligation involved. This is achieved by offering quality assurance of the projects to all involved. In granting endorsement and releasing its "Endorsed by Valletta 2018" logo, the Valletta 2018 Foundation is confirming that the content and format of the endorsed projects are appropriate to the subject matter and intended audience. The Endorsement Programme will be exclusively administered and coordinated by the Valletta 2018 Foundation.

Benefits of Endorsement

In gaining endorsement from the Valletta 2018 Foundation, the service provider will benefit from:
1. Free listing on the Valletta 2018 social media platforms
2. Permission to use the ENDORSED BY Valletta 2018 logo (available in B&W here and in colour here)
3. A letter of endorsement from the Valletta 2018 Foundation
4. A mark of quality assurance that can be evidenced in marketing/advertising campaigns for the duration of the agreed endorsement period
5. Added value for participants in the currency of the experience due to the Valletta 2018 endorsement logo, a recognised mark of quality that carries a European status
6. A historic post-2018 legacy which results from the association with the 'official' Valletta 2018 Cultural Programme

Endorsement Criteria

An endorsement means any communications message (including verbal statements, demonstrations, or depictions of the name, signature, likeness or other identifying personal characteristics of the Valletta 2018 Foundation) which audiences are likely to believe reflects the opinions, beliefs, findings, or experiences of the Foundation, even if the views expressed by that producer are identical to those of the Foundation.

To be considered for endorsement by the Valletta 2018 Foundation, the applicant must demonstrate that the proposed project meets the following required criteria and must include any of the recommended goals or characteristics outlined below. Priority will be given to projects that meet multiple goals.
Endorsed projects will have the ability to generate community awareness and encourage participation in the European Capital of Culture ethos and the Valletta 2018 Capital of Culture celebrations. They need to deliver contemporary content that is in line with the evidence base, best practice and guidelines, and to do so in a clearly non-partisan manner that is inclusive and accessible to the community. Projects must target all or most of the criteria that are fundamental to the Foundation's goals, namely:

1. City and Citizenship
• foster the participation of the citizens living in Malta, and its Euro-Med context, to raise their interest as well as the interest of citizens from all over the globe;

2. European Dimension
• foster cooperation between cultural operators, artists and cities from the involved Member States and other countries in any cultural sector;
• highlight the richness of cultural diversity in Europe;
• bring the common aspects of European cultures to the fore

3. Legacy
• implement a long-term strategy capable of generating sustainable results related to cultural, economic and social aspects;

4. Quality
• a clear and coherent artistic vision and strategy for the cultural programme;
• the involvement of local artists and cultural organisations in the conception and implementation of the cultural programme;
• the overall artistic quality through the diverse disciplines involved

5. Innovation
• the capacity to combine local cultural heritage and traditional art forms with new, innovative and experimental cultural expressions;
• the range and diversity of the activities proposed and their overall artistic innovation.

The Valletta 2018 Foundation requires that consideration of the following elements must be explicitly demonstrated through a portfolio of works.

1. The content of the endorsed project and/or activity must contribute to the development of the audience reached. Thus its artistic, academic and/or cultural rigour must be appropriate for its intended audience, and should be clearly defined.
2. Whilst there is no stipulation regarding the persons involved in the development of course material, the Valletta 2018 Foundation requires evidence that the endorsed content has been produced in collaboration with, or peer reviewed by, an appropriate curator. Such a process MUST be undertaken prior to the submission of the application for the Valletta 2018 Foundation endorsement.
3. The production team behind the endorsed project must be appropriately qualified and/or experienced, as determined by the Valletta 2018 Foundation on receipt of curricula vitae. The Valletta 2018 Foundation will seek assurance that those involved in project delivery have appropriate teaching or training experience and/or qualifications.

Duration of endorsement

1. An endorsement is valid for a period discussed and agreed between the proposing individual or organisation and the Valletta 2018 Foundation. In the case that the project is longer than one year, the endorsement is valid for a period of one year unless otherwise agreed. When the endorsement has expired and if there are no changes to the content or delivery of the project, the endorsement will be confirmed. Otherwise, the Valletta 2018 Foundation will re-examine the changes and act accordingly.
2. If endorsement is not confirmed after that period, all references to Valletta 2018 Foundation endorsement must be immediately removed from promotional or project material.
Use of Endorsement Logo

All printed and internet material relating to an endorsed project must be approved by the Valletta 2018 Foundation and must feature the "Endorsed by Valletta 2018" logo. The "Endorsed by Valletta 2018" logo must be placed at the bottom of the artwork along with the other logos of supporters/sponsors of the event, making sure to leave enough clear space as required. The "Endorsed by Valletta 2018" logos are available for download from https://valletta2018.org/media. Any artwork containing the "Endorsed by Valletta 2018" logo must be approved by Valletta 2018 by sending an email to [email protected] prior to publishing.

Promotion of the Valletta 2018 brand

Events related to an endorsed project should feature the Valletta 2018 brand. This can be done by carrying out one or more of the following:
• Displaying one or more pop-up banners provided by the Foundation
• Including a projection of the Valletta 2018 logo at the beginning and end of a presentation
• Delivering Valletta 2018 branded material or merchandising to participants
• Mentioning the Valletta 2018 endorsement during introductory and concluding statements
• Hosting the Valletta 2018 information booth

Terms and Conditions

The Valletta 2018 Foundation is not responsible for the delivery of any part of the endorsed activity. The Valletta 2018 Foundation accepts no responsibility for how the content of the endorsed activity might be interpreted by the individual participants or how the individuals may apply the knowledge/experience gained. Documentation of endorsed projects by the Valletta 2018 Foundation must clearly state endorsement by the use of the endorsement logo. Endorsement applies only to the project and not to any individual who participates in it.

The Valletta 2018 Foundation reserves the right at all times and under any circumstances to refuse or remove endorsement of a project. The Valletta 2018 Foundation reserves the right to withdraw endorsement in response to:
• participant feedback, or professional body or specialist group concerns
• association with, or endorsement by, any other organisation that would somehow hinder the purpose of this agreement
• any significant changes to content;
• any significant changes to the delivery format;
• failure to disclose significant changes to the Valletta 2018 Foundation;
• misuse of the Valletta 2018 Foundation endorsement logo
• concerns raised regarding the programme / evidence base.

A judgement will be taken only after full investigation by the Valletta 2018 Foundation. The Valletta 2018 Foundation will not be required to give a reason for such a withdrawal. Use of the Valletta 2018 logo would be suspended for the duration of the investigation. Endorsement only applies to those projects that are granted such preference by the Valletta 2018 Foundation. No other individual/organisation may use the Valletta 2018 Foundation logo or any other related communication without written permission from the same.

Environmental Policy

The Valletta 2018 Foundation has drawn up an Environmental Policy that will be applied to all of the projects it is creating and managing. For projects endorsed by the Foundation, while there is no contractual obligation to follow such a policy, it is expected that any staff working on the project should carry out their work in an environmentally friendly manner.

Deadline for Applications

To ensure a smooth process, applications must be submitted at least two months before the starting date of the project to [email protected].
To make the most out of the endorsement, applicants should preferably submit their applications at least one month before the publication of any marketing material. Applications received after these deadlines will be classed as 'late' applications, and may not be processed.

Further Information

If you have any queries regarding the endorsement process or your application, please contact us at:

Valletta 2018 Foundation
The Exchange Buildings
Republic Street
Valletta VLT 1117
Tel: +356 2124 2018
Email: [email protected]

Endorse my Project! Download application form.
global_01_local_1_shard_00001926_processed.jsonl/61546
On Yahoo! Slurp, Googlebot, and msnbot It seems that the sites that I can view the statistics for are getting hit fairly hard by Yahoo!. For The Framing Business, over 54% of bot traffic comes from Yahoo!. Over 24% comes from Google, and almost 6.8% from MSN. However, if you look at where people are coming from, only 4% are coming from Yahoo! search. Over 30% are coming from the various Google search pages, and almost 1.2% are coming from MSN. Obviously, if bandwidth is getting tight, we know who to block. Of course, if bandwidth is available, there's little reason to block these bots, even if they are sucking quite a bit (of bandwidth). For StrivingLife.net, it's very similar to this. However, bot crawling is pretty even (although msnbot is crawling quite a bit more than the other two). However, if I look at where they are coming from, it's most definitely Google (by over 100 times for this blog specifically). If you're a site owner who stumbled upon this post, what kind of traffic are you seeing from the big three bots, and from their search portals? Is crawling versus referring quite different, or is it pretty comparable?
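For anyone curious how you might pull the same crawl-versus-referral comparison out of raw server logs instead of a stats package, here's a rough sketch. Everything in it is an assumption on my part: it expects a combined-format access log, and the user-agent and referrer substrings for the big three are just my guesses, so adjust to taste.

```python
import sys
from collections import Counter

# Assumed user-agent substrings for the three crawlers discussed above.
BOT_TOKENS = {"Googlebot": "googlebot", "Yahoo! Slurp": "slurp", "msnbot": "msnbot"}
# Assumed referrer substrings for visits arriving from each search portal.
SEARCH_TOKENS = {"Google": "google.", "Yahoo!": "search.yahoo.", "MSN": "search.msn."}

def tally(log_path):
    """Count crawler hits (by User-Agent) and search referrals (by Referer)."""
    crawls, referrals = Counter(), Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            # In combined log format the referrer and user agent are the
            # last two quoted fields on each line.
            parts = line.split('"')
            if len(parts) < 7:
                continue
            referrer, user_agent = parts[-4].lower(), parts[-2].lower()
            for name, token in BOT_TOKENS.items():
                if token in user_agent:
                    crawls[name] += 1
            for name, token in SEARCH_TOKENS.items():
                if token in referrer:
                    referrals[name] += 1
    return crawls, referrals

if __name__ == "__main__":
    crawls, referrals = tally(sys.argv[1])
    print("Crawler hits:    ", dict(crawls))
    print("Search referrals:", dict(referrals))
```

Run it against a log file and you get two quick totals to compare, which is all the comparison above really amounts to anyway.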
global_01_local_1_shard_00001926_processed.jsonl/61562
Message Notification to Cell Phones
5/5, 2 votes

Daniel Jaskulski Jr - Apr 19, 2017

It would be great if we could enable message notifications to cell numbers and not just email. You won't believe how many customers still want that feature.
global_01_local_1_shard_00001926_processed.jsonl/61577
ASI to Offer Executive Books, Data Security Whitepaper & Pop-Out Phone Grips at ASAE Tech Conference December 12-13

Alexandria, VA (December 7, 2017) — Advanced Solutions International (ASI), a leading global provider of software and services for associations and not-for-profits, today announced that it will demonstrate the newest iMIS 20 Engagement Management System (EMS)™ features in booth #513 at the ASAE Technology Conference & Expo on December 12-13 in National Harbor, Maryland.

See ASI's full schedule of events at ASAE Technology Conference & Expo:
December 12-13, 2017
Gaylord National Resort and Convention Center
National Harbor, MD
ASI Booth: #513

Visitors to the ASI booth will see the latest iMIS 20 features that can manage member data, events, product sales, certification programs, email marketing, online payments, websites and more in one cloud-based, mobile ready and PCI-validated system. They'll also receive a complimentary copy of the third edition of ASI's book, The Association Exec's Guide to Improving Organizational Performance, that's just been updated with new data and hands-on tools to help associations meet technological, economic and operational challenges. Copies of ASI's Data Security Best Practices and Strategies whitepaper and fun new pop-out cell phone grips will also be offered in the booth.

About ASI
global_01_local_1_shard_00001926_processed.jsonl/61587
Greek Summer Frequently Asked Questions

Q. What are the exact dates of Greek Summer?
A. The exact dates of Greek Summer vary from year to year. The trip runs from the end of June through the end of July or early August. The dates for Greek Summer 2019 are June 28th to July 28th. Dates and deadlines are subject to change. There is a rolling admissions process so applying early is recommended. Students ages 15-17 are eligible.

Q. What is the cost for the program?
A. The cost of the program is $5,850. Airfare is additional. Applicants are required to submit a $1,600 deposit within seven business days of acceptance into the program.

Q. What are the age requirements for Greek Summer?
A. Greek Summer is limited to high school students who have completed grades 9, 10, 11 or 12 and are between the ages of 15 and 17. Students who have recently completed grade 9 must have had their 15th birthday by June 20th.

Q. Can people from outside the United States attend the program?
A. Yes.

Q. Do I need to speak Greek to participate in the program?
A. No, you do not need to speak Greek. Most of the students on the program have little or no experience with the Greek language.

Q. Is there anybody in my area with whom I could speak about the program?
A. Greek Summer has nearly 2,000 alumni across the United States, and many are eager to speak about their experiences. Contact us at to find Greek Summer alumni in your area.
global_01_local_1_shard_00001926_processed.jsonl/61596
What is Boat Insurance?
Updated: April 2018

Boat insurance may help cover a motorboat, sailboat or personal watercraft if it's stolen or damaged. It may also help protect you if you accidentally injure someone or damage their property with your boat.

Here's a look at the types of coverage often included in a typical boat insurance policy.

Property coverage
Boat insurance helps protect a boat (and other watercraft) from certain risks. For instance, property coverage may help pay to repair your boat after it's damaged in an accident, or help pay to replace it in the event that it's stolen, the Insurance Information Institute (III) explains. This coverage usually helps protect a boat whether it's on water or land.

Liability coverage
If you cause an accident that damages another person's boat or someone else's property or injures someone who is not on your boat, liability coverage may help pay for expenses you incur as a result.

Medical payments coverage
Suppose you or your passengers are injured after an accident on your boat. Medical payments coverage may help pay for resulting expenses such as hospital bills, medications or X-rays.

Uninsured watercraft coverage
What would happen if an uninsured boater collides with your watercraft and you or one of your passengers are injured? The uninsured watercraft coverage on your boat insurance policy may help pay for the resulting expenses.

It's a good idea to consider whether your boat insurance provides the protection that best fits your needs. The III suggests asking your agent about coverage for things like boat trailers, accessories, special equipment (fishing gear, for instance) and towing coverage. You may also want to look into whether your policy includes coverage to help pay for the cost of fuel spills or wreckage removal in the event of an accident on the water.

As with any insurance, coverage limits will apply. Limits are the maximum amount your policy will pay out after a covered loss. You'll typically find that each type of coverage within a policy has its own limit. Read your policy to learn what your coverage limits are and talk with your local agent to learn whether you can adjust those limits based on your needs. If you do need to file a claim, you'll likely need to pay your deductible before your insurance kicks in to help pay for the loss.

With a basic knowledge of how boat insurance may help protect you, your boat and others, you can set sail with the peace of mind that comes with knowing you have certain safeguards in place, just in case.
global_01_local_1_shard_00001926_processed.jsonl/61604
Facebook buying WhatsApp for $16 billion

Facebook has today announced that it will be buying the popular cross-platform social messaging platform WhatsApp for around $16 billion. The deal was announced today as both parties reached an agreement: WhatsApp receives $4 billion in cash and $12 billion in Facebook stock. If, for whatever reason, the deal falls through, Facebook has agreed to pay a fee of $2 billion in a mixture of cash and stock to WhatsApp.

Facebook was interested in the social messaging platform due to the incredible amount of growth the service has seen, with figures touting that WhatsApp was adding around a million users per month to its platform. WhatsApp will continue to operate as an independent company and keep its own branding, but will have Facebook members on its board. With Facebook Messenger already a big part of Facebook's own service, it makes sense that WhatsApp will eventually be absorbed into that platform.
global_01_local_1_shard_00001926_processed.jsonl/61697
The cows in Stella Gibbons's immortal 'Cold Comfort Farm' are named Graceless, Aimless, Feckless and Pointless, and that more or less is the verdict on 'Ocean's Kingdom,' the wildly hyped and wildly uninteresting collaboration between Peter Martins and Paul McCartney.

Robert Gottlieb
Profession: Writer
Nationality: American
Born: April 29, 1931
global_01_local_1_shard_00001926_processed.jsonl/61710
Vermont Democrats made history Tuesday night by being the first major party to nominate a transgender individual to represent them in the race for governor. Christine Hallquist of Hyde Park will challenge incumbent Gov. Phil Scott in the 2018 election. But this is not the first time that Vermonters have been national frontrunners in challenging social norms.

"Vermont has a history of being first and being on the right side of history," said Alex MacLean, the campaign manager for former Democratic Gov. Peter Shumlin in 2010 and 2012, on Tuesday evening as votes were being counted. "And I think we've done it again."

On the other hand: The not so progressive history of Vermont politics

Here's a list of other firsts:

1777: Vermont leads colonies in abolishing adult slavery
Before it was a state in the union, Vermont was the first of the colonial territories to abolish adult slavery. While this didn't always translate into actual practice, it was written into the state's constitution adopted the same year. Some historians have said that the slavery section of Vermont's constitution had loopholes since it didn't apply to children. Vermont officially became a state in 1791.

1823: Middlebury College's Twilight becomes first African American to graduate from U.S. university
Alexander Twilight of Corinth became the first African American to graduate from an American university — in this case, Middlebury College. The Barre-Montpelier Times Argus has reported that Twilight's admission to the college appeared to be a mistake, since the college didn't know he was of mixed race when he applied. Twilight also later became the nation's first African American state legislator.

1940: Vermonter receives Social Security check No. 00-000-001
Vermonter Ida May Fuller received the very first Social Security check, numbered 00-000-001. Fuller was born near Ludlow and attended school with future President Calvin Coolidge. She worked as a legal secretary and schoolteacher. She filed her historic claim while running errands in Rutland shortly after retiring at age 65. Her first check was for $22.54. She lived to be 100, dying in Brattleboro in 1975, and collected $22,888.92 in benefits in her lifetime, according to the Social Security Administration Historian's Office. She had worked for three years while Social Security was in effect, and paid $24.75 into the program.

2000: Vermont becomes first to legalize civil unions for same-sex couples
Vermont was the first state to legalize civil unions for same-sex couples. In 2009, it was the 5th state to legalize same-sex marriage, but was the first to do so through legislation rather than a court ruling, according to the Vermont Secretary of State's office.

2017: Democrat Faisal Gill believed to be first Muslim in U.S. to lead state political party
Faisal Gill, the now-former leader of the state's Democratic party, was believed to be the first Muslim in the U.S. to lead a state political party. Nadeem Mazen, the founder of a Massachusetts-based organization working to get Muslims and minorities involved in community organizing and civic life, was unable to confirm that Gill is the first state party leader but said it's "almost certainly true."
2018: Vermont becomes first to legalize recreational marijuana use through legislature
The Green Mountain State became the first in the country to legalize recreational use of marijuana through its legislature (as opposed to a referendum, or popular ballot initiative). Vermont's law sprang from a hard-won consensus around possession of small amounts of weed, but legislators failed to enact any regulation of sales. Just over a century earlier, in 1915, Vermont pioneered the demonization of cannabis when it joined a handful of states to outlaw the plant.

Contributing: Will DiGravio, Free Press Staff Writer.
global_01_local_1_shard_00001926_processed.jsonl/61742
Winners announced for 2009 Friends of Lulu Awards

The winners were announced this morning for the 2009 Friends of Lulu Awards, which recognize "the people and projects that helped to open eyes and minds to the amazing comic and cartooning work by and/or about women." The winners are:

Lulu of the Year: Danielle Corsetto for Girls with Slingshots
Woman of Distinction: Joanne Carter Siegel
Female Comic Creator's Hall of Fame: Gail Simone

Brief biographies of each of the winners can be found here.
global_01_local_1_shard_00001926_processed.jsonl/61770
You can now bring your crew to ChefsFeed.  You might notice there's a shiny new button on your app’s homescreen. When you open that sucker up, you'll see a skull with a plus sign in the top left corner—that allows you to send a ChefsFeed homing signal to all your fellow eaters. You’ve got until April 1 to invite as many friends as you can, and here's what you get out of it: Initiate five new people into the ChefsFeed universe, and you’ve got a custom bandana coming your way; sweet talk 25 people onto this party bus and we’ll ship you one of the softest damn hoodies known to mankind (the ones you’ve seen chefs sporting all over Internet Kingdom); the one crazy-eyed fanatic among you who refers the highest number of new users will get all that AND we shall bequeath you 100 bucks. Put it towards an extravagant blow-out at a restaurant you’ve had your eye on, OR spend it on a mountain of wings. No judgment.   Ghost hoodies! Now for some rules and stuff: new people have to be, you guessed it, new. Once you send them an invite, they must officially create and log into their account in the app for it to help you on your path to glory. For a complete list of rules, see: Referral Program Terms and Conditions. Fine print stuff: we can only ship to US, UK and Canada. Womp.  Click here to get that swag rolling!
global_01_local_1_shard_00001926_processed.jsonl/61772
Online Dermatology

Solar radiation reaching the Earth's surface mainly comprises visible, infrared and ultraviolet light. Often, I would suggest glycolic acid peels; however, you mentioned that you are already on retinol and your skin is sensitive, so adding anything else at this time might do more harm than good. If your skin is too sensitive for these creams, hyaluronic acid paired with the drug diclofenac (Solaraze) can treat AK. Then, seemingly overnight, the lack of care starts to show up in the form of uneven pigmentation, sunspots, and wrinkles. Apply from mid-shaft to ends before blow-drying, or massage a few drops into dry hair before shampooing. They can also break down your skin's elastin and collagen, leading to sagging skin and uneven skin tone. We think it's worth checking out this Booster Restore, an oil that can be added to your favourite moisturizer, primer, mask, or foundation for an added burst of rejuvenation. I've tried Future Derm's retinol product and I loved it. I very slowly increased the application frequency and only used a tiny bit every time, and did not experience any irritation; plus, the product sank in almost instantly.

Exposure to UVA rays can increase the appearance of skin discolorations and dark spots. In the last 10 years, skin-care breakthroughs have made it possible to pull that off, repairing skin's most vulnerable areas with targeted products and treatments. If you've noticed that the sun has wreaked havoc on your face, slather this dermatologist-approved cream on before bed. Formulated with everything from free-radical-fighting antioxidants to collagen peptides, DNA-repairing enzymes, and potent repair boosters, the best breakthroughs worth investing in are rounded up here.
global_01_local_1_shard_00001926_processed.jsonl/61805
Michael Thomas Michael Lane Thomas, also known as the .NET Cowboy for his sometimes untamed, Wild West-style passion for .NET, has been a fixture in the development community for many years. A speaker at professional, academic, and Microsoft-internal technical conferences, Michael has been a primary contributor to 26 books, including a multi-year stint as .NET Series Editor for Wiley's/Hungry Minds, going back to the Beta days of .NET. Michael has spent time as industry analyst, commentator, and co-host of a weekly radio talk show. As an exam-junkie, Michael is currently the eighth most certified MCP in the world, passing a total of 62 exams to date. Michael is currently a Developer Evangelist for Microsoft and greatly enjoys exploring the Alpha bits for Whidbey as a Microsoft VS.NET Insider, and his role of daddy to his 1-year-old son Noah, also known as "Mr. Pinchy Cheeks." Contact Information:
global_01_local_1_shard_00001926_processed.jsonl/61814
pdf creation - Visual Studio 4/5/6 / Planatech Solutions
global_01_local_1_shard_00001926_processed.jsonl/61824
Unit Management

The section "Unit Management" in the Global category of the Configuration tab enables you to set up which units the Pandoras Box 3D space is based on. Since version 6, the pixel-oriented workflow is the new default and supersedes working with generic units. Of course, you can still choose between both workflows. When you create a new project, the Startup Dialog offers the check box "Pixel-oriented workflow", which does the same as using the Configuration settings described below.

Pandoras Box allows you to work in a 3D space based on so-called 3D units or generic units (GU). The result you see on your display is a rendering of the 3D space done in two steps, so-called render passes. First a 3D compositing and then a 3D output space is created. The chapter "Video Processing Pipeline" describes the Composition and Output Pass in more detail. The final render output is based on pixel units. Depending on the resolution set up in the Display driver, the pixel width and height change. There are different ways to translate a 3D unit to a pixel, and Pandoras Box offers two options in the drop-down list "Unit Translation Mode".

Unit Translation Mode

When positioning and sizing elements based on pixel values, it is helpful to ensure that the entire system globally uses only one fixed relationship for translating between 3D units and pixels. This is of interest, for example, when displays with different resolutions form one "pixel space", one large screen, or when you want to work with pixel values in general, e.g. to be able to shift a layer by exactly 512px. This form of unit translation is the default: the drop-down list "Unit Translation Mode" is set to "Use Fixed Relationship". In this mode, a distance in pixels always results in the same distance in 3D units, independent of the render pass resolution. If you need to change the translation mode in an existing project, or the mapping, please follow the next steps.

After selecting the fixed relationship, enter how many 3D units should be mapped to 1000 pixels. By default the Output Pass uses the same translation factor, which should be changed only if needed. If your Clients are already connected to the Master system, you may click the "Init with Resolution" button. This opens a dialog that lists all output resolutions from the Clients. Choose one resolution and Pandoras Box calculates the translation factor automatically. As a result, the cameras of each site will adopt new default values. Whilst the Z-position stays at -25 units, the FOV (field of view) changes to a value according to your translation factor. To ensure consistent translation of units, leaving the cameras in the default state is strongly recommended!

When choosing the option "Map 16 3D Units to each Resolution", Pandoras Box sizes the 3D spaces so that 16 3D units exactly match the pixel width of each render pass. This is achieved by applying specific parameters to the camera; the XYZ-position equals 0,0,-25 and the opening angle, the field of view, is 35.489°. To give you an example: no matter whether your display / render pass resolution is 1024px or 1920px, moving a layer by 8 units has the same result in both displays.
If the layer's center was in the middle of the screen, it will now be at the monitor's edge. In other words, different render passes may map the same 3D unit distances to different pixel distances. In the example, 16 units were mapped to the 1024px as well as the 1920px output, hence 8 units is half the width of both monitors.

16 units / 1024 px = 0.015625 units/px = 15.625 units / 1000 px
16 units / 1920 px = 0.008333 units/px = 8.333 units / 1000 px

Origin Settings

"Origin Settings" are further options that become available when working with a fixed relationship. Layers and other devices (Camera, Output) can be positioned differently in regard to the origin of the coordinate system.

In the left image you see that the layer's and the camera's center match the origin of the coordinate system XYZ=0,0,0. This is the default in both Unit Translation Modes.

In this example the "Compositing Space" is left in the center (check box not ticked) but the "Default Layer Mesh" is activated, so that the 0,0,0 origin is in the layer's upper left corner.

Lastly, in this example the upper left corner of the "Compositing Space" and the "Default Layer Mesh" match the 0,0,0 origin. Note that the default values for the camera's XY-position have changed.

Parameter Value Readout

Pixel (where applicable)
This option is available if the Unit Translation Mode is set to "Use Fixed Relationship", and it is activated by default. As some parameters (like Position and most Camera parameters) are then based on pixel values, you can also choose to see and enter exact pixel numbers, e.g. move right by 50 pixels. With a disabled check box, those parameters are based on generic units.

In Pandoras Box, the positive X-axis points to the right and the Y-axis to the top. If you would like the Y-axis to point down, make sure "Invert Y-Axis" is enabled. Both options can also be found in the section "Devices / Parameters", where you can also choose to see percentage values.

The last option, "Interpret Automation Param Input as Pixel Values", is of interest when you remote control Pandoras Box via the offered SDK or via Widget Designer. When you send values of type Double to "pixel" parameters in PB, they can be interpreted directly as pixel values when the check box is activated, meaning that they are not modified (again). When the option is deactivated, PB assumes the input values are generic units and translates them to pixels according to the translation factor. For example, sending "4.5" results in 540 px with the default translation factor of 8.333 GU per 1000 px.

Reset to defaults

Finally, the buttons under "Reset to defaults" ensure that your workflow is based on correct settings. When changing from one translation mode to another, a pop-up dialog informs you that some settings should be adjusted and offers a button to do this automatically. However, you can still apply the recommended setup manually using these buttons.

"Set All Layer Sizing Modes To 'Media Pixel Size'" changes the Sizing Mode of a Layer. Please see further information in the Layer Inspector.

"Reset All Cameras" resets all (active) parameters of the Camera Layer. As explained above, the translation modes set different default parameters for the cameras. In case you have already changed some parameters, please click this button (and adapt your programming).
"Reset All Layer Z-positions" resets all (active) Z-position parameters from all Video and Graphic Layers. If you work with "Fixed Relationship" and  the pixel-oriented workflow the pixel accuracy is only achieved when leaving the layers on Z-position 0. All these options are already prepared for you in case you open a new project and click the check-box "Pixel-Oriented Workflow" in the Startup Dialog. If you change to a generic unit-based workflow, there is only one button changing the Sizing Mode from a Layer (explained here: Layer Inspector): "Set All Layer Sizing Modes To 'Fit Larger Side' ".
global_01_local_1_shard_00001926_processed.jsonl/61844
What is Tether?

Tether is a cryptocurrency, based on blockchain technology, which was created to keep a virtual equivalence with the U.S. dollar, so its price should always stay the same; this is why many call it a "crypto-dollar". Tether is recommended for those who do not want to speculate with cryptocurrencies and want to use cryptocurrencies for business without going through the traditional banking system.

Tether has many advantages. For example, it allows the digital currency to be converted into cash; it uses blockchain technology, which guarantees security through the Omni protocol, an open-source software layer that interacts with blockchains to allow the issuance and exchange of Tether tokens; and it can be bought and sold on different exchange sites.

Tether's advantages

Another interesting advantage of Tether is that it is always backed 1-to-1 by the equivalent amount of traditional currency held in the company's reserves; therefore, if there are 500 USDT in circulation, the company holds 500 USD. This link to Tether's reserves means that users can treat it like fiat currency in the crypto world.

For traders this cryptocurrency is very useful, because it allows them to do business without worrying about volatility. In addition, Tether stands out for its transparency, as the company's reserves are made public every day, thus ensuring that all tethers in circulation match the existing reserves.

These are not the only incentives: Tether also offers individual users and traders the ability to move money between exchanges and wallets without any need to convert it into Bitcoin first. In addition, these users hold their own funds and do not have to go through exchange houses.

The company that issues the tokens, Tether Limited, has offices in Hong Kong, Santa Monica (California) and on the east coast of the United States. This company is also responsible for maintaining the website that stores and transfers the Tether tokens, as well as for the integration of Tether with third parties.

To date, Tether is among the top 20 cryptocurrencies by market capitalization. It has a daily trading volume of 2.4 billion dollars and a global market cap of 2.2 billion dollars.

It is becoming increasingly common for online stores and the world's largest companies to accept Bitcoin and other cryptocurrencies as a method of payment.
global_01_local_1_shard_00001926_processed.jsonl/61892
XML Sitemap

URL | Priority | Change frequency | Last modified (GMT)
https://www.dg-index.jp/archives/562 | 60% | Monthly | 2017-01-05 09:54
global_01_local_1_shard_00001926_processed.jsonl/61896
verb (used with object), trans·lat·ed, trans·lat·ing.
1. to turn from one language into another or from a foreign language into one's own: to translate Spanish.
2. to change the form, condition, nature, etc., of; transform; convert: to translate wishes into deeds.
3. to explain in terms that can be more easily understood; interpret.
5. Mechanics. to cause (a body) to move without rotation or angular displacement; subject to translation.
6. Computers. to convert (a program, data, code, etc.) from one form to another: to translate a FORTRAN program into assembly language.
7. Telegraphy. to retransmit or forward (a message), as by a relay.
8. Ecclesiastical.
9. to convey or remove to heaven without natural death.
10. Mathematics. to perform a translation on (a set, function, etc.).
11. to express the value of (a currency) in a foreign currency by applying the exchange rate.
12. to exalt in spiritual or emotional ecstasy; enrapture.

verb (used without object), trans·lat·ed, trans·lat·ing.
1. to provide or make a translation; act as translator.
2. to admit of translation: The Greek expression does not translate easily into English.

Related forms: trans·lat·a·ble, adjective; trans·lat·a·bil·i·ty, trans·lat·a·ble·ness, noun; half-trans·lat·ed, adjective; in·ter·trans·lat·a·ble, adjective; pre·trans·late, verb (used with object), pre·trans·lat·ed, pre·trans·lat·ing; re·trans·late, verb (used with object), re·trans·lat·ed, re·trans·lat·ing; un·trans·lat·a·bil·i·ty, noun; un·trans·lat·a·ble, adjective; un·trans·lat·ed, adjective; well-trans·lat·ed, adjective
Can be confused: translate, transliterate

British Dictionary definitions for untranslated
1. not having been expressed or written down in another language or dialect

1. to express or be capable of being expressed in another language or dialect: he translated Shakespeare into Afrikaans; his books translate well
2. (intr) to act as translator
3. (tr) to express or explain in simple or less technical language
4. (tr) to interpret or infer the significance of (gestures, symbols, etc)
5. (tr) to transform or convert: to translate hope into reality
6. (tr; usually passive) biochem to transform the molecular structure of (messenger RNA) into a polypeptide chain by means of the information stored in the genetic code. See also transcribe (def. 7)
7. to move or carry from one place or position to another
8. (tr)
9. (tr) RC Church to transfer (the body or the relics of a saint) from one resting place to another
10. (tr) theol to transfer (a person) from one place or plane of existence to another, as from earth to heaven
11. maths physics to move (a figure or body) laterally, without rotation, dilation, or angular displacement
12. (intr) (of an aircraft, missile, etc) to fly or move from one position to another
13. (tr) archaic to bring to a state of spiritual or emotional ecstasy

Derived forms: translatable, adjective; translatability, noun

untranslated in Medicine
[trăns-lāt, trănz-, trănslāt′, trănz-]
1. To render in another language.
2. To put into simpler terms; explain or interpret.
3. To subject mRNA to translation.
Related forms: trans•lat′a•bil•i•ty n.; trans•lat•a•ble adj.
global_01_local_1_shard_00001926_processed.jsonl/61897
XML Sitemap Index

URL of sub-sitemap | Last modified (GMT)
https://www.diehaarstation.de/sitemap-misc.html | 2018-05-30 08:44
https://www.diehaarstation.de/sitemap-pt-page-2014-11.html | 2018-05-30 08:44
global_01_local_1_shard_00001926_processed.jsonl/61922
German artists reveal truth behind neo-Nazi doxing campaign | News | DW | 06.12.2018

A group of artists said it created a website for people to denounce their neo-Nazi acquaintances. But the doxing website was actually a so-called honeypot engineered to get members of the far-right to reveal themselves.

German activists took down a controversial website on Wednesday after revealing that it was a honeypot trap to encourage neo-Nazis to reveal themselves. A group of artists known as the Center for Political Beauty (ZPS) had previously said that their "Special Commission Chemnitz" campaign was an effort to get people to dox friends, neighbors or colleagues they saw in pictures from far-right demonstrations that took place in the city of Chemnitz earlier this year. The ZPS also promised monetary rewards for the denunciations. But now the group has revealed that the true nature of the website was to get far-right supporters to supply their own names, which they did by using the site's search function.

'Thank you, dear Nazis'

"Thank you, dear Nazis," ZPS wrote on the site, explaining in detail how they used the searches to discover the identities of at least 1,500 participants in the violent anti-immigrant protests in Chemnitz. "This is the most relevant set of data on right-wing extremism that currently exists in Germany," ZPS founder Philipp Ruch told German news agency EPD. ZPS said it recorded search terms users entered, usually a person's first and last name, to create a list of potential far-right sympathizers. The average visitor searched for the names of 6.7 friends, according to ZPS. By comparing the names a user searched for with a list of names of people thought to sympathize with the far-right, ZPS expanded the network of potential far-right supporters. "We built a website with one single goal: To get you to deliver your entire network yourselves without even noticing it," ZPS wrote on its website. "Using the search function you shared more with us than publicly available sources ever could have revealed."

ZPS' initial announcement of the website prompted a flurry of controversy about privacy rights online, and Germany's data protection commissioner's office said it was looking into whether the site was acting within legal limits. The German Arts Council also called the site "problematic." Shortly after the launch of the site, the ZPS said that an angry mob stormed their office in Chemnitz, and when the police came, they confiscated the group's posters of far-right demonstrators.
global_01_local_1_shard_00001926_processed.jsonl/61933
One Direction And Justin Bieber Singing 'Joy To The World' Is Everything (Video) Joy to the world, James Corden's SUV full of celebrities has come. If you aren't in the holiday spirit yet, this mashup of celebrities karaoke-ing to “Joy to the World” in James Corden's car will get you there faster than you can say, “How is Jimmy Fallon still on air?” Iggy Azalea raps, Stevie Wonder belts and Jason Derulo does one of those harmony things at the end where you move your hand up and down like a sound monitor while you make an “OOOOOoooooOOOOOOooooOOOOoooooo” tone that sounds like audio cashmere. Justin Bieber, One Direction and Reggie Watts round out this holly jolly, must-see, soon-to-be viral video. Honestly, what else could you ask Santa for this year than that? Happy Web-friendly, shareable content holiday, everybody!
global_01_local_1_shard_00001926_processed.jsonl/61963
5 Keyboard Shortcuts for Rows and Columns in Excel Bottom line: Learn some of my favorite keyboard shortcuts when working with rows and columns in Excel. Skill level: Easy Keyboard Shortcuts for Rows and Columns in Excel Whether you are creating a simple list of names or building a complex financial model, you probably make a lot of changes to the rows and columns in the spreadsheet.  Tasks like adding/deleting rows, adjusting column widths, and creating outline groups are very common when working with the grid. This post contains some of my favorite shortcuts that will save you time every day. #1 –  Select Entire Row or Column Shift+Space is the keyboard shortcut to select an entire row. Ctrl+Space is the keyboard shortcut to select an entire column. Select Entire Row or Column in Excel Keyboard Shortcuts The keyboard shortcuts by themselves don't do much.  However, they are the starting point for performing a lot of other actions where you first need to select the entire row or column.  This includes tasks like deleting rows, grouping columns, etc. These shortcuts also work for selecting the entire row or column inside an Excel Table. Keyboard Shortcut to Select Rows or Columns in Excel Table When you press the Shift+Space shortcut the first time it will select the entire row within the Table.  Press Shift+Space a second time and it will select the entire row in the worksheet. The same works for columns.  Ctrl+Space will select the column of data in the Table.  Pressing the keyboard shortcut a second time will include the column header of the Table in the selection.  Pressing Ctrl+Space a third time will select the entire column in the worksheet. You can select multiple rows or columns by holding Shift and pressing the Arrow Keys multiple times. Select Multiple Columns with Shift Plus Arrow Keys #2 – Insert or Delete Rows or Columns There are a few ways to quickly delete rows and columns in Excel. If you have the rows or columns selected, then the following keyboard shortcuts will quickly add or delete all selected rows or columns. Ctrl++ (plus character) is the keyboard shortcut to insert rows or columns.  If you are using a laptop keyboard you can press Ctrl+Shift+= (equal sign). Insert Entire Row in Excel Ctrl+- (minus character) is the keyboard shortcut to delete rows or columns. Delete Selected Column in Excel So for the above shortcuts to work you will first need to select the entire row or column, which can be done with the Shift+Space or Ctrl+Space shortcuts explained in #1. If you do not have the entire row or column selected then you will be presented with the Insert or Delete Menus after pressing Ctrl++ or Ctrl+-. Insert or Delete Menu Appears When Entire Row or Column is Not Selected You can then press the up or down arrow keys to make your selection from the menu and hit Enter.  For me it is easier to first select the entire row or column, then press Ctrl++ or Ctrl+-. So, the entire keyboard shortcut to delete a column would be Ctrl+Space, Ctrl+-.  You could also use the keyboard shortcut Alt+H+D+C to delete columns and Alt+H+D+R to delete rows.  There are lots of ways to do a simple task… 🙂 #3 – AutoFit Column Width There are also a lot of different ways to AutoFit column widths.  AutoFit means that the width of the column will be adjusted to fit the contents of the cell. You can use the mouse and double-click when you hover the cursor between columns when you see the resize column cursor. 
Double-Click Resize Column Cursor to Autofit Column The problem with this is that you might just want to resize the column for the date in cell A4, instead of the big long title in cell A1.  To accomplish this you can use the AutoFit Column Width button.  It is located on the Home tab of the Ribbon in the Format menu. AutoFit Column Width Button in Excel The AutoFit Column Width button bases the width of the column on the cells you have selected.  In the image above I have cell A4 selected.  So the column width will be adjusted to fit the contents of A4, as shown in the results below. AutoFit Column Width Button Resizes Column Based on Selected Cell Contents Alt,H,O,I is the keyboard shortcut for the AutoFit Column Width button.  This is one I use a lot to get my reports looking shiny. 🙂 Alt,H,O,A is the keyboard shortcut to AutoFit Row Height.  It doesn't work exactly the same as column width, and will only adjust the row height to the tallest cell in the entire row. #3.5 – Manually Adjust Row or Column Width The column width or row height windows can be opened with keyboard shortcuts as well. Alt,O,R,E is the keyboard shortcut to open the Row Height window. Alt,O,C,W is the keyboard shortcut to open the Column Width window. Excel Keyboard Shortcuts for Row Height and Column Width The row height or column width will be applied to the rows or columns of all the cells that are currently selected. These are old shortcuts from Excel 2003, but they still work in the modern versions of Excel. #4 – Hide or Unhide Rows or Columns There are several dedicated keyboard shortcuts to hide and unhide rows and columns. • Ctrl+9 to Hide Rows • Ctrl+0 (zero) to Hide Columns • Ctrl+Shift+( to Unhide Rows • Ctrl+Shift+) to Unhide Columns – If this doesn't work for you try Alt,O,C,U (old Excel 2003 shortcut that still works).  You can also modify a Windows setting to prevent the conflict with this shortcut.  See the comment from Pablo Baez on Oct 5, 2015 below for further instructions.  Thanks Pablo! 🙂 The buttons are also located on the Format menu on the Home tab of the Ribbon.  You can hover over any of the items in the menu and the keyboard shortcut will display in the screentip (see screenshot below). Keyboard Shortcuts to Hide and Unhide Rows and Columns in Excel The trick with getting these shortcuts to work is to have the proper cells selected first. To hide rows or columns you just need to select cells in the rows or columns you want to hide, then press the Ctrl+9 or Ctrl+Shift+( shortcut. To unhide rows or columns you first need to select the cells that surround the rows or columns you want to unhide.  In the screenshot below I want to unhide rows 3 & 4.  I first select cell B2:B5, cells that surround or cover the hidden rows, then press Ctrl+Shift+( to unhide the rows. Select Cells That Surround Hidden Rows or Columns Before Unhiding The same technique works to unhide columns. #5 – Group or Ungroup Rows or Columns Row and Column groupings are a great way to quickly hide and unhide columns and rows. Shift+Alt+Right Arrow is the shortcut to group rows or columns. Shift+Alt+Left Arrow is the shortcut to ungroup. Again, the trick here is to select the entire rows or columns you want to group/ungroup first.  Otherwise you will be presented with the Group or Ungroup menu. Keyboard Shortcut to Ungroup Rows or Columns in Excel Alt,A,U,C is the keyboard shortcut to remove all the row and columns groups on the sheet.  
This is the same as pressing the Clear Outline button on the Ungroup menu of the Data tab on the Ribbon. *Bonus funny: At some point when using the group/ungroup shortcuts, you will accidentally press Ctrl+Alt+Right Arrow.  This is a Windows shortcut that orientates the entire screen to the right.  I call it “neck ache view”.  To get it back to normal press Ctrl+Alt+Up Arrow. If your co-worker or boss accidentally leaves their computer unlocked and you want to play a joke on them, press Ctrl+Alt+Down Arrow.  This will turn their screen upside down.  Don't forget to record a video of their WTF reaction… 🙂 What Are Your Favorites? There are a ton of keyboard shortcuts for working with rows and columns.  The above are some of my favorites that I use everyday.  What are some of your favorites?  Please leave a comment below.  Thanks! 🙂 Please share Jon Acampora Click Here to Leave a Comment Below 123 comments Madey - December 5, 2018 What is the shortcut for hiding and unhiding the columns and raws in windows 10. thanks in advance. Jeff Carno - November 15, 2018 Is there a shortcut to hide the rows after using Shift-Alt-Right arrow to group them? Kalum - August 29, 2018 Hi Jon, Thanks Dear, I am always using shortcut key when working in the excell work book, so this shortcut key are very useful for me. Zane Mallinson - September 15, 2018 comprar followers - September 30, 2018 comprar reproducciones Mar - October 30, 2018 Tienes cuenta de youtube? lasha - August 21, 2018 Thanks. Really useful. Muhammad Saim Riaz - August 9, 2018 Great….. The shortcuts are worth remembering. Jegan - March 14, 2018 please share vlookup function in my email., i dont know anything else Steve - March 8, 2018 Thanks Jon, most useful. The Ctrl+9 & 0 don’t appear to work in Excel 365, however Ctrl+O, etc works fine. Velks - April 25, 2018 Nice and useful tips John, Thanks…I too love keyboard shortcuts. Hi Steve, Below are the steps for higher versions of Windows or Excel set up issues for the same with little changes mentioned already by Pablo for older versions. 1. Click Start, and then click Control Panel. 2. Double-click Language. 3. Click Advanced Settings 4. Click Change language bar hot keys under Switching input methods 5. Click Advanced Key Settings tab, and select Between input languages. 6. Click Change Key Sequence… 7. For Switch Keyboard Layout, select Not Assigned. 8.Click OK to close each dialog box. I hope this will solve your issue. tasneem - February 23, 2018 how can i switch two raw (places) by using excel short cut with out using cut and paste. Ma Lae - January 28, 2018 Sir Jon Acampora, Thanks for your sharing. amit - April 25, 2018 kindly share the excel short keys in my mail ID Jamil Rehman - January 13, 2018 Thanks Dear for such a informative knowledge. I usually use short keys to insert or delete row and columns. But i found you by searching how to hide rows and columns with short keys. Thanks for your sharing. Peter - November 30, 2017 Hi Jon, Is there an Excel shortcut for the vlookup function? I’ve searched online and cant find anything and then remember you’ve answered a question of mine before, in the above thread. Leave a Reply:
global_01_local_1_shard_00001926_processed.jsonl/61970
import from excel
I want to import some rows (cells) from Excel, but without repeating the same row. I am using this code and it's working very well. How can I check, before the import, whether a record already exists? Here is my code (on click):

Dim oXL As Object
Dim oWs As Object
Dim rs As DAO.Recordset
Dim i As Integer
Dim zeile As Long
Dim Ende As Long

Set oXL = CreateObject("Excel.Application")
Set oWs = oXL.Workbooks.Open("C:\Users\User1\Desktop\booking.xlsx")
Set rs = CurrentDb.OpenRecordset("booking")

zeile = 2   ' first data row in the worksheet (row 1 holds the headers)
Ende = 0

' Count the data rows (stop at the first empty cell in column B)
Do Until oWs.workSheets("booking").Cells(Ende + 2, 2) = ""
    Ende = Ende + 1
Loop

With rs
    For i = 1 To Ende
        .AddNew   ' start a new record for each worksheet row
        .Fields("Date of Tour") = oWs.workSheets("booking").Cells(zeile, 6)
        .Fields("Pickup Time") = oWs.workSheets("booking").Cells(zeile, 7)
        .Fields("name of account") = oWs.workSheets("booking").Cells(zeile, 3)
        .Fields("Tour") = oWs.workSheets("booking").Cells(zeile, 5)
        .Fields("Pax AD") = oWs.workSheets("booking").Cells(zeile, 8)
        .Fields("Pax ch") = oWs.workSheets("booking").Cells(zeile, 9)
        .Fields("cost AD") = oWs.workSheets("booking").Cells(zeile, 10)
        .Fields("cost ch") = oWs.workSheets("booking").Cells(zeile, 11)
        .Fields("sale AD") = oWs.workSheets("booking").Cells(zeile, 12)
        .Fields("sale ch") = oWs.workSheets("booking").Cells(zeile, 13)
        .Fields("PaymentMeth") = oWs.workSheets("booking").Cells(zeile, 15)
        .Fields("Driver") = oWs.workSheets("booking").Cells(zeile, 16)
        .Fields("Guide") = oWs.workSheets("booking").Cells(zeile, 17)
        .Fields("TotalCollect") = oWs.workSheets("booking").Cells(zeile, 18)
        .Fields("bookedby") = oWs.workSheets("booking").Cells(zeile, 20)
        .Fields("mob") = oWs.workSheets("booking").Cells(zeile, 21)
        .Fields("pickupfrom") = oWs.workSheets("booking").Cells(zeile, 22)
        .Fields("Guset Name") = oWs.workSheets("booking").Cells(zeile, 23)
        .Fields("others") = oWs.workSheets("booking").Cells(zeile, 24)
        .Fields("provider") = oWs.workSheets("booking").Cells(zeile, 26)
        .Fields("language") = oWs.workSheets("booking").Cells(zeile, 28)
        .Fields("e mail") = oWs.workSheets("booking").Cells(zeile, 29)
        .Update   ' save the record before moving to the next row
        zeile = zeile + 1
    Next i
End With

Set rs = Nothing
Set oWs = Nothing
Set oXL = Nothing
Note you may also be able to simply Link the Excel workbook instead of Importing it. If that's the case, you just link it and do NOT import it to the staging table. Experts Exchange Solution brought to you by Your issues matter to us. Start your 7-day free trial Adding a unique index to your table will prevent duplicates.  You should trap for the duplicate key error(s) and keep on adding new records. Why not get rid of the duplicate rows before import.  Make a copy of the worksheet.  Use Data, Remove Duplicates.. Martin LissOlder than dirtCommented: Microsoft Excel From novice to tech pro — start learning today.
global_01_local_1_shard_00001926_processed.jsonl/61976
Introduction to ArcGIS Pro Introduction to ArcGIS Pro Course Number: ERT 806.20 ArcGIS Pro is the newest desktop GIS application from Esri and is quickly becoming the primary GIS application. This powerful software allows users to create amazing maps in both 2D and 3D.  You willl receive the "how-to’s" for the most commonly performed tasks, including: • editing • adding layers • adjusting symbology • control labeling  You will learn how to use ArcGIS Pro to answer real-world questions in the workplace through practical project workflows and exercises. Knowledge of ArcGIS, courses or experience. Applicable Certificates No Applicable Certificates close [x] Submit Feedback Add Supporting Files
global_01_local_1_shard_00001926_processed.jsonl/61984
Florida Personal Injury Firm What You Should Know Personal Injury Law: The Mystery of Pain and Suffering Damages, Part 1 As a plaintiff’s personal injury lawyer, part of my job is collecting compensation for the pain and suffering of injured negligence victims. That can be a challenge, because there is no formula or guidebook for placing a value on someone’s pain and suffering. When I select a jury, I always ask how the jurors feel about awarding money for a negligence victim’s pain and suffering. Inevitably, I get some jurors who are dead set against awarding money for pain and suffering. I often wonder how those same jurors would feel if they had to suffer at the hands of another’s negligence. Would they just shrug it off and let the negligent person or their insurance company off without paying a penny? What if the injury was serious and life altering? I have seen many, many occasions where people who were formerly against lawsuits suddenly have a change of heart when they get injured at the hands of another. How do we, as trial attorneys, persuade jurors to award a fair amount for our clients’ pain and suffering? One strategy I suggest is to present the following scenario to the jury: When evaluating a plaintiff’s pain and suffering, it is important to remember that the plaintiff was never given a choice; this injury was thrust upon the plaintiff by the negligence of the defendant. The value of pain and suffering essentially comes down to the value of time. Life all comes down to a series of moments. Some moments are painful, others are joyful. When someone comes along and by their negligence takes away some of those joyful moments and replaces them with painful moments, they have taken away something of value, something the victim will never get back. The victim has lost the time he could have been enjoying, and has endured time suffering he would not have otherwise had to endure. When people negotiate the value of their time, they usually don’t wait until after their time is lost to place a value on it. A football player negotiates millions of dollars a year for his time before he goes out onto a football field and risks life and limb. People negotiate the value of their salaries or hourly wages before they work for even a minute. So, when measuring the true value of a plaintiff’s suffering, it’s important to think about what the plaintiff would have accepted to endure the pain and suffering, if he or she had been given a choice. How much do you believe the plaintiff, if he was a reasonable person, would have taken to endure the pain and suffering he endured if he had been given a choice? This question is very powerful for two reasons. First, without breaking the golden rule (asking the jurors to consider how much they would accept as fair compensation), it gets the jury thinking about how much they would accept. Secondly, it sets the plaintiff’s attorney up for the next step in the process, which is breaking down the pain and suffering to different elements and anchoring an element of the claim to each of the various elements of pain and suffering damages. More on that in my next blog on the Mystery of Pain and Suffering Damages.
global_01_local_1_shard_00001926_processed.jsonl/61993
Skip to main content No matter how fast you run, the past will catch up with you Run for Your Life Els Beerten Noor is eighteen and is running a marathon. Everyone says she is mad, but Noor is stubborn and she keeps on training. Running is her only way to survive. A childhood trauma has turned her into an introverted teen who is always on the run – from herself. As she runs the marathon, the reader slowly finds out, through flashbacks, just how serious this childhood trauma was: her friend Linda died when they were playing a dangerous game. The marathon is the mountain she must climb, the ultimate challenge to help her overcome her grief. A rich, heart-warming and touching story De Leeswelp Within the tight framework of the marathon and its slowly passing miles, Els Beerten sends Noor back and forth through time. The more miles she runs, the deeper she descends into herself and the more space she creates for her past. This switching between past and present operates on many levels, as Noor keeps trying to get closer to people, but then closes herself off. ‘Run for Your Life’ is a layered novel that remains with the reader because of the universal emotions it depicts: overcoming your fear of life, shaking off your childhood, and desperately seeking recognition and friendship. The marathon scenes propel the story towards its conclusion and Beerten admirably succeeds in maintaining the tension De Standaard
global_01_local_1_shard_00001926_processed.jsonl/62000
Here's a pop quiz: Name the industries you avoid investing in, and explain why. For me it's always been airlines. Too much debt, fixed costs too high, and historically airlines have never made much money (with the notable exception of Southwest). In the retail business I have never liked the shoe segment. It's a very fragmented industry with everyone from discounters like Wal-Mart (NYSE:WMT) to department stores like J.C. Penney (NYSE:JCP) and apparel sellers like Kohl's (NYSE:KSS) battling for market share. And there aren't any dominant companies in the shoe business, companies that allow you to identify what best in class looks like, to benchmark everyone else against. But I may be starting to change my tune. DSW, Inc.  (NYSE:DSW) reported year-end results yesterday, and the numbers are enough to make even a die-hard shoe business skeptic like me stand up and take notice. Try these full-year 2006 numbers on for size: total sales grew 11.8% and gross profit increased 16% as gross margin improved by 100 basis points. Selling, general, and administrative expenses were only up 8.2% and improved by 70 basis points, although last year included an unusual charge of 60 basis points. Operating profit climbed 43.6%, and diluted EPS jumped 48%. If those numbers don't impress you, how about this: Inventory per square foot (the often overlooked retail metric) was down 3% year over year. This marks the sixth straight quarter of improved inventory productivity for the company. I'm starting to take a shine to the shoe business, at least a bit. For 2007 the company expects to grow EPS 10% to 14%, add 30 new stores, double capital investments from $42 million to $80 million, and still add to its already-healthy cash balance. I particularly enjoyed the portion of the conference call that discussed improving the interest yield on its excess cash. Not a lot of retailers have a surplus of the green stuff lying around. From a strategic perspective, I'm impressed with DSW's approach to growth. While it operates 228 stand-alone retail stores, it also runs 364 leased shoe departments in other retailers, including all 267 Stein Mart (NASDAQ:SMRT) stores. This approach reduces the capital required to grow, and leverages merchandising and distribution infrastructure. Last week I wrote about a big hop in earnings at Shoe Carnival. Earlier this month, fellow Fool Ryan Fuhrmann covered Payless stomping on analyst expectations. Brown Shoe (NYSE:BWS) has been on a pretty respectable run. Including DSW, all four of these companies have nearly doubled their stock prices the past two years. Does this mean you should trade in your not-yet-discovered 40-bagger for a pair of leopard skin pumps, or even loafers? I'm still not ready to jump into this retail sector with both feet (long-held perceptions are hard to break). But clearly DSW and the rest of the shoe "pack" are getting their act together. Take a walk on the wild side. Check out more shoe mania below: Wal-Mart is a Motley Fool Inside Value recommendation. We'll give you a free 30-day trial of this market-beating newsletter if you simply click here. Fool contributor Timothy M. Otte surveys the retail scene from Dallas, where he's usually spotted about town in a pair of old sandals. He owns stock in Wal-Mart, but none of the other companies mentioned in this article. The Fool has a disclosure policy.
global_01_local_1_shard_00001926_processed.jsonl/62001
There's good news on the horizon for the 80% of workers who expect their 401(k) accounts to serve as their most important source of retirement income. Contribution limits for 401(k) accounts are rising in 2018, so you can sock away more tax-free cash for your retirement.  Whether you're currently maxing out your 401(k) contributions or are looking for ways to save more in the New Year, you can find out everything you need to know right here about the new rules for 401(k) investments in 2018.  401(k) letters next to a piggy bank Image source: Getty Images. What are the maximum 401(k) contribution limits for 2018? Matching contributions from employers, non-elective contributions, and allocations of forfeitures are not counted in the $18,500 or in the $24,500 maximum elective contributions. However, there is a total limit for all contributions from all sources. This limit is also rising $1,000 in 2018 compared with 2017. The new maximum contribution from all sources will be $55,000 for workers under 50, and $61,000 for workers 50 and over.  Contribution Type 2018 Limit 2017 Limit Elective contributions for all eligible workers Catch-up contributions for workers 50 & over No Change Total elective contributions for workers 50 & over Defined contribution maximum from all sources for workers under 50 Defined contribution maximum from all sources for workers 50 & over Data Source: IRS How much additional tax savings will you realize? Investing with pre-tax funds provides substantial savings for taxpayers. The extent of your tax savings depends on your tax bracket. For taxpayers previously maxing out their 401(k) accounts who can now contribute an additional $500 in pre-tax funds, this chart shows the additional tax savings based on individual tax brackets.  Tax Bracket Additional tax savings from increasing 401(k) contributions by $500 Total tax savings from $18,500 in 401(k) contributions Total cost of $18,500 401(k) contribution after tax breaks Calculations by author. You do not need to itemize your deductions to get tax breaks for 401(k) contributions. When you make contributions to your 401(k), your elective contributions are deducted directly from your paycheck. When your employer sends a W-2 at the end of the year declaring your wages, 401(k) contributions deducted from your paycheck will not be included as taxable income.  However, it's important to remember that while you won't be taxed on money invested in your 401(k), you'll be taxed on withdrawals as a senior.  How much is an extra $500 in 401(k) contributions worth? Increasing contributions by $500 may not seem like it will make much of a difference, but compound interest works its magic on the money invested in your 401(k). If you increase your 401(k) contributions by the extra $500 you're allowed to contribute in 2018 and you earn 7% on invested funds, consider how much this $500 could turn into by the time you turn 65, depending upon your age in 2018.  Age in 2018 Value of $500 in 401(k) contributions by age 65 Calculations by author. With the median retirement account balance for households between ages 50 and 55 at just $8,000, individuals who start maxing out their 401(k) early on could make more just from this extra $500 investment made in 2018 than the total savings of a substantial number of pre-retirees. What if you don't hit the 2018 401(k) contribution limits?  
The average American with a 401(k) contributes about 6.2% of their income, according to Vanguard's 2017 How America Saves report, which means you're not alone if you're not contributing the maximum permitted.  But the more you can contribute, the more secure your retirement will be. To painlessly increase contributions, immediately divert any raises toward extra 401(k) contributions so you don't get used to living on the extra income.  Emulating the habits of retirement super savers who contribute at least 90% of the annual 401(k) contribution limit could also help you boost contributions dramatically. These habits include driving an older vehicle, living in a modest home, skipping out on fancy vacations, and putting in extra hours on the job.  If you're not ready to go that far but you're living in one of the 60% of households without a budget, consider creating one so you can find wasted money to redirect to 401(k) contributions.   What if you do hit the 2018 401(k) contribution limits?   If you're already maxing out your 401(k), you can invest in other tax-advantaged retirement accounts, including a traditional or a Roth IRA, which have combined contribution limits of $5,500 for most workers in 2018. There's also an additional $1,000 catch-up contribution for workers 50 and over for IRA accounts, so older employees who max out IRAs could save up to $6,500.  Traditional IRAs allow you to invest with pre-tax funds, while Roth IRAs are investments made with after-tax dollars that allow you to withdraw money tax-free as a senior. There are income limits to take advantage of the tax benefits of both traditional and Roth IRAs. Here they are for 2018. Traditional IRA Roth IRA Tax Filing Status Tax deductions for IRA contributions are reduced if your income exceeds this amount Tax deductions for IRA contributions are no longer available if income exceeds this amount Tax deductions for Roth IRA contributions are reduced if your income is above this amount Tax deductions for Roth IRA contributions are eliminated if your income is above this amount Married filing jointly or qualifying widow or widower if you're covered by a retirement plan at work Married filing jointly if your spouse is covered by an employer plan but you aren't Married filing separately if you lived with your spouse at any time during the year and either spouse is covered by a workplace retirement plan Data source: IRS. Making your 401(k) contributions count Whether you're maxing out a 401(k) or are just starting to step up your contributions, make sure the money you're investing is working for you. This means making sure your portfolio has the right risk allocation, that you aren't paying high fees, and that you aren't making mistakes like borrowing from your 401(k) By investing as much as you can up to the 2018 401(k) contribution limits and making smart investment choices, your 401(k) will hopefully become the primary source of retirement income that you're likely hoping it will be.  The Motley Fool has a disclosure policy.
global_01_local_1_shard_00001926_processed.jsonl/62009
FractionCalculator .pro Convert 0.417 to a fraction Welcome! Here is the answer to the question: Convert 0.417 to a fraction or what is 0.417 as a fraction. Use the decimal to fraction converter/calculator below to write any decimal number as a fraction. Decimal to Fraction Converter Enter a decimal value:  Ex.: 0.625, 0.75, .875, etc. Equivalent fraction: Decimal to fraction Explained: Sample decimal to fraction conversions
global_01_local_1_shard_00001926_processed.jsonl/62029
Jump to content • Advertisement This topic is now archived and is closed to further replies. Problem with overloaded operators... Recommended Posts Hi @all, I guess for most of you my problem isn''t a problem at all... but look... I''ve got a class like this : class StringClass public : StringClass operator+(StringClass *CString); StringClass operator+=(StringClass *CString); //add some stuff here I''d like to have both operators, + and += in my class, because it would make it much easier for me to work with those Strings. Because the += operator is very similar to the + operator, I tried to use the + operator in my += operator''s function instead of copying all the code. StringClass operator+(StringClass *CString) StringClass CNewString; //plz imagine a huge piece of code here return CNewString; StringClass operator+=(StringClass *CString) StringClass CNewString; CNewString = this + CString; //doesn''t work... :( return CNewString; But, somehow it doesn''t work! My compiler tells me that there''s no global binary operator "+" defined... how can I make this one work? Share this post Link to post Share on other sites • Advertisement Important Information Sign me up!
global_01_local_1_shard_00001926_processed.jsonl/62030
BattleFeild Vietnam Clan ::DoC:: 1 reply Please wait... The Internet ends at GF 50 XP 12th January 2005 0 Uploads 154 Posts 0 Threads #1 13 years ago Hello There, This is the DoC Clan "Defiant Objective Clan"! We are a Battlefield Vietnam Clan. We Are Recruting players like you.... I mean you. We don't give a ***** how good you are, as long as you work as a team! This clan is just startin up, we got our brand new server up, and TeamSpeak server up. If You got any questions post them here or send me a E-Mail.... [email][/email] ....!Only 3 people in our clan so far so join. Love kids and mature people. I am the kind of clan leader that wants to have fun! Flubber and Bishop is your leader..... Age to join is 14+ male/Female. Hope to see you in my clan. Teamspeak ip: and password is snip3r Remeber Flubbers E-mail [email=""][/email] :flag: Can't Touch This 50 XP 12th November 2004 0 Uploads 518 Posts 0 Threads #2 13 years ago b4 recruiting for your clan u should first learn to spell Battlefield silly :beer:
global_01_local_1_shard_00001926_processed.jsonl/62049
People Who Did Not Get Rich And/Or Famous Coding Want You To Code If you find this new batch of PSAs hilariously awkward, you're in good company. I literally yelped when I heard Nas say "I just did 15 lines of code." There are more than a dozen new videos from the everyone can code organisation, and they're all just a wonderful gift. There's absolutely no problem with empowering people to learn a new technology. But these clips are presented completely without context and are mostly about 20 or so seconds long. That, and while these people are rich and famous, they are 100 per cent not rich and famous from their ability to write an app or build a website. Here's actress Angela Bassett touting her coding skills. So yes, the enthusiasm is great, although it's a little hard to believe that these busy famous folks are spending a ton of free time type type typing away in the coding cave. At least is expressing the point is that anyone, seriously anyone, even Kelso, can code. [YouTube] Trending Stories Right Now
global_01_local_1_shard_00001926_processed.jsonl/62062
The Butterfly That Returned - by Serene Martin Get it now for 0.99! The Butterfly That Returned is a collection of 33 heartfelt messages that celebrates that beaded crystal of inspiration that followed you everywhere you went since you were born. Sometimes, it stayed in the background while you busily went about your day. Sometimes, it came out from the shadows and burned so brightly that your heart ached with joy. Very often, it was quickly forgotten. Today, it returns to you and reminds you it is yours to keep. The messages in this book are an imprint of a voice that doesn't use words. A voice we all have. Written for all who are searching. Share this:
global_01_local_1_shard_00001926_processed.jsonl/62089
The Archaeology of Southeastern Balochistan The Archaeology of Southeastern Balochistan An exciting look at the western side of the ancient Indus civilization, where new cultures and surprises await that indicate how well developed surrounding cultures to the major Indus areas were. Baluchistan is Pakistan's largest province (1). It is marked by a rugged, highly differentiated environment with many different habitats (2). The Makran Range in the south divides the interior from the coastal plain. A number of successive mountain chains run from the Arabian Sea to the Hindukush, and form a barrier towards the fertile Indus plain in the east. These mountains enclose interior highland basins and deserts and are intersected by many river valleys (3,4). Kanrach Valley, BalochistanSoutheastern Balochistan is characterized by narrow river valleys which only occasionally provide space for alluviation, and thus agriculture. The catchment areas are smaller and, due to the high gradient of the tributaries, the seasonal floods are often destructive and wash away the soil (5). In such a harsh and barren environment, irrigation through channels, qanats, or seasonal flooding is an essential prerequisite for settlement (6). It thus developed early as an essential measure for the production of crops required by a growing population. The rising number of settlements from the beginning of settled life in the 6th millennium through the mid-third millennium BC witnesses the success of food production through farming and pastoralism. Pioneering archaeological fieldwork in this region was carried out by the great explorer Sir Aurel Stein, Hargreaves, W.Fairservis, B. de Cardi, J.-M. Casal, G.Dales, the Dept. of Archaeology and Museums, Karachi, and a couple of other explorers. The French excavations at Mehrgarh, Nausharo and Pirak in the Kachhi plain revealed a long cultural sequence from the Neolithic Period through the Iron Age. While another French Mission resumed work in Makran after a 30 year long gap in the late 1980's, southeastern Balochistan had remained a "white spot" on the archaeological landscape. In winter 1996-7, the Joint German-Pakistani Archaeological Mission to Kalat was founded to re-open work in this area. Kanrach Valley, Balochistan To date, three seasons of exploration were carried out in the plain of Las Bela, in the Kanrach and the Greater Hab (Hab, Saruna, Bahlol, Loi, Talanga) River valleys, and long the eastern foot of the Kirthar Range, covering altogether about 1900 square kilometers. As a result of this work, more than 300 archaeological sites were discovered and documented (7,8, 9). Many of them were threatened by destruction. The large number of prehistoric settlements, the size and sophisticated lay-out of some of them came as a surprise: nowadays the area is barren and inhabited by a few people. Interestingly, the sites indicate that a development from village to town and then to camp, and from agriculture to migratory pastoralism took place.
global_01_local_1_shard_00001926_processed.jsonl/62095
This at-home workout is a fun, easy way to get in a strength-based sweat session—no matter how busy you are. December 07, 2018 Alicia Archer, a dancer and an Equinox fitness instructor in New York City, created this workout specifically for this time of year. "You're pulled in so many different directions during the holidays," she says. "Barre won't add more stress to your life." The ballet-inspired moves focus on small motions and high reps, to help sculpt and strengthen. Single-Leg Push-Up Start on all fours with your right leg lifted and extended out behind you. Point your toes and bend your elbows, lowering your chest to the ground while raising your right leg even higher. Push back up and then repeat, maintaining a straight line from shoulder to heel. Side Plank with Dip & Leg Lift Lie on your left side with your legs stacked. Bend your left leg back, prop yourself up on your left forearm, and raise your right arm straight up in a slow, controlled manner. Pull your abs in and lift your hips, then raise your left knee up toward your chest. Hold for 2 seconds, then lower back down and repeat. Plié with Front Arm Lift & Pull Stand with your feet about 3 feet apart and turned out. Hold your palms in front of your thighs with a 3-lb. dumbbell in each hand. Bend your knees, push your hips back, and lower down as you raise your arms up to shoulder height. Bend your arms, pulling the weights back as you rise onto tiptoes. Lower back down and repeat, keeping your abs engaged. Plié with High Arm Biceps Curl Stand with your heels together and feet turned out, knees bent slightly. With a 3-lb. weight in each hand, raise your arms out to the sides, palms face-up. Keeping your back flat, rise onto your tiptoes, pulling your elbows into your sides. Return your arms out to your sides, bend your elbows and curl the weights toward your shoulders. Return your arms to your sides again and repeat the sequence. Triceps Lift & Extension Stand with your feet hip-width apart. Holding a 3-lb. weight in each hand with palms facing forward, hinge at the hips and lower your torso slightly, allowing your arms to naturally fall forward. Lift your arms back and up and bend your elbows, curling weights toward shoulders. Extend your arms back out, and then lower and repeat. Make sure you crunch those abs. Curtsy Lunge with Front Leg Extension Stand with your feet hip-width apart and your arms at your sides with a 3-lb. weight in each hand. Take a giant step back with your left foot, crossing it behind your right. Bend your knees, lower hips and torso, and extend your right arm diagonally back and up. Bring your left elbow to your right knee, and curl the weight toward your shoulder. Transferring the weight to the right foot, come to balance on the right foot as you kick your left leg up and extend your arms out, palms face down. Maintaining this position, bend your left knee and bring your left foot to right thigh. Lower your left leg, step back to the starting position, and repeat. Do 15-20 reps of each move for 2-3 sets, 2-3 days a week.
global_01_local_1_shard_00001926_processed.jsonl/62097
The biceps brachii, sometimes known simply as the biceps, is a skeletal muscle that is involved in the movement of the elbow and shoulder. It is a double-headed muscle, meaning that it has two points of origin or 'heads' in the shoulder area. The short head of each biceps brachii originates at the top of the scapula (at the coracoid process). The long head originates just above the shoulder joint (at the supraglenoid tubercle). Both heads are joined at the elbow. The biceps brachii is a bi-articular muscle, which means that it helps control the motion of two different joints, the shoulder and the elbow. The function of the biceps at the elbow is essential to the function of the forearm in lifting. The function of the biceps brachii at the shoulder is less pronounced, playing minor roles in moving the arms forward, upward, and sideways. Although it is generally considered to be doubled headed, the biceps brachii is one of the most variable muscles in the human body. It is common for human biceps to have a third head originating at the humerus. As many as seven heads have been reported.
global_01_local_1_shard_00001926_processed.jsonl/62133
The Murderabilia Controversy: Does Tupac Count? This Week's Rapper(s):JuzCoz This Week's Subject(s): Murderabilia. (Yep, that's an actual thing.) Thanks to some news bits floating around about some initiatives in Congress, we've recently been made aware of something called murderabilia. We decided to ask the guys least likely to be involved in it about it - local do Juzcoz raps about video games and whatnot. They were perfect. Ask A Rapper: Do you know what murderabilia is? Basically, it's collector's items, like sports memorabilia. Except instead of a game won jersey from a basketball game, people look to buy hair or weapons or blood-smeared clothes [Ed. Note: Or taxicabs] from murders. For example, remember that Black guy that killed in Jasper by those supremacists? Well you can currently buy dirt from his grave site online. Crazy, huh? Wil: Yeah, James Byrd Jr., I remember that hate. I think that's kinda messed up that someone's trying to make money off that. Unless its the family of the victim, which I doubt it. Rod: That's just sick. Why? AAR: There's a big market out there for it, although that probably shouldn't be very surprising considering there's also a big market of people who like to watch chicks poop on guys. JuzCoz [laughs]: Nothing surprises me these days. This is America, where anything that can make money will make money [laughs]. Listen to the radio. AAR: Given the glorification of murder and whatnot in a lot of forms of rap, what do you think the general opinion of this among rappers? Is it the realest thing ever, or is it despicable? J: Despicable, yes. And to me, [it's] disappointing because this could reopen old wounds. Books, songs and movies are one thing, but auctioning off actual items worn or used in a murder is a bit much. I have to draw the line. AAR: How long until Ganksta NIP opens his own store specializing in this type of thing? J [laughs]: I hope that doesn't happen, but only he can answer that. I wonder what would be in his store? [laughs] AAR: Hypothetically speaking, let's say you were forced to buy an artifact from a murder. There's no way around it. You either do it or the whole world explodes. What would you pick? It'd have to be something from Biggie's or Tupac's murder scene, right? Or would you go homer and get something from Hawk's or Pat's? Wil [laughs]: Okay, it would have to be Tupac's. Damn, I feel bad cause I'm actually excited about having something of Tupac's [laughs]. The 1996 Black BMW 750iL he was rolling in is what I would take. Lord forgive me [laughs]. I'd be rollin' in it right now, pullin' up on girls like, "Yeah, this Tupac's whip, and yes these the real bullet holes." AAR: [laughs] W: But with my luck, I'd prolly bump into 'Pac and he'd tell me the car's a counterfeit. R: This was THE weirdest interview. But you know, if they want murderabilia all they have to do is get our upcoming album. All the tracks were killed*. *Applause for that one. • Top Stories
global_01_local_1_shard_00001926_processed.jsonl/62150
Ingenieure für Flächentragwerke Mobile Roofs Convertible roofs react to different weather conditions and uses. They are a big step towards adaptable, multifunctional structures, which can improve decisively the exploitation, especially in modern stadiums. Foldable, textile membranes are ideal for this purpose and allow a reversible change in the geometry of the building. They get close to the function of natural structures that because of the evolution are light, material-optimized and adaptable.
global_01_local_1_shard_00001926_processed.jsonl/62152
The Concept of the Shapley Value and the Cost Allocation Between Cooperating Participants Alexander Kolker (GE Healthcare, USA) Copyright: © 2018 |Pages: 13 DOI: 10.4018/978-1-5225-2255-3.ch182 The goal of this chapter is to illustrate two mathematical game theory concepts for allocating costs (savings) between cooperating participants, specifically in healthcare settings. These concepts are the nucleolus and the Shapley value. The focus of this chapter is on the practical application of the Shapley value for the cost sharing within the bundled payments model for the episodes of care mandated recently by the Center for Medicare Services (CMS). The general Shapley value methodology is illustrated, as well as an important particular case in which each participant uses only a portion of the largest participant's asset (the so-called airport game). The intended readers are primarily leaders of organizations and hospitals involved in the implementation of the CMS mandated bundled payment model for the episodes of care. Chapter Preview By pooling resources and cooperating the participants usually reduce the total joint costs and realize savings. The question arises is how the reduced costs or the realized savings should be fairly allocated between them. There could be different definitions of fair allocation. Some of them are: • Equitable Allocation: Gives everyone the same satisfaction level, i.e. the proportion each player receives by their own valuation is the same for all of them. This is a difficult aim as players might not be truthful if asked their valuation. • Proportional Allocation: Guarantees that each player gets his share. For instance, if three people divide up an asset then each gets at least a third by their own valuation. • Envy-Free Allocation: Everyone prefers his own share to the others. No one is jealous of anyone else. No one would trade his share with anyone else’s. • An Efficient or Pareto Optimal Allocation: Ensures that no other allocation would make someone better off without making someone else worse off. The term efficiency comes from the economics idea of the efficient market. • Merit-Based Allocation: The more one brings to the coalition, the more one gets out of the division of the accumulated gains. A concept of fairness is rather subjective. It depends on the participants’ socio-economic views and other factors. The fairness schemes described in the next sections form a basis of the two most popular cost allocation approaches: the nucleolus (Tijs and Driessen, 1986; Saad, 2009) and the Shapley value (Roth, 1988; Young, 1994). Key Terms in this Chapter Shapley Value: A game theory concept aimed at the ‘fair’ allocation of the collective costs or profits (savings) between several collaborative participants. It is based on allocating the costs to the cooperating participants proportionally to the marginal contributions of each participant that is averaged over all possible combinations in which participants can cooperate. Nucleolus: A game theory concept defined as minimizing the maximum “unhappiness” of a coalition. “Unhappiness” (or “excess”) of a coalition is defined as the difference between what the members of the coalition could get by themselves and what they are actually getting if they accept the allocations suggested by the nucleolus. Coalition: A group of k cooperating partners. Cost Allocation: A problem that arises in many business situations that benefit from the effect of economy of scale or cooperating partners. 
Bundled Payment: A payment model to healthcare providers in which one single payment is disbursed to cover an episode of patient care through a contracting organization. The contracting organization is responsible for allocating the payments among all providers. Game Theory: A branch of applied mathematics that studies strategic situations in which participants (players) act rationally in order to maximize their returns (payoffs). Core: A set of inequalities that meet the requirement that no participant or a group of participants pays more than their stand-alone cost. Marginal Contribution: A value of the group with the player as a member minus the value of the group without the player minus the value created by the player working alone. Empty Core: A lack of unique cost allocation that satisfies all participants. If the core is empty, then unsatisfied participants have incentive to leave the cost sharing coalition. Complete Chapter List Search this Book:
global_01_local_1_shard_00001926_processed.jsonl/62202
Create a Bluemix account

If you already have an active Bluemix account, go to Step 2 (Set up API Connect). To sign up for a trial account:

1. Go to the IBM Bluemix sign up page, create an IBM id, and sign up for the 30-day trial.
2. Complete the form to create a new account. Click Create Account. You will receive an email with a login link.
3. Click the link in the email.
4. In the Log In to Bluemix page, enter your IBM id or email address in the form. Click Continue.
5. If you are prompted for a password, enter it.
6. On the Create Organization page, use the region that is preselected for you. Enter a name for your organization, such as your_name_org. Click Create.
7. On the Create Space page, enter a name for your space, such as dev. Click Create.
8. On the Summary page, click I'm Ready!
global_01_local_1_shard_00001926_processed.jsonl/62243
Grand Theft Auto Blamed After Eight-Year-Old Shoots Grandmother Grand Theft Auto Blamed After Eight-Year-Old Shoots Grandmother Here's CNN: "From a behaviour therapy perspective, I would say that's practicing," Kristopher Kaliebe, a LSU Health Sciences Center child psychologist told Here's MSNBC talking about virtual reality: What the hell is a eightyear old doing playing gta In the first place? Good work Grandma, access to a loaded gun and GTA IV. Top parenting. Not only that, games controls aren't shaped like a gun and don't operate like guns, pressing on the trigger of on the top edge of the control in no way resembles or simulates pulling the trigger on the bottom of a gun, someone has to have showed that child how to hold and fire the weapon, seems like this is a horrible accident attributed to a dangerous social gun culture that thinks it's ok to familiarise children with weapons. More importantly unless it was a revolver, He would need to 1. Take gun off safety 2. Chamber first round by sliding the barrel back (unless it was already cocked which is worse) 3. Shoot grandma before she realised he did 1 and 2. Unless Gun was loaded, cocked and off safety......... Was a revolver. What idiot has a loaded revolver around kids. Last edited 27/08/13 12:58 pm You would still have to cock the hammer. It would not be an easy feat for an eight year old to pull the trigger on a double action trigger. Exactly, I was in Target the other day (looking for Disney Infinity figures), and I overheard a couple of women talking about GTA IV, trying to decide which would be better, the main game or the two expansions. They clearly had no idea, so I decided to help them out. I told them around about how much game time you'd get out of each one. They decided to go for the main game. They then proceeded to tell me that it was for a 7 year old, and he was "just going to drive around in the cars anyway". Needless to say I was disgusted and wished I hadn't helped them in the first place. We could bring in an X rating for games and it still wouldn't stop kids getting access to them, purely because of idiot parents. Should've stepped up and told them right off! I took that liberty when I used to work in a retail, of asking parents if the game they were purchasing was for a kid, stating what the game actually is. People are deeply misinformed and seem to not realise that yes, these games have a lot of violence. It may not be terribly damaging to a 15 year old, but it's definitely not something for a 7 year old kid. Yeah there are 4 guns on the cover, plus the adult rating. Not saying I disagree though, well done for you for trying to stem the tide of idiocy. Last edited 27/08/13 6:43 pm I'd buy GTA: IV for a 7 year old if all they did was drive around in a car. If they ran someone over? Show them that there are consequences for actions -- either stop playing the game for a while, or let the cops get 'em, or something like that. What the hell is an eight year old doing with a loaded gun? I never knew GTA gave kids guns?!? This is an outrage!! I could watch and play whatever I wanted when I was younger. Violent media does not cause mental issues. That is to say, it may enhance already unstable minds though, but through more than 2 decades of violent media games/movies/books/comics have not made me want to hurt anyone. I hate US news. Same thing with me, only I was brought up with and regularly used firearms. I don't think I've ever shot anyone. 
I'm sorry but I find amusement in the idea that you're not entirely sure about it! Congratulations on getting the joke! I'm American and I approve this message. I swear if someones dumb ass little kid screws up my chances of playing GTA V I will............ *looking for grandma's new address* :D While I agree with you for the most part, there's one annoying little factor I keep coming back to in the back of my mind. The games that we grew up on: Doom, Quake, Half Life all the way to medal of honour and call of duty 2 etc don't look anything at all like the games kids grow up on today. By comparison, the games of today are far more immersive, realistic looking and intense. The stuff we had looked obviously fake. Easy to distinguish as fake. The first time I played F.E.A.R in the dark, I was so immersed in the game I was physically terrified at some points. My brother did the same and hit his knee on the table trying to dodge the guy who smacks you with a bit of wood in the first level. The games we played as children could never invoke those kinds of feelings. GTA1 could never reach the amount of immersion in GTA IV. I think the more we push for realism in games, the more credibility games causing violence arguments will get. Especially considering the ignorant masses who buy everything for their kids often without question. Okay - The first 2 Questions should be - Why is there a LOADED GUN within reach of an 8 year old child and WHY was this 8 YEAR OLD CHILD playing a R18 Game? Fucking Stupid Media. Well actually the game is MA15+, but over in the states it is 18+... Sorry couldn't help myself :-) But yes, I read this last night, and all I thought of was 'here we go again'. As a video game retailer, it saddens me whenever I hear these stories. These games are not meant for children at all, so why are parents letting them play them? Is it because of peer pressure, that "oh, Jimmy has it, so why can't I?" Or is it simply because of the fact that when they go into stores that sell video games, they just want to get out, so they will take whatever they have chosen first? The other issue is do we as retailers attempt to educate parents about the content that are in these games? Do we try to make the point of that it's not suitable for their kids? Do we encourage parents to at least sit down with their kids and experience the content for themselves, so that at the very least, they can explain to a child what is happening in an adult situation, and why it is not a good thing. But as many people have already pointed out, the underlying issue about this is why is there a loaded firearm in the house, that is accessible to a child? Isn't there some law that requires you to keep them separate from each other? And if video games are to blame, as is violent media, why do we let our kids watch it, and why do we not educate our kids on the difference between reality and fantasy? Why is it that if your child is going to grow up around a fire arm, why not teach them about it, why it's a dangerous weapon, and when is it ok to use them, and not to use them? That would be responsible parenting and that can't be encouraged in the USA. Much better to find something easily blamed As retailers, yes. Absolutely. When I worked at EB I would tell patents exactly what was in that MA game if they were buying it for their child. 
I didn't make any friends in the under-15 demographic, but I was constantly taken aback by how little patents knew about how much the content and presentation of gaming had changed since they pumped twenty cent pieces into Pac Man as teenagers down the milk bar. I once offended a parent by saying "lines of cocaine" in front of their child when describing the content of GTA IV - I was blown away by the fact that they'd prefer to have that sort of thing presented to their ten year old completely devoid of context, than to hear it for potentially the first time from a guy behind a counter. I lost more than a few sales appeasing my conscience, and at the end of the day I totally support the fact that it's the parent's choice. I considered it an important part of my job to inform parents of the content every time regardless of how often they came into the store, but ultimately to leave the decision making to them. I would say that yes, peer pressure is probably part of the problem, also ignorant parents that don't know what a game's about and have absolutely zero interest in finding out, that's what the kid asked for so that's what they get them, it's just easier. Lastly, there's some parents that just don't care. Back when GTA: San Andreas first came out, I was in an EB Games store and witnessed a 10 year old pestering his dad for the game, the sales assistant repeatedly advised against it, but he just didn't care and bought it anyway, because it gets the kid off his back. I remember as a kid, I used to watch Robocop, Highlander, and a bunch of other violent movies. Thing was, my parents, despite working 2 jobs each, somehow still found the time to sit me down and teach me right from wrong. It never ceases to amaze me how parents blame everything else for how their kids turn out. I know there are some real parents out there but they seem so far and few in between My parents were the same. I was playing GTA at a young age, even playing Conker's Bad Fur Day when I was about 5 or 6. My parents always sat me down and explained it (although, left the "subtler" sexual content of Conker's Bad Fur Day slide; it went over my head anyway), and I understood that what was going on in the games was wrong, and that's why it was funny and outrageous. I'm no parent, hell, I'm not even really an adult, so it's not my place to tell people how to raise their kids, but damn, explaining things and educating your own children isn't so hard, and it can really go a long way. If you leave too many things in the dark, they become curious and find out anyway, probably in a way you didn't want them to. Kid's are only stupid if you make them so. I do understand that showing a kid GTA or something of the like at a young age isn't really appropriate though. I wasn't negatively affected by it, but I can't say that for everyone. I'm not sure if this applies to video games, but I know for a fact that it is illegal for an employee to sell alcohol to an adult if it's evident that they're buying it for someone underage - YOU can be fined if that's what they end up doing. If you have any suspicion that that's what they're doing you must refuse them service. So, does the same apply to video game sales staff? And the parents? If a parent buys an MA15+ game explicitly for their child who is under 15 years old, is the parent or the staff member that sold them the game (or both) breaching the law? No. Under the current classification, MA games cannot be sold to people under 15 unless accompanied by a parent or guardian. 
So long as the parent buys it for them, irrespective of whether they sit and watch them play or not, the sale is legal. Making sure the parent knew the content was part of what I considered, for lack of a better term, my duty of care as a responsible employee. Sensationalism at its finest. Ps. I always found that particular track suit combo and colour palate detestable on Niko. A nice black turtleneck and a pair of chinos was much more stylish. I've already had a rant about this somewhere else, but yeah the first thing that springs to my mind is why was he playing a game rated for mature audiences anyway, as well as being in a house with access to a loaded firearm. Still, that doesn't make for a good news story does it? You don't get points for killing people, what games even have points now? Also why the fuck does a 8yr old have access to a loaded gun? Thats the real question I do wish points were more prominent again though. Imagine a game like Assassin's Creed with a point system; might actually give you a real reason to use the tools the dev's give you. It's always going to be the video game's fault or heavy metal's fault. Scapegoats have been used since the beginning of time (slight exaggeration, but in the spirit of the story) rather than taking on the underlying causes of situations such as this, or blatant causes such as a loaded gun being easily obtained by an 8 year old. As you say, blaming video games is 'sexy' and sensationalist and like it or not, the media is in the ratings business and not responsible journalism so they will always take the sensationalist route and spin half truths that grab people's attention rather than reporting a more mundane story that people will have no interest in. Personally, I blame this culture entirely on Murdoch and it won't go away. People will turn on what they don't understand and what's been drilled into their head as being an inherent evil in society. Video games and heavy metal are a bigger killer than smoking and drinking if you believe what you read and hear Video games are a threat to Mr Murdochs assets. They're playing games on their televisions instead of watching his sensationalist bullshit that they pass off as journalism. I bet if the kid was watching a violent movie and then shot his grandma it would be a totally different story but just because it was a video game now its the game industry's fault. Besides how would an 8 year old have access to a loaded gun in the first place, stupid really. Saw this as a video on Google+ (but can't watch at work due to being behind a proxy) and I'm thinking to myself "This has to be a fox news thing" - low and behold, the fine mainstream media. Well If was to take a page from the news reporters books and just assume something as fact I would say that the boy played GTA then grab old grandma gun and ran around the house going bang you are dead to all the pretend police and gangsters. When grandma demanded the gun back he went bang your dead for realz this time. That might sound like the likely case but why was there a loaded gun in a mobile home....more to the point why was the safety turned off (unless it was a revolver)... Now all I need to see and hear (I don't have access to the videos at work) is serial Rockstar/Take-Two Interactive pest Jack Thompson getting onto mainstream media and pushing that Rockstar makes "Murder Simulator" games called GTA and trying to push for the game to be banned. depending on the age of the revolver they can have a safety. 
but yes why was there a loaded gun within easy access of an 8 year old. the US will never change its gun laws without a massive revolution but that kind of revolution is what brought the 2nd amendment in the first place The sad part is that the real issues that need to be addressed simply go unchecked because people are content with taking the easy route. let's hope the kids don't start nuking India after playing Civ5. I... always go for a cultural/science win. c.c (Well. Once they realize they can't breach my technologically superior defenses. Or leave their islands. On account of the battleship blockades.) i also use this tactic. contain and control. i will normall clear whatever continent i am on so it is all my nation I wonder if this is an Australian mind-set thing. :) What with us basically being a nation-continent. In Australia we don't have these issues constantly like America. We are not stupid enough to have guns (that are loaded) lying around for 8 year olds while letting them play adult games. We also don't have these wack jobs on TV missing the point while being called experts. F*** America. They are there own worst enemy. You've obviously never watched Today Tonight or A Current Affair. Shot 90-yr-old after playing videogame incorrect. it should read: Shot 90-yr-old after picking up an unsecured and loaded firearm funny how news outlets can never seem to get the actual events in order. unless........... due to exposure to some previously unknown radiation type, the kid mutated until his own goddamn hand turned into a gun & the console itself developed sentience & gained the ability of hypnotic suggestion, so the kid was then compelled to shoot his grandmother. yeah.... that's probably the only scenario where videogames could conceivably be blamed. Edit: spelling. Last edited 27/08/13 9:17 am Oh great, here we go again. Who names a town Slaughter? You are just asking for trouble. its an abbreviation. years ago, this town was the happiest place in America, and was called South Laughter. over the years, it got shortened to Sth. Laughter, an then to S. Laughter. then someone in North Laughter got hold of a videogame & wiped Nth. Laughter off the map. So the residents of Sth. Laughter decided to just run it together - Slaughter. These people make me sick. The only people who feel this kind of thing are us Aussies. We will get the bans, the 'refused classification'. The game is exactly that. A game. What is it rated in the USA? I know for damn sure it's not suitable for 8 year olds! Because he played it though, it's the fault of the game? And if there was no game involved at all, would we blame the gun? Obama can't change gun laws there because you gangsters and red necks love your 2nd amendment too bloody much. Then when something like this happens, blame the video game! It obviously isn't the gun guys, nor the parent/grandparent that left a loaded gun accessible to an eight year old, and allowed an eight year old to play the game. No, it's the game that made him do it. And I would point at it too if I were him. Imagine all the people asking him, "did that game make you do this sweetie"? As a kid a would say yes, thinking it's an easy way to stay out of trouble. Confiscate America's guns, not it's video games if you want to solve this problem! During the prohibition era people were still able to get there hands on alcohol. Banning/confiscating guns won't stop people getting access to them. There would be a significant decrease in gun crime for one. 
Also, if guns were illegal, sure, people could get their hands on one, but it would be MUCH more difficult for an eight year old. I read a study about the drop in gun crime in Australia. The study specifically looked at the gun buy back. What it showed was that there was no increase in the decline of gun crime after the buy back. What it did show however was that in the years before the buy back there was already a steady declining trend in gun crime and this continued after the buy back. They linked this to better education and also better social support such as dole etc. Making things illegal doesn't stop bad people getting their hands on them. Just look at Sydney with all its troubles with drive-by shootings lately. Or look at all the drugs that flood into the country. The problem here wasn't the gun or the game the problem was lazy parenting and a failure of the grandma to keep dangerous things be that a game, gun, knife or hot plate out of the hands of a minor. It's a deeper issue than that. Having access to firearms is an inalienable human right for citizens in the States, and everything else is secondary to that fact. You'll never get guns out of households in America - and for a broadcaster to suggest that doing so might prevent such tragic occurrences would damage their credibility with a quantifiable percentage of their audience. It doesn't surprise me that the real issue - that of access to firearms - isn't what's being debated here. It saddens me, yes - but surprises me? No. This is not a new story. Another couple of months will go by, another minor will kill someone, the American media will play the violent videogames card, and we'll have the exact same debate in Kotaku's comments section. American culture, mindset, and attitude to firearms isn't going to change. What will change, however, is what they attribute the blame to in order to defend their constitutional rights. At least the right to free speech is also constitutionally protected - so, whilst guns aren't going anywhere (for better or worse), neither is adult-oriented entertainment. But despite the hundreds of millions spent on research to determine whether or not video games are the cause of violent behaviour, the result has always come back that they are not. It is merely the easiest excuse for one of America's biggest problems, it's gun laws and culture. ... I know. That's pretty much my point. The tendency is to attribute blame, rather than looking for the root cause. In typical American fashion, point the finger at someone else. Good article Jason, I like that you pointed out that we can’t definitively say that GTA had no impact on this kid before he did what he did. Far too often we see games journalists get overly defensive and make absolute statements about the impacts of violent games, the bottom line is any kind of media (good, bad or otherwise) will likely have some influence on the behaviour of VERY young children. Making over the top statements sometimes makes us look almost as bad as the nutters you see in the videos above. At the end of the day there’s only one overriding factor here and it’s simply bad parenting. An 8 year old child shouldn’t be allowed to cook his own dinner let alone be left in a dangerous and unsupervised environment and if he’s sitting around playing an R rated game with a loaded firearm nearby then he’s clearly not being looked after. There’s plenty of things in this world that are unsafe or unsuitable for children, you can’t ban them all. 
If you don’t look after kids, then things like this will happen. I think games journalists *have* to get overly defensive to try and drown out the huge presence of the counter-argument. It's a bit difficult fighting a completely irrational party with a reasonable and balanced retort. I'm gonna give a loaded pistol to my eight year old. According to the media it's perfectly okay for him to have one. Totally not letting him anywhere near GTA of course. Reminds me of that great t-shirt: "Guns don't kill people. GTA kills people" This is stupid. America doesn't have the right to complain about this sort of shit happening when there are products such as my first gun aimed at kids. You have problems with your gun laws America get your shit together and do something about it, the fact that it is apparently legal to sell guns to an age group that isn't even old enough to buy them should be your first warning sign EDIT: after looking up an article the kid apparently thought it was a toy gun and it was in a purse or something. Feel pretty sorry for that kid he's not going to get over this very easily Last edited 27/08/13 10:52 am This smells of PR... PR for GTA V. I think Rockstar have learned over the years, that all news is good news as it gets their game/name out in the mindset of people. Is it a tragedy? Sure, but how many more people will be aware that a game called GTA is coming out... Join the discussion! Trending Stories Right Now
For Some Folks, 'Side Mouth' Ruins Anime Faces

You know how sometimes anime characters have mouths on the side of their faces? Does that bother you? Or are you totally cool with that?

Recently on 2ch, Japan's largest internet forum, a thread popped up discussing how this anime mouth style is "upsettingly gross". Below, you can see the drawing the original commenter used to illustrate the point:

[Image]

On the left it reads "normal" (普通 or futsuu), and on the right it says, "irritatingly creepy" or "upsettingly gross" (むかつくキモイ or mukatsuku kimoi). Some commenters thought this style of mouth illustration was OK if it was cute. Like so:

[Image]

Others thought the side mouth made them feel discomfort, and pointed to examples where normal mouth was used instead.

[Images]

Here is a realistic portrayal of the mouth shown from a three-quarters angle. I guess this is the "correct" way to portray side mouth?

[Image]

"Man, it's anime," wrote one commenter. "Let this slide." I kinda agree! So does Sonic:

[Image]

See, Sonic agrees. Showing characters' mouths like this is not new and has existed for a long, long time in both anime and manga. This is anime, so whatever, right? As long as it's not real-life side mouth. That is utterly terrifying.

[Image]

こういう作画のアニメむかつくよな [2ch]

You can see the curves of the lips on the side of their face. Urgh, Japan really needs to watch Avatar. THAT is what gets me, they've taken the time to do the lip outline in profile, but then completely ignore it and do a side mouth that flaps on its own, while the lip outline stays still. So weird. I remember Evangelion used it heaps as a cost cutting exercise, and while that was ok back then as they were rushed for time and money, it shouldn't become the norm now just because someone is lazy or they want to rush a substandard series out the door. iirc, most japanese animation is actually done in Korea. So yeah get Korea Depends on the studio, art style and actual implementation. Nothing wrong with a little distortion whether in practice or artistic style. It's what makes physical exaggeration in cartoons and anime so conventional in some aspects. Gah, I like K-On! but I cant watch it ever again thanks to this. I didnt notice it until you guys pointed it out! Cheaper to animate it that way. Character is side-on but if you animate the mouth moving you have to do a whole bunch of key animation, but by side-mouthing it like that they can animate mouth movements without needing to re-draw the whole frame. TV anime is all about maximum effectiveness from minimal budget. EDIT - Never mind, that's just going to result in flame wars. I'll just think it instead. Does it really matter where the mouths are when they aren't lip-synced anyway? Well, in Japanese they might be, but they certainly aren't in English. nope, Japanese actors also have to match lip flaps (except in the more expensive productions where they can afford to do the animation after), though they also tend to do recordings in single sessions rather than actor by actor. Anime is, in part, a stylistic medium. It's an artistic shortcut to highlight expressions that might otherwise be hidden or unclear. I don't regard it as anything more of a negative than sweat drops or motion lines. Now, if you want gross, look at the bloody nose trope... Side mouth is nothing compared to the terrible animation that is Western children's cartoons.
A lot of those Warner Bros superhero cartoons are really hard to watch. It doesn't bother me when it's subtle. The opening image is a perfect example of this. The girl on the right side looks alright (it only looks weird when you realise that the contour of her lips is actually on the front of her face), while the girl with the huge, gaping mouth looks terrible. God damn it. I never had an issue with this before because i just never took any notice of it. Anime has been destroyed for me *insert glass shattering sound here* I think it's one of those "Dont really mind until someone points it out" >_> Showing characters' mouths like this is not new and has existed for a long, long time in both anime and manga. Considering that there were quite a few examples of it in emaki, ezoshi and ukiyo-e (All older art styles from across the centuries, starting in the 13th century) I'd say yes. I have to agree though, when the mouth is so far removed from where it should be with respect to the turn of the head, it really bugs me.
Three Breton Folk Songs for Four Guitars (arranged by Dan Jones)

This is another work in the Eleftheria Kotzia Series. Dan Jones studied at the Royal Welsh College of Music and Drama with John Mills and has undertaken masterclasses with John Williams. These three French folk songs have been cleverly arranged and would make ideal concert material. Titles are The Little Bird in the Wood, The Old Widow and Oh! If I Go to the Army. For four guitars. Grade 6

£12 (book)
£8 (pdf)
On September 28, California Governor Jerry Brown signed into law S.B. 1001, which makes it illegal “for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election,” unless the person discloses its use of the bot in a manner that is “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts.” Dubbed a “Blade Runner law” by some because of its intent to expose the use of robotic systems online, the law’s scope and impact will depend to a considerable degree on the enforcement discretion apparently left in the hands of the California Attorney General’s Office (and perhaps district and city attorneys as well), which under California’s expansively interpreted Unfair Competition Law (UCL) can seek $2,500 per violation as well as equitable remedies. Private plaintiffs may also try to use the UCL to seek injunctive relief and restitution for violations of the anti-bot law. The statute defines a “bot” as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.” It defines “online platform” as “any public-facing Internet Web site, Web application, or digital application, including a social network or publication, that has 10,000,000 or more unique monthly United States visitors or users for a majority of months during the preceding 12 months.” The law expressly provides that it “does not impose a duty on service providers of online platforms, including, but not limited to, Web hosting and Internet service providers.” A response in part to the “computational propaganda” deployed most notably during the 2016 US election cycle and in part to concerns raised by parents’ groups about advertising aimed at children, the law raises significant First Amendment issues. It does not take effect until July 1, 2019, but it may become a model for other jurisdictions considering legislative responses to automated methods of shaping online content. For example, Senator Feinstein (D-CA) has recently introduced a federal “Bot Disclosure and Accountability Act.”
(702) 252-8383
• Open 24hrs on weekends
• 5% interest per week (oac)
• No checking account required
• We give you cash, not a check
• Option to pay interest only
• Credit lines from $100 - $2500

At The Loan Depot we lend money the "Old Fashioned Way." Other stores make their actual interest rate as confusing as possible and when it is all over you are stuck with a complicated interest structure and an actual interest rate of 10% or more. At The Loan Depot our interest rate is 5% per week on your outstanding balance. No gimmicks, no hidden fees, no bull. Just simple interest and flexible payments.

[Table: weekly payments by loan amount]

Welcome to The Loan Depot
Open 24 hours on weekends!
(702) 252-8383
5870 S. Decatur Blvd. Suite 145 - Las Vegas, Nevada 89118
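To make the stated rate concrete (the balances here are examples, not quotes from the lender): at 5% per week, a $500 outstanding balance accrues $25 of interest for that week, and a $2,500 balance accrues $125; paying interest only leaves the balance unchanged for the following week.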
Interpolation in CSS without animation

Ideas for a more general-purpose interpolation function in CSS.

Interpolation is the estimation of a new value between two known values. This simple concept is vastly useful and it's commonly seen in animation on the web. With animation you declare the target properties and the end-state, and the browser will work out the values in-between. Animation happens over time, but this is not the only dimension where interpolation can occur. In fact we interpolate values regularly in design, albeit manually, and particularly in responsive design. You may even do it unknowingly. Because of this, I think there is a need for a more native way of interpolating CSS values outside animation.

If you are a web designer, the chances are you frequently have two primary screen sizes in mind: a small screen and a large screen, or the device and the desktop. I know you probably think about more than just these two sizes, but these two sizes are especially important; they represent the upper and lower bounds of your design. When you make adjustments for screen sizes between these constraints, what you are doing is like interpolation. When adjusting properties such as font-size, font-weight, image width or grid dimensions at specific screen sizes between the upper and lower bounds, these values usually fall somewhere between the choices you've made for the largest and smallest screen sizes. It would be unusual for the font to get larger, then smaller, then larger again as the viewport changes. Or to give another example, if a font varied between bold, normal, italic, then bold and italic. It's not unusual for these things to change from one state to another, but typically these changes are progressive, not back and forward.

Design intent vs constraints

We choose break-points where properties are to be adjusted. We don't do this because it is ideal; we're forced to select a fixed number of break-points, often quite arbitrarily, where the design should change. Although sometimes we may want these break-points, more often it is due to technical limitations on the web. Media queries are our primary tool for adjusting design in relation to the screen size, and for practical reasons we are constrained to using a limited number of these. These limitations have shaped how we think about web design, and the choices we make about using break-points don't necessarily reflect the pure intentions of the designer.

I've been told that good design is rarely arbitrary. It serves a purpose. If the font size is smaller, larger or its weight stronger, it's because that is the best experience for users at that screen size. It's feasible to say that the best experience for some aspects of design will vary directly in relation to the screen size rather than only at set points. This is the use-case for interpolation without animation. Let's illustrate this with an example; imagine the following CSS:

    body {
      font-weight: bold;
    }

    @media screen and (min-width: 700px) {
      body {
        font-size: 1.2rem;
        font-weight: normal;
      }
    }

It's unlikely a designer would decide bold font is uniquely suited to screen resolutions below 700px. Why would one pixel make such a difference? Design decisions like this are often the result of constraints imposed by media queries. A more likely intention is for the font-weight to be adjusted in relation to its size, for improved legibility on smaller screens.
Media queries are the best tool available for approximately achieving this goal, but they are not always an accurate reflection of the designer's intent.

Maximum safe operating pressure

I noticed the label on my barbecue gas cylinder says it has a maximum safe operating pressure. If I exceed this pressure when refilling it, it might explode (it actually won't, they have safety valves, but just imagine it would). Web design doesn't explode quite as spectacularly as gas cylinders, but responsive design is exposed to a different kind of operating pressure. As the screen size gets smaller, there is often a point where a design is pressured by the limitations imposed by smaller screens. A break-point represents the point where the design cannot withstand this pressure any longer; it has reached its maximum safe operating pressure and the appropriate response is to adjust some aspects of the design.

Designers choose these break-points carefully. They probably have in mind where constraints like this begin to pressure the design, and how quickly it impacts overall quality. But in a compromise to technology, we are forced to choose a middle point, knowing that immediately before and after the break-point the design is often still pressured by constraints that demanded change.

[Image: gradient demonstrating the location of ideal font-sizes in relation to a break-point and the design pressure experienced between these points]

This graphic attempts to illustrate the location of ideal font-sizes in relation to a break-point. You can move the ideal font-size closer to the break-point, but this only shifts the pressure to somewhere else in the design. Alternatively you can add more break-points until this becomes problematic, but ideally these changes would be introduced gradually and continuously to reduce pressure on the design as it's required. Media queries are not the right tool for this.

Media queries have been around longer than responsive design, and responsive design was as much a reaction to the available technology as the idea of media queries was to user needs. As is often the case, real-world implementations of responsive design pushed the technology further than spec writers had imagined, and uncovered new uses, new requirements and new limitations. This is a normal process. And with the perspective we have now, it's easy to ask: if we were designing a technical solution for responsive design today, would media queries be the best tool to implement designers' intentions? I think not; or at least not the only tool.

Live interpolation in the browser today

Theoretically, between two ideal points there is an ideal value for every screen size, one that can be expressed as a ratio, or a function relative to the screen size (or even another relative factor). Previously I've written about techniques you can use to achieve some forms of interpolation using calc() and viewport units. My favourite example of this demonstrates how you can interpolate between different modular scales with heading levels. Not only do the individual font-sizes change in a controlled way relative to the viewport, but the ratio between the heading levels also fluidly changes. This means there is a changing but consistent relationship between each of the headings. If you haven't seen this yet, you should read my article on precise control over responsive typography. This technique allows linear interpolation between any two length values.
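To make the discussion concrete, here is a minimal sketch of the kind of calc() and viewport-unit formula that technique relies on. The specific sizes and breakpoints below are illustrative, not taken from the article:

    /* Illustrative only: scale font-size linearly from 16px at a 400px
       viewport to 24px at a 1000px viewport.
       (100vw - 400px) is a length; dividing by the unitless range (1000 - 400)
       and multiplying by the unitless size difference (24 - 16) keeps the
       expression valid inside calc(). */
    html {
      font-size: calc(16px + (24 - 16) * ((100vw - 400px) / (1000 - 400)));
    }

    /* Media queries are still needed to clamp the value outside the
       400px to 1000px range, otherwise the size keeps shrinking or growing. */

This only works because every term is either a length or a unitless number; the moment you need to divide one length by another, you run into the limitation discussed next.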
This is great, but linear interpolation is not the only form of interpolation, and length values are not the only properties that change in responsive design. In addition to that, the first example in this article demonstrated a situation where font-size should change relative to the screen size, and font-weight should change relative to font-size. At the moment this isn't possible with CSS.

Limitations of interpolation with calc()

There are some limiting factors when it comes to changing the font-weight in relation to the font-size. Firstly, the calc() techniques work only with length values, and font-weight is a unitless value. The problem with interpolating unitless values could potentially be solved with something called 'unit algebra'. Unit algebra would allow calc() expressions that contain CSS units to resolve to a different unit type or even a unitless number, e.g. calc(2rem * 2rem) could resolve to 4rem², and calc(2rem / 1rem) to the unitless number 2. This could allow us to interpolate unitless values like font-weight or line-height, and maybe even open the door to non-linear equations (by multiplying units by themselves). Whilst this would be a great feature, the syntax for these equations is likely to be complicated and still leaves us wanting a more native solution. We're also not likely to see this anytime soon. As far as I am aware there is no formal proposal, and this exists only as an idea discussed in W3C mailing lists.

The second problem with interpolating properties like font-weight is that by default a web font won't have all the variations required to smoothly interpolate between these values. Usually a font-family will include the standard font and a single variation for bold, or at worst, just a faux-bold. Adding more variations will increase network requests, loading time and FOUF (Flash Of Unstyled Font). This is another constraint designers will be familiar with.

Variable fonts and the future of font interpolation

Luckily the problem of limited font variations has a solution that is relatively close on the horizon. Variable fonts offer the ability to specify how bold or italic a font should be. And not just bold or italic but other 'axes of variation'. You can read more about variable fonts in Andrew Johnson's excellent A List Apart article: Live font interpolation on the web. In his article Andrew mentions a need for "bending points—not just breaking points—to adapt type to the design". He also hints at some challenges we face interpolating font-values effectively on the web. My main concern is that many of these 'axes of variation' are not length values and therefore, whilst I'm excited for the opportunities that variable fonts will provide, I see their potential limited by existing constraints.

How interpolation and animation works in browsers

CSS is already great at interpolating values and it knows how to do this with a whole bunch of different animatable properties and property types. We can interpolate the value of any property that can be animated using CSS transitions or keyframe animations. During an animation the browser works out how much time has elapsed for every frame and picks an intermediary value. For example, if 1 second of a 4-second animation has elapsed, we pick a point that is 25% of the way between the original and final value. This is easy to understand with numeric properties like width or position, but it works exactly the same with properties like color. Just imagine the same process happening for each of the R, G and B values that represents the color.
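As a worked example (with made-up numbers): a linear interpolation can be written as value(t) = start + (end - start) * t, where t is the completion between 0 and 1. Interpolating from rgb(0, 0, 0) to rgb(200, 100, 40) at t = 0.25 applies that formula to each channel independently, giving rgb(50, 25, 10).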
You can think of them as 3 separate 2D interpolations that combine to give a color at each step of the animation. An interesting side note with CSS animations is that no matter what values you use to define color, the browser will always transition through an RGB colour space. This means that although the final colour will be the same, the path taken and intermediary colors will be different.

We can manipulate the timing of an animation to get different results at different points of interpolation. By plotting an animation timing function on the same graph above, we can see how this changes the value returned at different points in the animation, while the start and end values remain the same. This is a non-linear interpolation and it's really handy for creating all kinds of animation effects and more natural looking movement with acceleration and easing. We can define animation timing functions in CSS using keywords, steps or cubic bezier curves for greater control.

Interpolation outside animation

So far I've discussed the problem with media queries not always reflecting design intentions, and the limitations of interpolation with calc(). I've also shown how new features like variable fonts might be constrained by these limitations. The interesting thing is, we have all the tools we need to solve these problems in CSS right now. Only they are tied closely to animation in the browser. The rest of this article is going to talk about the idea of exposing a native interpolation function in CSS, how it might work, and what problems it might solve. It's very hypothetical and it's ok if you don't agree with either the idea in general or how it should work.

I've talked about interpolation and animation together; however, interpolating values over time is just one possibility. The duration and elapsed time of an animation simply provides a percentage completion. Somewhere within the browser an interpolation function is called and it will dutifully return a value at the given percentage completion, according to the timing function. Let's imagine we could access this function directly in CSS and pass it our own percentage. If we could change this value using media queries, CSS variables (custom properties) and calc(), what are some of the things we might be able to do?

First let's imagine a syntax. We need an initial-value, a target-value, a percentage-completion and a timing-function. The timing function could be an optional value and default to a linear interpolation. That means it might look something like this:

    interpolate(initial-value, target-value, percentage-completion, [timing-function])

And could be used like this:

    .thing {
      width: interpolate(0px, 500px, 0.5, linear);
    }

Note: This is not real CSS; it is a hypothetical solution to a real problem for the purpose of discussion.

Obviously in the example above it would be far easier to set the width to 250px. So, interpolation functions are not that useful without variables. We do have some variable values in CSS. Things like:

• the viewport width and height,
• the width and height of an element or its container,
• the number of siblings an element has, or
• the order of an element amongst its siblings.

These are all things that in one context or another we can know and use in CSS; unfortunately in many cases these variables are not easily queried to create conditional statements. There are some useful tricks to take advantage of them. Things like advanced fluid typography and quantity queries are great real world examples.
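For readers who haven't met quantity queries, here is a minimal, illustrative sketch of the selector trick (the threshold of six and the style applied are arbitrary choices for the example):

    /* When a list contains 6 or more items, every item is matched:
       the first selector hits items with at least 6 items from themselves
       to the end of the list, and the sibling combinator catches the rest. */
    li:nth-last-child(n+6),
    li:nth-last-child(n+6) ~ li {
      font-size: 0.8em;
    }

Like the calc() technique earlier, this queries a "variable" (the number of siblings) indirectly, through selectors, rather than through anything resembling a real conditional.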
A more hypothetical example in a native interpolation function might look something like this:

    :root {
      --max-viewport: 500px;
      --min-viewport: 1000px;
      --range: var(--max-viewport) - var(--min-viewport);
      --percentage-completion: calc( (100vw - var(--min-viewport)) / var(--range) );
    }

    .thing {
      width: interpolate(0px, 500px, var(--percentage-completion), ease-in);
    }

Although the above calculation is quite simple, it's more than a bit ugly. This is because it uses CSS variables and unit algebra concepts I mentioned earlier to work out a percentage completion. A far neater solution would be a function to work out a percentage. This would reduce the above to something far more digestible like this:

    :root {
      --percentage-completion: percentage(500px, 1000px, 100vw);
    }

    .thing {
      width: interpolate(0px, 500px, var(--percentage-completion), ease-in);
    }

Note: Any interpolation function would probably need to clamp returned values to the specified range, as negative completion percentages are a likely result with variables.

This doesn't need to work with just length values. I mentioned that CSS has a whole bunch of animatable properties that it already knows how to interpolate. It makes sense that any native function should work with these definitions. This means interpolating a color is also valid:

    :root {
      --percentage-completion: percentage(500px, 1000px, 100vw);
    }

    .thing {
      background-color: interpolate(red, green, var(--percentage-completion));
    }

The above example of changing the background color doesn't make much sense in relation to the viewport, but there are more legitimate use cases for interpolating a color in relation to an element's width. We just can't as easily query the properties needed to do this, as we can with the viewport. Container queries seem to be forever on the horizon. It won't be soon, but my hope is that container queries also ship with container and element units, that work much like viewport units, only for the width of an element. Container query units might look something like this:

    Unit    Description
    cqw     Relative to 1% of the container width
    cqh     Relative to 1% of the container height
    cqmin   Relative to 1% of the container width or height, whichever is smaller
    cqmax   Relative to 1% of the container width or height, whichever is larger
    eqw     Relative to 1% of the element width
    eqh     Relative to 1% of the element height
    eqmin   Relative to 1% of the element width or height, whichever is smaller
    eqmax   Relative to 1% of the element width or height, whichever is larger

Note: I used the cq prefix because ch is already a valid unit type, and eq for consistency.

With units like these, we could do something like this:

    :root {
      --percentage-completion: percentage(0px, 100cqw, 100eqw);
    }

    .thing {
      width: interpolate(0px, 500px, var(--percentage-completion), ease-in);
    }

In this example the percentage-completion is the percentage width of a child element in relation to its parent element. Allowing CSS property values to be relative to context like this opens up a whole range of possibilities for things like dynamic progress bars, creative navigation components and data-visualisation.

But maybe this isn't the right solution. If we have a unit type for viewport width, container width and element width, where does this stop? DOM order, line length, color? Is it better to introduce another function to get a value? E.g. value-of(width). If we do this, what about container width and non-CSS properties like DOM order or line length? Magic keywords? value-of(dom-order). I don't know!

Perhaps you don't agree with any of this. Perhaps you think we shouldn't introduce more functional features to CSS. That's ok.
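For readers who want to experiment now: for length values, something close to the hypothetical examples above can already be approximated with today's calc(), by inlining the interpolation formula rather than naming it. The values below mirror the earlier examples and are illustrative only:

    /* Roughly: interpolate(0px, 500px, percentage(500px, 1000px, 100vw)).
       Dividing by the unitless 500 (rather than by 500px) keeps calc() happy. */
    .thing {
      width: calc(0px + (500 - 0) * ((100vw - 500px) / (1000 - 500)));
    }

    /* Unlike the hypothetical interpolate(), nothing clamps this to the
       0px to 500px range, and there is no way to apply a timing function. */

The point of a native interpolate()/percentage() pair is precisely to remove this boilerplate and to extend the idea to unitless and non-length values, which calc() cannot express.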
I hope you will agree that there is a need for discussion: that break-points don't necessarily match the intentions of designers, and that interpolation will become a more significant feature of web design with the introduction of variable fonts and an increasing adoption of viewport units and dynamic layout features. I'd like to start a discussion, and if you have ideas please let me know, or consider contributing to the issue on the CSS Working Group's GitHub page.
mattfast1 - the fast one As authored by Matt About Me Name: Matt Van Dusen A/S/L? (A cliche question, I know, but everyone asks anyway) 31/M/Denver, CO, USA Any siblings? Yes. Any pets? Two cats, named Punkin and Felix. Online time? Sadly, I am connected somehow pretty much 24/7 these days. Online life? I am known to delve into many aspects of code, especially PHP and mySQL (the technologies that run this site). When I'm not working or dealing with code, I am known to play a variety of games, some of which are multiplayer and some of which are not. Favorite movies? Too many to mention. Here are a few- Independence Day, Shrek, Men In Black, Apollo 13, Dude Where's My Car, Harry Potter, Lord of the Rings, SPACEBALLS The Movie, and Monty Python. Anyone who doesn't like Monty Python should be shot. Repeatedly. In the groin. With rusty sporks. Favorite drink? Good old-fashioned H2O. Favorite type of music? I don't really have a favorite. A better question would cover my least favorite type of music: Country. If you were unquestionable ruler of the whole world, what would you ban?: Internet Advertising, bad songs, lawyers and people who either don't like me or I don't like them. Favorite vacation? I don't have time or money for a vacation! How long do you spend online per day? A lot, but it's actually a lot less than it used to be. Hobbies: My girlfriend, computer games, fixing computers, watching TV, the web, building the web, theatre, etc. etc. etc... What made you decide to make a website? Because I could. No, really, what made you decide to make a website? I wanted to. I had the available resources. I really really really hate some places on the web, and here I can rant about anything I want until I'm blue in the face. So why don't you just use your blog? This website allows me to provide the information I want to, to the people I want to. How many websites do you have? I used to have many. Then, Geocities was bought out. I wiped out all my accounts there, because I know they were going to wreck it. I was right. Currently, I own or operate websites on several different domains, including this one. I have run or designed sites for several other domains as well. How long did the website take you to make? The current site started as an independent redesign project of a VERY crappy website, which had its code downloaded in March of 2003. I kinda slacked off on it for a year, then started developing it as my own in April 2004. The actual launch was in December 2005 as part of my final for a web design class I only took to keep full-time student status. Finally, the first full launch (v0.6.6) was officially released on Wednesday, February 8, 2006 at 5:58pm -0700 Mountain Standard Time. The current incarnation was begun August 9, 2013, and utilizes a framework I designed and built from scratch for some of my work-related projects. How do you make money? I work for The Geek Squad, which is a group of over 20,000 active Agents who work inside Best Buy stores (for computer repair as well as automotive installation), inside our clients' homes (for computer repair, home theater installation and service, and appliance installation and service), as well as from locations of their choice (generally their homes) performing remote computer repair. I am in the remote computer repair division, working from my home to provide the best service possible to clients around the world. Favorite color? Black. (Clarification: The color defined by hexadecimal value #000000.) 
I am also a fan of the color of blue defined by hex value #006699. How can you ramble off hex values of colors? A long web design career. In a relationship? Yup. Do you have any enemies? Of course I do! I don't like having them. But I have just collected a few of the years. Due to people who either hate me for no particular reason (you know who you are) or they are fucking horrible bastards who hopefully will die a slow and painful death and rot in hell. Ever killed anyone? Not yet and I don't particularly plan to either... unless you don't like my website. You like my website, right? Favorite TV Show? The SIMPSONS, CSI: Crime Scene Investigation, Survivor, Mind of Mencia, South Park, Futurama, Family Guy, NCIS, Deadliest Catch, Mythbusters, Smash Lab. That's all I really ever have time to watch. Although, I guess that's quite an extensive list, compared to what used to be here. What are you listening to right now? See for somewhat up-to-date information. Favorite bands? Weird Al Yankovic, Eminem, T.a.T.u., Weird Al Yankovic, Linkin Park, Evanescence, Godsmack, Metallica, Rammstein, System of a Down, and anything else good!
A Tetanus Education 'Twas but a scratch, a centimeter-long slice deep enough to shed drops of blood that made perfect circles as they fell onto the sand. It seemed no more harmful than a paper cut, drying up quickly and causing no pain. But the fact that it was borne by a rusty, cut-off stump of an old metal signpost that once likely warned beachgoers to "keep off the dunes," set off one thought in my mind: tetanus. Terrified of needles as a child, I cautiously eyed the ground after hearing my mother say that stepping on a rusty nail meant having to get a shot (how was I to know I was already vaccinated?). So, when my companion cut his big toe on a Belmar beach in New Jersey that Saturday, I had just one directive -- tetanus shot. He hadn't had one since he was a kid. "I don't think I need it," he said as he wiped a piece of rust from the cut. Given the lack of severity, I wasn't sure if he did. The debris made me a bit more concerned, but still, my only evidence was a childhood fear. Consulting the iPhone didn't help, since it was hard to find and download PDFs of the epidemiology of tetanus cases from MMWR. And being a weekend, the family doctor wasn't available. We couldn't find a nearby urgent care clinic, and being unsure of the window within which the vaccine should be administered, the only solution was the emergency department. Two hours later, my companion was not only immunized against tetanus, but diphtheria and pertussis as well. Even then, my question was, "Was this all necessary?" I've since come to a conclusion, after a tetanus education that began when an emergency department nurse said Clostridium tetani bacteria were just as likely to thrive in wood. Tetanus from a splinter? Good thing my mom hadn't told me that. More commonly, tetanus spores are found in animal (and human) feces, said George Pankey, MD, an infectious disease specialist at Alton Ochsner Medical Foundation in New Orleans. The good news, Pankey said, is that the organism "doesn't like living in healthy tissue. It can be in the stool and it won't bother you because it won't vegetate in that situation and produce the material that the body converts to the toxin." (C. tetani produces two endotoxins, tetanolysin and tetano-spasmin, the latter of which is a neurotoxin and causes the actual symptoms of tetanus). But give it an anaerobic environment like a puncture that goes deep and closes quickly -- say, a splinter or a cat bite -- and you've got the right conditions for infection. That's why intravenous drug users are at especially high risk of tetanus. So much for all that worry about rust, I thought. Also, the bigger the injury, Pankey said, the higher the risk. Yet this wasn't a cat bite, there were no feces nearby, and it was such a small cut. So was getting vaccinated the right thing? Pankey's answer was a resounding Yes. Even if this particular scratch wasn't going to lead to lockjaw, vaccination would last another 10 years, covering my companion if he does encounter a cruel cat or a spore-laden splinter during that time. In fact, studies have shown that tetanus cases more often result from smaller wounds, because larger ones are better taken care of -- and treated preventively with tetanus toxoid vaccines. However, even Pankey said that "if you step on a rusty nail, you have a better chance of infection from Pseudomonas than tetanus." So I'm still wondering why there's so much paranoia surrounding this lone bacteria. Although I certainly wouldn't trade the peace of mind that comes with a Tdap booster.
Introduction to Data Mining presents fundamental concepts and algorithms for those learning data mining for the first time. Each concept is explored thoroughly and supported with numerous examples. Each major topic is organized into two chapters, beginning with basic concepts that provide necessary background for understanding each data mining technique, followed by more advanced concepts and algorithms.

Salient Features
1. Provides both theoretical and practical coverage of all data mining topics
2. Includes an extensive number of integrated examples and figures
3. Offers instructor resources, including solutions for exercises and a complete set of lecture slides
4. Assumes only a modest statistics or mathematics background, without any requirement of database knowledge
5. Covers important topics such as predictive modeling, association analysis, clustering, anomaly detection and visualization

More Details about Introduction To Data Mining

General Information
Author(s): Pang Ning Tan, Vipin Kumar and Michael Steinbach
Edition: 1st Edition
Publish Year: October 2016
Phone Sex Hi hunny, I am a submissive slut. I want to have hot nasty phone sex with you. I have big tits and a firm ass that is perfect for sex or whatever. Use me for your sexual fantasies, no matter how kinky. I love almost everything, oral and anal sex and all sexual postitions. I 'm eager to experiment with any ideas you might have. Call me and tell me what you want to do to me.
Zombie's Fury Game, Zombies, Casual Be careful! Don't let the zombies catch you. Try to kill as many zombies as you can, before they kill you. - Drag all over the screen to move the player. - To shoot the zombies tap over their heads as fast as you can, but be careful to not run out of bullets! Enjoy the game!
For the last few millennia, libraries have been the custodians of human knowledge. By collecting books, and making them findable and accessible, they have done an incredible service to humanity. Our modern society, culture, science, and technology are all founded upon ideas that were transmitted through books and libraries.

Then the web came along, and allowed us to also publish all the stuff that wasn't good enough to put in books, and do it all much faster and cheaper. Although the average quality of material you find on the web is quite poor, there are some pockets of excellence, and in aggregate, the sum of all web content is probably even more amazing than all libraries put together. Google (and a few brave contenders like Bing, Baidu, DuckDuckGo and Blekko) have kindly indexed it all for us, acting as the web's librarians. Without search engines, it would be terribly difficult to actually find anything, so hats off to them. However, what comes next, after search engines? It seems unlikely that search engines are the last thing we're going to do with the web.

What if you had your own web crawl?

A small number of organizations, including Google, have crawled the web, processed, and indexed it, and generated a huge amount of value from it. However, there's a problem: those indexes, the result of crawling the web, are hidden away inside Google's data centers. We're allowed to make individual search queries, but we don't have bulk access to the data.

(I'll gloss over the problem of actually implementing those analyses. The signal-to-noise ratio on the web is terrible, so it's difficult to determine algorithmically whether a particular piece of content is any good. Nevertheless, search engines are able to give us useful search results because they spend a huge amount of effort on spam filtering and other measures to improve the quality of results. Any product that uses web crawl data will have to decide what is noise, and get rid of it. However, you can't even start solving the signal-to-noise problem until you have the raw data. Having crawled the web is step one.)

The web link graph

The idea of collating several related, useful pieces of content on one subject was recently suggested by Justin Wohlstadter (indeed it was a discussion with Justin that inspired me to write this article). His start-up, Wayfinder, aims to create such cross-references between URLs, by way of human curation. However, it relies on users actively submitting links to Wayfinder's service. I argued to Justin that I shouldn't need to submit anything to a centralized database. By writing a blog post (such as this one) that references some things on the web, I am implicitly creating a connection between the URLs that appear in this blog post. By linking to those URLs, I am implicitly suggesting that they might be worth reading. (Of course, this is an old idea.) The web is already, in a sense, a huge distributed database.

By analogy, citations are very important in scientific publishing. Every scientific paper uses references to acknowledge its sources, to cite prior work, and to make the reader aware of related work. In the opposite direction, the number of times a paper is cited by other authors is a metric of how important the work is, and citation counts have even become a metric for researchers' careers. Google Scholar and bibliography databases maintain an index of citations, so you can find later work that builds upon (or invalidates!) a particular paper.
As a researcher, following those forward and backward citations is an important way of learning about the state of the art. Similarly, if you want to analyze the web, you need to be able to traverse the link graph and see which pages link to each other. For a given URL, you need to be able to see which pages it links to (outgoing links) and which pages link to it (incoming links).

You can easily write a program that fetches the HTML for a given URL, and parses out all the links — in other words, you can easily find all the outgoing links for a URL. Unfortunately, finding all the incoming links is very difficult. You need to download every page on the entire web, extract all of their links, and then collate all the web pages that reference the same URL. You need a copy of the entire web.

[Figure: Incoming and outgoing links for a URL. Finding the outgoing ones is easy (just parse the page); finding the incoming ones is hard.]

Publicly available crawl data

An interesting move in this direction is CommonCrawl, a nonprofit. Every couple of months they send a crawler out into the web, download a whole bunch of web pages (about 2.8 billion pages, at latest count), and store the result as a publicly available data set in S3. The data is in WARC format, and you can do whatever you want with it. (If you want to play with the data, I wrote an implementation of WARC for use in Hadoop.)

As an experiment, I wrote a simple MapReduce job that processed the entire CommonCrawl data set. It cost me about $100 in EC2 instance time to process all 2.8 billion pages (a bit of optimization would probably bring that down). Crunching through such quantities of data isn't free, but it's surprisingly affordable.

The CommonCrawl data set is about 35 TB in size (unparsed, compressed HTML). That's a lot, but Google says they crawl 60 trillion distinct pages, and the index is reported as being over 100 PB, so it's safe to assume that the CommonCrawl data set represents only a small fraction of the web.

[Figure: Relative sizes of CommonCrawl, Internet Archive, and Google (sources: 1, 2, 3).]

CommonCrawl is a good start. But what would it take to create a publicly available crawl of the entire web? Is it just a matter of getting some generous donations to finance CommonCrawl? But if all the data is on S3, you have to either use EC2 to process it or pay Amazon for the bandwidth to download it. A long-term solution would have to be less AWS-centric. I don't know for sure, but my gut instinct is that a full web crawl would best be undertaken as a decentralized effort, with many organizations donating some bandwidth, storage, and computing resources toward a shared goal. (Perhaps this is what Faroo and YaCy are doing, but I'm not familiar with the details of their systems.)

An architectural sketch

Here are some rough ideas on how a decentralized web crawl project could look. The participants in the crawl can communicate peer-to-peer, using something like BitTorrent. A distributed hash table can be used to assign a portion of the URL space to a participant. That means each URL is assigned to one or more participants, and that participant is in charge of fetching the URL, storing the response, and parsing any links that appear in the page. Every URL that is found in a link is sent to the crawl participant to whom the URL is assigned. The recipient can ignore that message if it has already fetched that URL recently.
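As a rough illustration of the two pieces just described (outgoing-link extraction, and hash-based assignment of URLs to participants), here is a small Python sketch. It is not from the article; the libraries used (requests, BeautifulSoup) and the fixed participant count are assumptions made for the example, and a real system would also need politeness, retries, robots.txt handling and a proper distributed hash table:

    import hashlib
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    NUM_PARTICIPANTS = 16  # assumed, fixed for the sketch

    def outgoing_links(url):
        """Fetch one page and return the absolute URLs it links to."""
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        return {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)}

    def assigned_participant(url):
        """Map a URL to the participant responsible for crawling it."""
        digest = hashlib.sha1(url.encode("utf-8")).hexdigest()
        return int(digest, 16) % NUM_PARTICIPANTS

    if __name__ == "__main__":
        for link in sorted(outgoing_links("https://example.com/")):
            print(assigned_participant(link), link)

Inverting this (building the incoming-link index) is exactly the part that needs the whole crawl: every participant would emit (target URL, source URL) pairs for the links it finds and send each pair to the participant that owns the target URL, which is the collation step described above.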
The system will need to ensure it is well-behaved as a whole (obey robots.txt, stay within rate limits, de-duplicate URLs that return the same content, etc.). This will require some coordination between crawl participants. However, even if the crawl was done by a single organization, it would have to be distributed across multiple nodes, probably using asynchronous message passing for loose coordination. The same principles apply if the crawl nodes are distributed across several participants — it just means the message-passing is across the Internet rather than within one organization's data center.

[Figure: A hypothetical architecture for distributed web crawling. Each participant crawls and stores a portion of the web.]

There remain many questions. What if your crawler downloads some content that is illegal in your country? How do you keep crawlers honest (ensuring they don't manipulate the crawl results to their own advantage)? How is the load balanced across participants with different amounts of resources to offer? Is it necessary to enforce some kind of reciprocity (you can only use crawl data if you also contribute data), or have a payment model (bitcoin?) to create an incentive for people to run crawlers? How can index creation be distributed across participants?

(As an aside, I think Samza's model of stream computation would be a good starting point for implementing a scalable distributed crawler. I'd love to see someone implement a proof of concept.)

Motivations for contributing

Why would different organizations — many of them probably competitors — potentially collaborate on creating a public domain crawl data set? Well, there is precedent for this, namely in open source software. Simplifying for the sake of brevity, there are a few reasons why this model works well:

• Cost: Creating and maintaining a large software project (e.g. Hadoop, Linux kernel, database system) is very expensive, and only a small number of very large companies can afford to run a project of that size by themselves. As a mid-size company, you have to either buy an off-the-shelf product from a vendor or collaborate with other organizations in creating an open solution.

• Competitive advantage: With infrastructure software (databases, operating systems) there is little competitive advantage in keeping a project proprietary because competitive differentiation happens at the higher levels (closer to the user interface). On the other hand, by making it open, everybody benefits from better software infrastructure. This makes open source a very attractive option for infrastructure-level software.

• Public relations: Companies want to be seen as doing good, and contributing to open source is seen as such. Many engineers also want to work on open source, perhaps for idealistic reasons, or because it makes their skills and accomplishments publicly visible and recognized, including to prospective future employers.

I would argue that all the same arguments apply to the creation of an open data set, not only to the creation of open source software. If we believe that there is enough value in having publicly accessible crawl data, it looks like it could be done.

Perhaps we can make it happen

What I've described is a pie in the sky right now (although CommonCrawl is totally real). Collaboratively created data sets such as Wikipedia and OpenStreetMap are an amazing resource and accomplishment. At first, people thought the creators of these projects were crazy, but they turned out to work very well.
We can safely say they have made a positive impact on the world, by summarizing a certain subset of human knowledge and making it freely accessible to all. I don’t know if freely available web crawl data would be similarly valuable because it’s hard to imagine all the possible applications, which only arise when you actually have the data and start exploring it. However, there must be interesting things you can do if you have access to the collective outpourings of humanity. How about augmenting human intellect, which we’ve talked about for so long? Can we use this data to create fairer societies, better mutual understanding, better education, and such good things?

Cropped image by gualtiero on Flickr, used under a Creative Commons license.
global_01_local_1_shard_00001926_processed.jsonl/62477
Ernie Fletcher, Governor of Kentucky December 28, 2005 The Honorable Ernie Fletcher Governor of Kentucky 700 Capitol Ave., Ste. 100 Frankfort, KY 40601 Dear Governor Fletcher: I learned from my friends at PETA that your state capitol houses a large bust of Colonel Sanders. On behalf of kind people everywhere, I urge you to remove it. As you may know, KFC is under worldwide pressure to eliminate its cruelest abuses of chickens, such as cutting the beaks off baby birds; breeding chickens to grow so large, so quickly that many suffer crippling injuries; and slitting the birds’ throats or dropping them into tanks of scalding-hot water while they are still alive and able to feel pain. Workers at one KFC “Supplier of the Year” slaughterhouse in West Virginia were documented tearing the heads off live birds, spitting tobacco into their eyes, spray-painting their faces, and slamming them on the ground. Please watch my video at and join compassionate consumers in sending the message that KFC’s cruelty to chickens is unacceptable by removing the bust of Colonel Sanders from your state capitol. My PETA colleagues would happily volunteer to remove it at no cost to the state. Pamela Anderson
global_01_local_1_shard_00001926_processed.jsonl/62478
Friday, July 04, 2014 What I bought - and played - from the Steam Sale 1) One Finger Death Punch 2) Doom 3 BFG Edition You have to understand, playing this game all those years ago freaked me the hell out, even while running it on an underpowered Pentium 2 (or whatever it was). I never managed to get more than a few hours in before saying "nope" loudly and doing something else instead. Weirdly it feels like I'm cheating in the new edition due to being able to wave the torch and the gun around at the same time. I'm too fast. I should be more sluggish. Maybe that's just the Pentium 2 talking. 3) Ghost Control Inc. Okay, a bit of a cheat here because I don't remember if I bought this in the sale or just prior to it starting. I don't really care, because it's fantastic. Remember the map screen from the old Ghostbusters game on the Atari 2600? Take that and mash it up with XCOM style ghost battles and you have a great little game. I mean, look at it: 4) Super Amazing Wagon Adventure oh my god 5) Knights of Pen and Paper +1 Edition There's meta, and there's this. If I have this right, you play regular people dressed as fantasy characters playing  tabletop RPG where they're attacked by fake real fake monsters. Or something. You also gain stat boost by pimping out the Dungeon Master's pad, and it has a TARDIS in it. How do you not own this game, basically. 6) One Way Heroics Take Groundhog Day, the left-to-right chase mechanic of FTL and a chatty fairy. This is the game you'll end up with. If you're bored of endless yakking with characters in RPGs, you'll love this because it's a little bit like a solo Marathon (albeit with swords and the occasional monster). How far can you run? With the exception of Doom, none of the above are AAA+ games. To be honest most of the big titles available right now look like they'll bore me to tears so these are a welcome addition to the ranks. I'd ask you what you bought and played, but I'd be amazed if there's anybody still out there. Is this thing on 1 comment: Glen McNamee said... The best part of Doom 3 BFG is Doom 1 & 2. I got it a while ago, and I haven't touched Doom 3 yet.
global_01_local_1_shard_00001926_processed.jsonl/62506
Dismiss Notice Join Physics Forums Today! An explannation of gravity 1. Oct 31, 2005 #1 We put ball A and ball B by distance R, every atom in ball A has some positive charges and some negative charges, same as atoms in ball B. Now suppose every positive charge in ball A attracts every negative charge and repells every positive charge in ball B, and verse visa. Now we have all the single force add up, we find it equals to f=gxm1xm2/rxr I once did some calculation, it sounds looked right, but now I forgot the details. So, I believe, gravity is the shadow of electrostatic force. An easier way to see this is to put two atoms apart and calculate the forces between all positive and negative charges. You may find I was right. If not, please let me know why, be appreciate. 2. jcsd 3. Oct 31, 2005 #2 Doc Al User Avatar Staff: Mentor OK, but realize that the net charge on each ball is zero. That's exactly what happens. Nope. The electrostatic force is zero. On the other hand, the gravitational force, which is associated with mass not charge, is given by Newton's law of gravity. 4. Oct 31, 2005 #3 User Avatar sounds looked like a crank turning to me. 5. Oct 31, 2005 #4 Why is an atom attracts another atom? They are both zero net charged. My point is even in every atom the net charge is zero, but the distance between positive charge and negative change between two atoms will smaller than same charges. The net force end up a mighty small percentage of the electrcostatic attration force. This is what we called gravity. 6. Oct 31, 2005 #5 User Avatar Science Advisor And the point of everyone who has answered is that you are wrong. Very delicate have been done to determine precisely the force given by the difference in distances you are talking about- they do not give anything like the gravitational force. It is also well known that "neutron stars"- which have NO positive or negative charges at all- have strong gravitational fields. 7. Oct 31, 2005 #6 User Avatar Gold Member Do you mean that oppisite charges will be pulled together? That also wont cause an attraction, cause an equal amount of + and - charges will be pulled from each atom, so for example, a + charge in mass1 pulls a - charge from mass2 closer to it, but at the same time a + from mass2 is being pulled by a - in mass 1... 8. Oct 31, 2005 #7 User Avatar Staff Emeritus Science Advisor Education Advisor Where EXACTLY does another attom "attracts" another atom? In molecules? In solids? Have you studied what happens there? Have you heard of orbital HYBRIDIZATION? Have you looked at what KINDS of atoms that can form such a thing? Have you studied orbital bonding and why certain atoms can form such bonds while others can't? Well you SHOULD, at least BEFORE you put forward such outlandish speculation. You may also want to re-read the PF Guildlines on speculative posting before you proceed any further. 9. Oct 31, 2005 #8 User Avatar Staff: Mentor Well.... because your idea is wrong. It isn't magnetism, it's gravity. Nope. Still wrong. An object that has all it's magnetic poles aligned in such a way as to not have a balanced (note: unbalanced doesn't mean it isn't still zero) charge is a magnet. Magnets don't behave anywhere near the way objects in gravity behave. 10. Oct 31, 2005 #9 Why is gravity has the same nature as electical force? Is any other force has the form of f=c m1m2/rr? 11. Oct 31, 2005 #10 User Avatar Staff Emeritus Science Advisor Education Advisor That isn't a good enough reason! 
There are MANY instances in physics where the mathematical forms are similar. Doesn't mean they are all the same phenomena sharing the same mechanism! If this is your only argument for your "theory", you have a lot of holes to plug. Furthermore, electric fields are not the result of ANY "shielding" of anything. Yet, this is how you are modelling your gravitational interaction. So in your model, gravity and electric field are of a DIFFERENT nature. So it is YOU who has to account for why they have the same dependence on distance! I'm guessing you realize why your argument on why neutral atoms "attract" one another is no longer valid. 12. Oct 31, 2005 #11 Let's see this way. Calculate the force between one atom in ball A and another in ball B. Suppose the sample atom has p3 and e1, e2, e3 chagres, it is nutraul. The distance between the center of the two atoms is r, it is much larger than the dr which is the radin of the atom, then we have F=fp3e1+fp3e2+fp3e3+fp3e1+fp3e2+fp3e3-fp3p3-fe1e1-fe1e2-fe1e3-fe2e1-fe2e2-fe2e3-fe3e1-fe3e2-fe3e3. It end up an attraction force, it increases with the mass (total charge) and decreases with distance square. Last edited: Oct 31, 2005 13. Oct 31, 2005 #12 User Avatar Staff Emeritus Science Advisor Education Advisor Dear PF member, The thread has been locked because it contains opinions that are contrary to those currently held by the scientific community. This is against the Posting Guidelines of Physics Forums. If you would like to discuss your ideas, we invite you to submit a post to the Independent Research Forum, subject to the applicable guidelines, found here. We appreciate your cooperation, and hope you enjoy the Forums. Best regards, The Staff of Physics Forums
global_01_local_1_shard_00001926_processed.jsonl/62543
16 Prototyping Tools & How Each Can Be Used Prototyping is an intimate part of the Design Thinking process. It’s the necessary part of the design process where we get the chance to prove out our crazy ideas. And when the prototype is finished, we test it. We validate and challenge our assumptions. We adjust our designs when new information arises. But how do we know which tool to use? How far should we go with the prototype? Here are 4 questions you should be asking whenever you approach a prototype: 1. Are you building for mobile, tablet, or desktop? 2. How high fidelity does your prototype need to be? 3. How fast do you need to go? 4. How much of the experience do you need to show?
global_01_local_1_shard_00001926_processed.jsonl/62546
castration culture shows ancient, bloody grip in Hesiod’s Theogony castration of Uranus After she had children with Uranus, Gaia with vague and unsubstantiated charges incited his castration. Gaia said to the children she had with Uranus: Children of mine and of a wicked father, obey me, if you wish: we would avenge your father’s evil outrage. For he was the first to devise unseemly deeds. [2] Consensual heterosexual activity isn’t unseemly, nor should it be blamed on men. In ancient Greek theogony, Gaia was the original exponent of parental alienation. Cronus, son of Gaia and Uranus, responded like a well-programmed zombie to her work of parental alienation: Mother, I would promise and perform this deed, since I do not care at all about our evil-named father. For he was the first to devise unseemly deeds. Gaia rejoiced at her son’s willingness to commit horrible violence at her behest. She arranged for an ambush: She placed him in an ambush, concealing him from sight, and put into his hands the jagged-toothed sickle, and she explained the whole trick to him. And great Sky came, bringing night with him; and spreading himself out around Earth in his desire for love, he lay outstretched in all directions. Then his son reached out from his ambush with his left hand, and with his right hand he grasped the monstrous sickle, long and jagged-toothed, and eagerly he reaped the genitals from his dear father and threw them behind him to be borne away. [3] Fathers understand Hesiod’s Theogony in a deeply personal, deeply painful way. College students not yet fathers are beginning to recognize castration culture. Jupiter (Cronus) castrating his father Saturn (Uranus) When Bernardus Silvestris wrote a new cosmogony in twelfth-century France, he confronted the castration culture that Hesiod’s Theogony described. As Roman culture developed with awareness of the earlier Greek culture, the Roman god Saturn assimilated the Greek god Cronus. Bernardus described Saturn as: an ancient to be most strongly condemned, cruel and detestable in his wickedness, savagely inclined to harsh and bloody acts. { accusatissimus veteranus, crudelioris quidem et detestandae malitiae, dirisque ac cruentis actibus efferatus. } [4] Writing under gynocentrism, Bernardus didn’t dare attack long-established castration culture directly. He challenged it figuratively with a new poetic description of Saturn’s violence: he {Saturn} mowed down with a blow of his sickle whatever was beautiful, whatever was flourishing. Just as he would not accept childbirth, so he forbade roses, lilies, and the other kinds of sweet-scented flowers to flourish. { insumpto falcis acumine, quicquid pulchrum, quicquid florigerum demetebat. Rosas et lilia et cetera olerum genera, sicut nasci non sustinet, non sustinet et florere. } Destroying gardens is a poetic metaphor for castration. Bernardus sought to create a new, more humane cosmos. His Cosmographia ends with creating man’s penis and celebrating the importance of the penis’s skillful work. Penetrating castration culture to implant the seeds of a new imaginative world requires gleaning discarded resources in literary history. For example, classical Latin Priapus poems exposed the brutalizing and commodifying stereotypes of men’s person. Asinus aureus recounted a woman’s delight in a large male member. The medieval French knight Geoffrey de La Tour Landry taught his daughters concern for violence against men. 
Unlike Hesiod’s Theogony, the medieval Latin Cosmographia of Bernardus Silverstris excludes castration culture. All just, merciful, and loving persons should strive to create such a world. *  *  *  *  * Read more: [1] Relative to other western Eurasian creation myths, female parthenogenesis is far more important and marked in Hesiod’s Theogony. Park (2014) pp. 265-9. Scholars today tend to see female parthenogenesis as part of the primordial Golden Age: The mythic form his {Zeus’s} act of creation assumes completes the trend of the Theogony that began with Earth’s natural parthenogenetic capacity and ends with the male’s imitation of her. The seal is set on the finality of the transition from female dominance to male dominance by conscious male usurpation of her procreative functions, the basic source of her mystery and power. Zeitlin (1996) p. 108. Students are now thoroughly indoctrinated with these threadbare clichés of anti-meninism: Evidently, Hesiod merely reflected the opinions of his society onto his story of the gods’ creation of the universe, creating a justification for the philosophical opinions of the society in which he lived. Could myths like the Theogony have been used to reinforce the patriarchy as it operated in ancient Greek society? It seems likely. Pelos (2016). Of course it seems likely when you live in a Soviet-style indoctrination camp. See the praise for the good little apparatchik. [2] Hesiod, Theogony ll. 164-6, from Greek trans. Most (2007) p. 17. Subsequent quotes are from id. ll. 170-2, 174-82. The earlier Loeb edition of Hesiod’s Theogony, translated by Hugh G. Evelyn-White (1914), is available online. [3] Reflecting the fantastic imagination that now drives totalitarian sex tribunals at American universities, Vernant declares: Ouranos {Uranus} sprawls over Gaia, covering her permanently, and discharges into her without stopping, imposing on her an incessant copulation—at least, at night (Theog. 176). There is neither spatial separation nor temporal interlude between them, in this union without pause. Vernant (1990) p. 466. This contempt for Uranus isn’t warranted. Without Uranus’s work, ordinary life would be impossible. Uranus provides relief that helps persons begin a new day. Echoing tenets of today’s dominant castration culture, Park declares: The emasculation of Uranus is key to progress: it ends his sexual relationship with Gaea and explains in symbolic terms the separation between earth and sky. Park (2014) p. 271. Castration also occurs in the earlier Hurrian-Hittite Kumarbi Cycle (Song of Kumarbi). In the Hittite version of that theogony (from the fourteenth or thirteenth century BGC), Kumarbi overthrows king Anu by biting off his genitals. For an English translation, Bachvarova (2013). Long-established castration culture has broad bite today in the broad criminalization of men’s sexuality. While he presented castration culture, Hesiod himself seems not to have been an anti-meninist. Hesiod sought to provide men with prudent counsel in the work of their ordinary days: Do not let a fancy-assed woman deceive your mind by guilefully cajoling while she pokes into your granary: whoever trusts a woman, trust swindlers. Hesiod, Works and Days ll. 373-5, from Greek trans. Most (2007) pp. 117, 119. [4] Bernardus Silvestris, Cosmographia, Microcosmus 5.5, from Latin trans. Wetherbee (2015) pp. 101. The subsequent quote is similarly from Microcosmus 5.6, id. pp. 101, 103. 
Underscoring his condemnation of Saturn / Cronus for promoting castration culture, Bernardus describes “the barren and frozen wastes of Saturn {infecunda Saturni frigora}” as a place: where the peace of the sky had been broken and shivered with chill and icy harshness. { ubi gelidis et pruinosis rigoribus demutata caeli tranquillitas inhorrescit. } Microcosmus 5.7, id. p. 103. I suspect that the Latin gelidus shares a common origin with the Old Norse term gelda (“geld, castrate”). Other medieval Latin poetry sadly shows more tolerance for castration. Walter of Châtillon, who vigorously addressed the poetic problem of man-hating Amazons, nonetheless wrote: Nor should priests be excused who are known to be fouling themselves with their sheep. Some have been castrated for this or put to death, whenever Fortune has wanted some amusement. { Sed neque presbiteros decet excusari quos cum suis ouibus constat inquinari unde quosdam contigit uel ementulari uel perimi quotiens uoluit fortuna iocari. } Walter of Châtillon, Stulti cum prudentibus 16, Latin text and English translation from Traill (2013) pp. 116-7 (poem 43). Aelred of Rievaulx recorded a brutal account of a monk being castrated for having sex with a nun about the year 1160. [images] (1) Cronus castrates his father Uranus at his mother Gaia’s urging. Oil on panel. Giorgio Vasari  and Cristofano Gherardi, 16th century Florence. Thanks to Wikimedia Commons. (2) Jupiter (Cronus) castrating his father Saturn (Uranus). Illumination from late-fifteenth-century manuscript of Le Roman de la Rose {The Romance of the Rose}, made for Louise of Savoy, mother of French King Francis I. Oxford Library MS. Douce 195, folio 76v. Bachvarova, Mary. 2013. “Translation of the Kumarbi Cycle, with Song of Hedammu separated into two different versions.” Pp. 139-63 in López-Ruiz, Carolina. Gods, heroes, and monsters: a sourcebook of Greek, Roman, and Near Eastern myths in translation. New York: Oxford University Press. Most, Glenn W., trans. 2007. Hesiod. Theogony. Works and Days. Testimonia. Loeb Classical Library 57. Cambridge, Mass: Harvard University Press. Park, Arum. 2014. “Parthenogenesis in Hesiod’s Theogony.” Preternature: Critical and Historical Studies on the Preternatural 3.2: 261-283. Pelos, Andy. “The Asexual Revolution: Parthenogenesis in Hesiod’s Theogony (Revised).” A Classic(s) Dilemma. May 12, 2016. Vernant, Jean-Pierre. 1990. “One…Two…Three: Eros.” Ch. 14 (pp. 465-78) in Halperin, David M., John J. Winkler, and Froma I. Zeitlin, eds. Before sexuality: the construction of erotic experience in the ancient Greek world. Princeton, N.J.: Princeton University Press. Wetherbee, Winthrop, trans. 2015. Poetic works: Bernardus Silvestris. Dumbarton Oaks Medieval Library 38. Cambridge, Massachusetts: Harvard University Press. Zeitlin, Froma I. 1996. Playing the other: gender and society in classical Greek literature. Chicago: University of Chicago Press.
global_01_local_1_shard_00001926_processed.jsonl/62566
theExpeditionarium Ideas as technology. Seven Samurai Rapid Expedition (abbreviated as rExpd) is a comprehensive pronouncement of all that might be considered as modern futurist longings, though not wholly distinct from those longings of the futurists of old. Indeed should it perhaps be considered as intrinsic to the concept itself to harness the spirit of the latter–its habit of energy and its racer’s stride, for instance1–in the advancement of the former. 1. Paraphrasing certain elements of the Manifesto del Futurismo. rExpd Extended Here, you will find the concept of Rapid Expedition extended to many realms of human life and profession. • ./fit // rExpd as it relates to physical form and function. • ./edu // rExpd as it relates to the pursuit and proliferation of knowledge. • ./pub // rExpd as it relates to the dissemination of information. • ./hack // rExpd as it relates to the exploitation of latent properties. • ./mk // rExpd as it relates to production and fabrication. • ./civ // rExpd as it relates to the stewardship of civilization. • ./ag // rExpd as it relates to cultivation and propagation. We are sometimes given to imaginings of our future which seem to resonate with those novel enterprises of our present and proximity, or else emanate from them. The perception of an enterprise’s relevance to our future enhances the enthusiasm around it, and that enthusiasm in turn reinforces its relevance. When this cycle persists against a backdrop of general progress, it is only because its relevance from the outset was essentially resilient or future-proof. Such resilience is elusive in a highly competitive context, but when an enterprise defines or, more precisely, redefines that context, it transcends the fray. This is what is often meant by the term, disruptive. But how does such an enterprise persist in a thoroughly disrupted context with its own disruptive potential sapped? Here we contend that for an enterprise to garnish the greatest potential resilience, and thus relevance for the future, it must become a means through which that potential is unlocked; it must become a platform upon which any enterprise might be expedited, and expedited to an ever greater extent by the very process of its expedition–expedited rapidly, so to speak. And so what precisely do we mean by expedition? as this is not a term today commonly used in this manner. If we survey its many definitions: • a naval expedition • a scientific expedition • an expedition across the Alps 6. The group of people making such excursion. … we might find that perhaps every definition holds some special relevance to a more romantic sense of the term, but it should also be apparent that our more technical use of the term most accurately references its more rare and obsolete and decidedly more literal definitions. Thus do we coin the term, Rapid Expedition. Man on the High Wire
global_01_local_1_shard_00001926_processed.jsonl/62584
What Major Historical Events Happened in Texas, Other Than the Alamo? Major historical events in Texas include the Texas Declaration of Independence, the annexation of Texas to the United States, the termination of slavery, the ratification of the Constitution, and oil drilling. The foundation for all these events was Alvar Nunez Cabeza de Vaca's first exploration of Texas in 1528. After the exploration of Texas in 1528, a permanent settlement was not established there until nearly 200 years later, in 1716. For the next century, Texas was torn between several different countries. It finally gained its independence in 1836 and became the Republic of Texas. For nine years, Texas was its own country and Sam Houston was its president. The next major historical event occurred in 1845, when Texas was annexed by the United States to become the 28th state. According to Angelo State University, the termination of slavery in Texas was announced on June 19, 1865. This day was termed Juneteenth. In 1876, Texas ratified the Constitution under which the state is still governed. In 1901, oil was found near Beaumont. Although the event seemed innocuous, it spurred people to start drilling for oil in eastern Texas. This resulted in the oil boom that catapulted Texas into the modern world and started Houston on its path to becoming a world-class city, according to Angelo State University.
global_01_local_1_shard_00001926_processed.jsonl/62590
Posted by Michael Evans Too much sugar is dangerous for your health! Excessive consumption can lead to obesity, diabetes, and heart disease. According to the Food and Drug Administration (FDA), sugar intake should be limited to just 50 grams per day (that's about 4 tablespoons or a little more than a can of Coke). On the other hand, the World Health Organisation (WHO) suggests that people limit themselves to no more than half that amount! Stay alert! The following are potential consequences of eating too much sugar: According to Anahad O'Connor, "Tooth decay occurs when the bacteria that line the teeth feed on simple sugars, creating acid that destroys enamel." Insatiable hunger Over-consumption of fructose can directly lead to higher-than-normal levels of leptin and reduce your body's sensitivity to the hormone (leptin is the hormone that tells your body when you've had enough to eat). Weight gain Sugary foods are full of calories but do little to satisfy hunger. Insulin resistance Insulin resistance can cause fatigue, hunger, brain fog, high blood pressure, midsection weight gain, and full-blown diabetes. Obesity is one of the most-cited risks of excess sugar consumption. In fact, consuming a can of soda each day could lead to 15 pounds of weight gain in a single year!  These are just 5 out of 15 potential consequences of excessive sugar intake. The remaining are: - Diabetes - Liver failure - Pancreatic cancer - Kidney Disease - High blood pressure - Heart disease - Addiction - Cognitive decline - Nutritional deficiencies - Gout Read the full article here to learn more: Leave a comment All blog comments are checked prior to publishing
global_01_local_1_shard_00001926_processed.jsonl/62606
The web has moved on, there are now better alternatives, so I'm no longer hosting this. I've left this article up mostly for fun.

Author: Ron Lancaster  Date: September, 2005

I wrote a short article for my company on using Ben Nolan's behaviour.js to apply behavior to nodes identified by CSS selectors. I'd been previously using Dean Edwards' CSSQuery classes to query my DOM. As a consequence of these two things, I decided to modify Ben's implementation to use CSSQuery.

ModifiedBehavior v1.0 by Ron Lancaster, based on Ben Nolan's Behaviour, June 2005 implementation. Modified to use Dean Edwards' CSS Query. Uses CSS selectors to apply JavaScript behaviors to enable unobtrusive JavaScript in HTML documents. Requires Dean Edwards' CSSQuery.

Usage:

    // Apply a behavior to every <b class="someclass"> in the document.
    Behavior.register("b.someclass", function(element) {
        element.onclick = function() {
            alert(this.innerHTML);
        };
    });

    // Apply a behavior only within a given root element.
    Behavior.register("#someid u", function(element) {
        element.onmouseover = function() {
            this.innerHTML = "BLAH!";
        };
    }, document.getElementById("parent"));

Call Behavior.apply() to re-apply the rules (if you update the DOM, etc).

Reproduced under BSD license. Same license as Ben Nolan's implementation.

More information for Ben Nolan's implementation:
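Since the source file itself is no longer hosted, here is a minimal sketch of how the API above could be implemented on top of cssQuery. This is not Ron Lancaster's original code; it assumes Dean Edwards' cssQuery(selector, root) function is loaded on the page and returns an array of matching elements.

    var Behavior = {
        rules: [],

        // Remember a (selector, handler, optional root) rule for later application.
        register: function(selector, handler, root) {
            this.rules.push({ selector: selector, handler: handler, root: root });
        },

        // Run every registered handler against the elements its selector currently matches.
        apply: function() {
            for (var i = 0; i < this.rules.length; i++) {
                var rule = this.rules[i];
                var elements = cssQuery(rule.selector, rule.root || document);
                for (var j = 0; j < elements.length; j++) {
                    rule.handler(elements[j]);
                }
            }
        }
    };

Hooking Behavior.apply() to window.onload gives the unobtrusive behavior described above, and calling it again after DOM updates re-applies the rules; a fuller implementation would also avoid re-binding handlers to elements that have already been processed.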
global_01_local_1_shard_00001926_processed.jsonl/62642
Positive pole massage This tantric massage london focuses on the chakras known as the “positive poles,” that work like the ends of a battery to send out and receive sexual energy. As you massage, you “charge” the poles, making you both feel sexy and alive.1 The man lies on his back. The woman massages his whole pelvic area, including his genitals (see intimate massage,  ).There is no need to Tantric massage toward orgasm; keep the energy you generate within his body.2 Tantric Massage... Energy fusion massage Tantra massage London understands that our bodies are bio-electrical systems, with each of us connected to one another, as part of our planet. We are designed to be able to “merge” energy with another person. You can experience this phenomenon through the massage london outlined here. As a result, you can both feel truly connected to each other.Energy fusion massage londonWaking your system Sit opposite your partner, with your palms touching and your fingers resting lightly on your... Custom Back Of Sey Girl Water Black White Chakra massage London When you place your fingers on each other’s chakras as shown here, you open the doorway to intuition and psychic connection between you. This may sound mysterious, but try it; many people have found that they feel closer and more in-tune with their partner afterward. You can then take this intimacy and connection into your erotic massage london.Taking turns to caress Take turns to touch and caress your partner’s body, for at least five minutes each time. The partner receiving the...
global_01_local_1_shard_00001926_processed.jsonl/62647
Kylie and Kendall Jenner Blast Instagram's Proposed Changes
They're not here for the new feed.
Getty Images
Last week, Instagram announced that they were planning on making a major change to the Instagram feed that you're used to. In a nutshell, the change involves implementing a new algorithm that's meant to prioritize Instagram posts from your favorite celebrities and your best friends so that you don't miss anything. Users immediately bristled at the idea of Instagram messing with their feeds, with the overall reaction being that they follow who they like and don't need Instagram curating their feeds for them. Other users felt the change would only help already popular accounts get even more likes. But in a major turn of events, celebrities who stand to gain from the new Instagram algorithm are speaking out against the change, most notably, Kylie and Kendall Jenner! Kylie fittingly took to Instagram to call out Instagram about messing with users' feeds. And even though Kylie Jenner didn't know what motivations Instagram had in changing their algorithm at first, she took to Twitter shortly after posting her initial grievances to suggest money might be involved. Kendall Jenner also shared an Instagram comment from a user who insisted that there's no way Instagram could accurately curate what interests a user most with an algorithm. Even though Kylie and Kendall probably wouldn't be affected all that much by the new Instagram algorithm because of their massive following and celebrity status, it's awesome that they're willing to stand up against this change that so many everyday Instagram users are against.
global_01_local_1_shard_00001926_processed.jsonl/62666
Richard Garriott’s DND #1 Contest! Richard Garriott created one of the very first computer based role playing games in 1977. The game was inspired by a bet between Richard and his father Owen. Richard had already written many smaller programs and simple games on the teletype he had access to since 1974. His early programs included calculating radio wave propagation in the ionosphere. Variations of this early “ray tracing” code he wrote won him numerous science fair competitions up through international competition. Still games were his obvious early passion. Richard’s father told Richard, that if he could create a whole working role playing game, that he would split the cost of an Apple ][ computer with him. The result was DND #1! DND #1 was created on a teletype at Clear Creek High School in Houston Texas, connected via an acoustic modem to a PDP 11 type mini-computer. Richard typed the game on a separate terminal onto paper tape spools, then read the tape strips into the terminal connected to the offsite computer, and ran the resulting program. The resulting program would play a simple Dungeons and Dragons like role-playing game. The player had a character that would explore a dungeon in search of treasure while fighting monsters along the way. The players could visualize the world as a top down scene drawn with ascii characters. Asterisks were walls, spaces were corridors and other letters had other meanings. In this way, even this 1st game closely resembles the “Tile Graphics” that Richard created and standardized through his later work.  DND1 Graphics Predicted simulated original output style Richard wrote 28 of these “DND” games in High school. He numbered them DND #1 through DND 28. When he finally had that Apple ][, he rewrote DND #28 to become DND 28b… also known as AKALABETH the precursor of all things Ultima! No one has seen this game run since the retirement of the teletype in 1979, which is when Richard made the final printout of the game. DND #1 represents one of the earliest known computer role playing games. Originally created and refined between the years 1975 – 1977, this game is one of the few true founding efforts of the entire computer gaming genre. Interestingly the ascii based “tile graphics” are a clear forerunner of what followed in Ultima and many other computer role playing games, and thus remains relevant to the genre’s history. Richard has been eager to see this simple BASIC program, resurrected in a modern usable form, but remaining true to the original in as many ways as possible. To achieve this end, Richard is offering a bounty of Shroud of the Avatar pledge rewards for the best reincarnations of DND #1! Starting April 15th 2014, just past one year into the development of Shroud of the Avatar, and running for 1 month through May 15th, Richard via Portalarium will be accepting submissions of DND1 Resurrections in each of two versions. Submissions may be a Unity Version, and or a no-plug-in Browser Version. Winners will be announced shortly after the submission deadline. Best Unity Version & Best no-plug in Browser Versions will receive a Citizen Level Pledge Reward worth approximately $550. 2 runners up in both categories will receive $165 Collector level pledge each. To be eligible, entries must: 1. Be a faithful recreation of the original, not embellished (e.g. No fancy graphics, stick with a traditional font on “yellow” paper.) 2. Be fully working and debugged versions as similar to the original as possible. 3. 
Be self-running, and not require any other installations that are not automatically requested during the running of the entry. 4. Add a “© 1977-2014 Richard Garriott” at the beginning. 5. May include an addition of “Ported by DEVELOPER NAME”. 6. Must agree that all rights to this port remain property of Richard Garriott Unity Version Entries must: Work inside unity, rendering to a texture, such that Portalarium can map it in Shroud of the Avatar onto an object or interface element. Unity versions earn bonus points for the more versions of Unity they support (web player, Windows, Mac, Linux, iOS, Android, PS3, etc.) No plug-in Browser Version Entries earn bonus points if run on: 4 major browsers (Chrome, Internet Explorer, Firefox, Safari), 3 operating systems (windows, mac & linux), and 2 mobile platforms (iOs & Android). All submissions must be received before May 15th! Submissions and questions should be sent to: We look forward to reviewing your submissions, and rewarding the winners! Discuss with the community! We introduced a subforum here and chat channel #DND1 (use /join #DND1) for discussions and everything related to this contest! 1. John J. Donna IIJohn J. Donna II Speaking of community, What would you guys think about setting up a forum/IRC for this project? I nearly have the BASIC code compiled into Linux/Win binaries, that would allow anyone to be able to play the game. Not all web/javascript programmers have experience building from source, so this could get them into the game I figure it might be nice to have some competition from some ambitious coders who might not have been patient enough to learn BASIC ;) obviously you would still need to learn some BASIC to get the algorithm right. But that way we can work together on some of the hard parts to make sure we get this port to be as close as we can If enough people are interested, I will put together a video of part 1, and setup the forum thread on the SOTA forums so we can work on this, and maybe an IRC chat -John :) 1. tphilipptphilipp Good idea, let me see, maybe we’ll just open a forum and IRC channel on SotA’s forum and chat…. will keep you posted! 1. John J. Donna IIJohn J. Donna II That would be fantastic if you could set that up for us :) Ill keep an eye on this thread, if we get it up soon I will post up the basic instructions on how to get the basic compiler and everything setup so that those from other coding backgrounds can take a stab at it. Really exciting to see what ideas come up 1. theArk0stheArk0s Nah…I think all the mis-spellings are part of the charm – it would be a shame to change them. That would be like fixing the grammar of “all your base are belong to us”! ;) 1. BelrindorBelrindor I am just now starting to learn about Unity. Oh if this contest only came six months from now instead of right now with a month out deadline… But no worries. :) This looks like a whole bunch of fun, and I’ll be very much looking forward to what people who are *already* competent in Unity do with this. :) 2. Sir Mike Dragon I love BASIC. The only programming language I truly understood. (wipes a nostalgic tear away). 3. corisco MGT470corisco MGT470 Does someone a good site for BASIC documentation, where i can learn the basic?? im not really used with procedural programming hehehhe… 4. Fenyx4 I’m looking over the code and I have one major concern. On line 210 – 264 it is loading from files named “DNG1”, “DNG2” … “DNG6” and “GMSTR”. Those don’t seem to be included anywhere. 
This is a flavor of BASIC I’m not familiar with/written 15+ years before I started programming so I may be missing something. 1. abovenyquistabovenyquist There appears to be a primitive “level editor” at 11000. Enter the dungeon number, and then a series of cartesian coordinate and tile type pairs. Enter a negative tile type to save. To get to it, there’s a hidden command of 12. Incidentally, I think the “Predicted simulated original output style” must be from some other version of DND. 5. dejayc It looks like line 09080 was printed on top of line 09070. I wonder if Richard could take a guess as to what each line was supposed to print? 6. algumacoisaqqalgumacoisaqq Anyone willing to transform the PDF into simple text? I mean, I know it is a competition, but that would be really useful for all people involved – we don’t even know how the program looks like. I think the easier method would be to create a BASIC interpreter in C# (it does not need the same syntax, only the same options) and place the original code on it – if we make an OO approach to it, I don’t think it will have the same feel. 7. RFlowers Don’t we need that data that makes up the dungeon? It looks like there is a routine to read it in… 8. Bones13 I played “Star Trek” on that kind of teletype in 75-77. Learned basic and fortran programming during junior and senior years of high school. We programmed blackjack and a few other simple games as well as accounting programs during that time. ( at least when we were not looking at long range scans and guessing target vectors). I played some games similar d&d games on early pcs, as well as adventure and Zork. I’m sure it would take me until May 15th to learn how to program in the modern day. I turned towards medical school as a sophomore in college and left engineering behind. Still wonder how life would have turned out different if I had gone into computer engineering, graduating in 1981. My youngest son is headed that way, he graduates next month with computing science and history degrees. 9. Bones13 Heh, at Strake Jesuit in Houston, Tx! We were probably logging on to the same library computer. 10. SanctimoniaSanctimonia Anyone know which version of BASIC this uses? I found the manual for PDP-11 BASIC here: Once we identify the language and find the correct reference manual, the next step would be to either OCR or manually transcribe the code from the PDF. OCR may be difficult due to the low resolution and lack of contrast in the PDF. Next up would be finding an emulator to a) run the code and b) begin debugging it. I noticed on line 2417, for example, that the word “PRIT” is used instead of “PRINT”. Not good unless PRIT is an actual BASIC statement I’m not aware of, of course. More insidious will be any logic errors that don’t halt execution but only affect functionality. After it’s running and debugged in an emulator the code will be at a point where it can be ported. I’m guessing a good next step after that would be identifying and renaming variables for clarity and chopping everything up into procedures based on the occurrence of GOTO’s and GOSUB’s. While difficult, I think the task is possible, but without starting from a solid foundation there will be much weeping and gnashing of teeth. 1. dejayc An earlier comment here has a link to a PDF with transcribed text you can copy and paste. And interestingly enough, I’m working on an online basic emulator to run this program too :) 11. disastorm Isn’t render to texture only available on Unity Pro? 
Can we just make it in Unity Free, it should easily be modifiable to set the Camera to Render to Texture? 12. pinzasso Can someone who has abbyy reader or acrobat post the text. Might tool around with it in my spare time. 13. WombyWomby I understand the point that you are making. That is exactly how I feel when someone suggests a competition to “Write a story that could be used as an in-game book of fiction”, since I have zero talent in that area. The point is we are all different, and while I think it would be great if we could all get a chance to contribute in our own way, I don’t think its possible to do that in a single competition. I am sure that there will be other competitions that draw on the many different skills and talents in the community, so that over time we all get a chance to participate. 14. Minakta MoriyaMinakta Moriya Definitely tempted to give it a go, but this seems to be a dialect from before my time. So what does the statement: FILE #1=”DNG1″ do (and is that a hashtag?)? Is it correct to assume that this is just file I/O and the statements: FILE #1=”DNG1″ RESTORE #1 DATA “SWORD”,10,”2-H-SWORD”,15,”DAGGER”,3,”MACE”,5 would just write that string into it? 15. KoldarKoldar Finally, a coding contest. I wish I had more free time to have a go at it. Best of luck to everyone competing! 16. Fosco What is this ‘fair’ you speak of? If everything had to be fair, there would be nothing. If you don’t have the skills to participate, then it’s not for you, but you know what else isn’t for you? Complaining about it. 17. Midgard TheDragonKingMidgard TheDragonKing Someone pledge Duke right now push us over 4mill fly to Austin and I’ll buy you all the beer you want while we party with portalarium!!!! <- this deserves some kind of MVP post like SC :) 18. Tipa Currently the dungeons look nothing like the illustration, they are digits from 1-9. Only the player named “SHAVS” can play the game, everyone else is stopped. Asking for instructions either quits the game or jumps into the middle of a loop and crashes. Are we supposed to preserve all these things? Since there is a competitive element to this, I can’t really show you my code or anything… but just wondering because, as the game is now in the included source, it’s just barely playable. You can’t hit a creature until it hits you first, it looks like, for instance. 1. Tipa Never mind, the monster logic is different than I said. Question remains — an exact port, including the weird stuff? 19. WombyWomby Looking at the listing I have noticed that (probably due to the acoustic modem connection) occasionally the odd letter is corrupted. For example: 04870 GOCUB 08410 There are others, both before and after that one. Another example is 04800 LET B(K,3)=B(K,3)-INT(C(1)*4/5I which I think should be Something to watch out for. 20. buricco If I were to program a port of this, most likely I’d either do it in, well, BASIC, or straight-up C, which could get the feel of this perfectly. 1. Dame LoriDame Lori Contests aren’t equal. Those are called “raffles.” Contests are competitions involving a specific skill, and the winner will be whoever has taken the time to hone that skill. You are welcome to participate, and it is free to participate – so it is freely open to all. If you do not have as much skill in this area as others, you likely won’t win. Same with a contest involving writing, photoshop, painting, sculpting, level design, weightlifting, sports, hot-dog eating, music, racing, horseback riding… 1. 
TheLarperTheLarper like I said not equal to all most will be left out due to lack of knowledge. love how people chime in for people leaving feedback. prizes for this game that are given out should be avail to everyone equal. not based on 10 percent that can do a task. a pro tip for you too raffles are not equal you can buy or trick for better odds. like the raffles during pledge week probably thousands of gmail accounts opened. 21. CageCage My name’s not Stavs so I can’t play… Brings back so many memories from that ‘age’ for me although I was using a very similar basic on a Tandy TRS80 with all the expansions and 48kb of memory back in 1977 and the Tandy allowed one to use colored blocks which were so much more immersive than ascii characters, honest ! Shame I don’t have time to port this over to Unity in time for the competition close but I will do it later in the year for sure just for the nostalgia. Thanks for the coding. 22. MaeseDude This is a cool idea – but a stupid execution! SInce no one has ever seen the game running, how are we supposed to understand what the final re-creation should be like? Do you expect people to read sourcecode from an age-old, unknown BASIC dialect and then execute it in their head to understand what it does exactly? Oh, and as mentioned by many others already, there are lots of files missing, that were used by the game. Obviously dungeon-data-files. So, while I, being a programmer myself, would love to join in on this, I see it as a big waste of time as long as Richard doesn’t take some effort to at least provide the code in electronic form, and with all the missing data files. 1. abovenyquistabovenyquist Uhm… that’s exactly what I did, actually. And many others. 23. Sheltem Hello fellow backers. Happy easter and greetings from Germany. The files “DNG1” to “DNG6” and “GMSTR” are not needed in there original form. They can easily be replaced by your own files when you understood the published source code. I fixed some typos and corrupt goto statements. And btw. in my understanding the lines 09070 and 09080 should be: 09070 LET X2(M)=0 09080 GO TO 07000 I am not totally sure, but I think line 05740 needs also some reconsideration. Up to now I am just missing the CLK statement from line 00160, which seems not to be very important as J9 is not used in the following lines anymore. And the FILE command (to read in the dungeons), which I replaced by some other command but unfortunatelly didnt work also. I will do some more research on this issue later. Have a nice easter weekend :) 1. Sheltem YEAH finally I got it running! I used bywater (an ordinary basic interpreter), which is available in the software center of Ubuntu 14.4. I just replaced some statements. 24. Gracekain He doesn’t mean “fair” as in others have more knowledge or skill than other, it is just this is the second competition where only Unity skilled users have a chance to win great prizes. What about the rest of the community? I just feel that this is more about Unity than SOTA. 25. Fox CunningFox Cunning I was wondering… Debugged as in debug our own code, or as in debug the original code? 1. TachysTachys Probably both, as there are several points where necessary syntax is either wrong, or missing altogether, probably because of a typo (PRIT comes to mind…) 26. LeondegranceLeondegrance You don’t have to use Unity for this contest. You can use a web browser which is the route I am taking. Not that I need the prize I just like the challenge. 
This old school LB code is rattlin’ the beans around in ma brain pan. 27. PeteWiPeteWi 28. ShroudJosh What does the BASE keyword do, form the 4th line? That PDP manual doesn’t seem to mention it, and it’s not anywhere else in the code. Thanks 1. dejayc It sets the starting index number for an array. Thus, with BASE 0, arrays start at offset 0. Similarly, with BASE 1, arrays start at offset 1. 29. Sorwen This does read as if it is from at least two different sources since it shouldn’t have been able to run with the number of incorrect goto line numbers. It is easy to understand and fix, but it seems really strange. 1. TachysTachys I would say it is probable that they intentionally posted an ‘early draft’ of the program, prior to debugging, since debugging the code is part of the conditions for the contest… just an insight. 1. abovenyquistabovenyquist I strongly doubt that was intentional. Based on Richard’s comments in twitter, I don’t think he has much detailed recollection of what was or wasn’t exactly in this particular one; remember he wrote 27 more! I suspect he found it in one of his many rooms of ancient treasures and thought “hey, this might be fun.” I suspect some of the errors are modem line noise corruption. 30. sarabellum Since the port is to be fully working, yet also as similar to the original as possible, would it be possible to get some clarifications? 1) Should the ported game output (cmd 6) be like the current source or like the PSOOS? 2) Should bugs in the current game be maintained or fixed in the port? 3) Should the ported game end on trick input (player name for example)? 4) Should the ported game support loading and saving? 5) Should the ported game support the “hidden” menu items? At a more abstract level, I assume the port should maintain the tele-type interface. Is this correct? 31. Taihennami I’ve taken a look at this and started converting it to C (for a UNIX environment, not Unity or the Web). So far I’ve got about half way down the listing, so I have the setup code, player movement and player attacks, but not enemy movement or magic use yet. I have also stubbed out all of the file access routines for the moment, since it is clear that modern computers have a very different idea of data formatting than is implied here. The unusual minicomputer dialect of BASIC makes the conversion challenging – unlike some of the microcomputer dialects, it has a relatively full feature set, yet it still lacks all of the enhancements introduced by BBC BASIC, and accurate documentation is so far elusive. Fortunately it doesn’t seem to matter, since practically all of the features actually used are standard BASIC. One thing to note is that although almost all of the constants used are integers, the variables are clearly floating-point capable, and are used as such – at least for hitpoint counting. One difference between BASIC and newer languages is that the division operator always has a floating-point result in BASIC (if the dialect is not limited to integers), whereas in C it only has a floating-point result if at least one of the arguments was floating-point. This is significant when using fractions as constants or multipliers. There are some typos – whether by the original author or as a result of the worn-out hardware printing it out, is not always clear. The schoolboy spelling and grammar mistakes in the text strings are the most obvious, of course. 
I am liberally reworking the text to make it somewhat more professional in style, but not changing the meaning or experience too much. There are also some fairly obvious logic bugs, which were presumably corrected in later versions of the game – I can’t imagine them going unnoticed for long. One of these appears to render the Mace weapon completely useless; another involves potentially selecting the wrong monster for range computations; there are also several weapons for which a critical hit delivers less damage than a normal one. In my conversion, I have fixed the former two bugs, but left an obvious way to put them back in for a more authentic version. Finally, as previously noted, the map output is in a different format than described in the article, being in the form of numbers rather than symbols, and thus somewhat harder to read. The symbolic map must have been a feature of a later version of the game. 1. slashslash I’m keeping all the schoolboy spelling/types and the raw unfriendliness with the user because quite frankly that’s what gives value to this particular piece of code. :) Comments are closed.
global_01_local_1_shard_00001926_processed.jsonl/62667
global_01_local_1_shard_00001926_processed.jsonl/62715
This StatSilk software license agreement (“Agreement”) covers the products and services You license from StatSilk. By downloading, installing or using StatSilk products or services, You consent to the terms and conditions of this agreement on behalf of Yourself and the company on whose behalf You will use the StatSilk products and services provided under this Agreement. StatSilk reserves the right to update and change this Agreement from time to time. The version of the Agreement that applies to You is the version that You agreed to when You first started using the software or, in the case of StatPlanet Cloud, when You last renewed Your initial subscription to the software (if applicable).

“License” shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 6 of this Agreement.

“Licensor” shall mean StatSilk, the licensor and copyright owner of the software that is granting the License.

“Software” shall mean the StatSilk group of software, including StatPlanet (formerly known as “StatPlanet Plus”), StatPlanet Cloud, StatTrends and StatWorld.

“Non-profit Entity” shall mean any individual, education institution (including schools, colleges, universities), charity or non-profit organization which is not operating for profit or monetary gain, and meets the non-profit eligibility criteria.

“Commercial Entity” shall mean any individual or organization which operates for profit or monetary gain.

"Developer" shall mean the person who manages and inputs the data, the customization settings which are set in the data and settings files, and the maps. The developer may also use the software for testing purposes. The developer is generally the person working with the StatPlanet Data Editor or Data Manager, although other tools may also be used.

"User" shall mean the person who uses the software for its intended purpose, such as for data analysis and monitoring. The developer can be considered a user, if the developer also uses the software for purposes other than for managing and testing the software as described above.

2.1. StatPlanet Cloud and the web versions of StatPlanet and StatTrends require a license for use by both Commercial and Non-profit Entities, and may only be used on the web-domain(s) to which it is licensed. Access to and use of StatPlanet Cloud is limited to Your paid subscription term and will terminate when Your paid subscription period expires.

2.2. The desktop versions of StatPlanet and StatTrends require a license to be used by Commercial Entities, and may only be used by the Commercial Entity and/or employees of the Commercial Entity which purchased the license, as follows:

2.2.1. A single commercial license for the desktop version can be used by up to three concurrent Developers or Users.

2.2.2. The software can be used on up to three computers concurrently.

2.2.3. The software can be transferred from one computer to another without restriction, as long as the number of computers it is used on does not exceed the number indicated in Section 2.2.2. above.

2.2.4. If the software is transferred to a new computer, it needs to be removed from the computer on which it is no longer being used (the files can be deleted directly rather than uninstalled, as there are no files associated with the software other than those in the main software directory).

2.2.5. The software may not be distributed to, or used by, customers or clients of the Commercial Entity which purchased the software. It may only be presented to customers or clients by employees of the Commercial Entity which purchased the license.

2.3. Your rights to the Software set forth in Section 2.2 shall automatically expire if at any time You are no longer employed by the Commercial Entity holding a license to the software.

2.4. The rights and restrictions described in 2.2 above do not apply to Non-profit Entities. A Non-profit Entity may freely use, publicly display and publicly perform the non-commercial desktop versions of StatPlanet and StatTrends. The non-commercial desktop versions of StatPlanet and StatTrends may also be distributed without permission to Non-profit Entities.

2.5. You may not: (a) reverse engineer, modify, adapt, decompile or disassemble the Software, except and only to the extent that applicable law expressly permits, despite this limitation; (b) sell or lease the Software; (c) remove, obscure or alter any product identification or copyright notices on the Software.

This Software is protected by copyright and other intellectual property laws. You do not acquire any rights, express or implied, in the Software other than those specified in this License.

4.1. You may terminate this Agreement and Your subscription to StatPlanet Cloud at any time by providing written notice to StatSilk. However, there are no refunds of subscription fees paid.

4.2. If Your subscription term for StatPlanet Cloud expires as provided in Section 2.1, StatSilk will terminate Your access to and use of StatPlanet Cloud.

Unless required by applicable law or agreed to in writing, Licensor provides the Software on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Software and assume any risks associated with Your exercise of permissions under this License.

In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts), shall the Licensor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Software (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the Licensor has been advised of the possibility of such damages.

Copyright © 2008-2016, StatSilk

Please use the attribution below in the following cases:

• When embedding the free version of any StatSilk software in a web page or publication.

• When using images (maps, graphs, charts) produced using StatSilk software, including exported images from the StatSilk website. All exported map and graph images are licensed under a Creative Commons Attribution 3.0 Unported License. Attribution is not required for images exported from a purchased, licensed version of StatSilk software.

Attribution for StatPlanet Cloud and derivatives: StatSilk (2018). StatPlanet Cloud: Interactive Data Visualization and Mapping Software.

Attribution for StatPlanet Desktop and derivatives: StatSilk (2018). StatPlanet Desktop: Interactive Data Visualization and Mapping Software.

Attribution for StatWorld and derivatives: StatSilk (2018). StatWorld: Interactive Maps of World Stats.

Attribution for StatTrends and derivatives: StatSilk (2018). StatTrends: Interactive Data Visualization Software.
global_01_local_1_shard_00001926_processed.jsonl/62724
Green plants absorb light energy using chlorophyll in their leaves. They use it to react carbon dioxide with water to make a sugar called glucose. The glucose is used in respiration, or converted into starch and stored. Oxygen is produced as a by-product. This process is called photosynthesis. Temperature, carbon dioxide concentration and light intensity are factors that can limit the rate of photosynthesis.

Photosynthesis summary

Photosynthesis is the chemical change which happens in the leaves of green plants. It uses light energy to convert carbon dioxide and water into glucose, with oxygen produced as a by-product. It is the first step towards making food - not just for plants but ultimately every animal on the planet.

During photosynthesis:

* Light energy is absorbed by chlorophyll, a green substance found in chloroplasts in some plant cells and algae
* Absorbed light energy is used to convert carbon dioxide (from the air) and water (from the soil) into a sugar called glucose
* Oxygen is released as a by-product

This equation summarises what happens in photosynthesis:

carbon dioxide + water → glucose + oxygen (using light energy absorbed by chlorophyll)

Some glucose is used for respiration, the chemical change that takes place inside living cells and uses glucose and oxygen to produce the energy organisms need to live, with carbon dioxide as a by-product. Some glucose is converted into insoluble starch, a type of carbohydrate, for storage. The stored starch can later be turned back into glucose and used in respiration.

Factors limiting photosynthesis

Three factors can limit the speed of photosynthesis: light intensity, carbon dioxide concentration and temperature.

* Light intensity

Without enough light, a plant cannot photosynthesise very...
global_01_local_1_shard_00001926_processed.jsonl/62756
Although the design of websites has come a long way since the birth of the Internet, it's just not enough sometimes. You get bored of seeing the same fonts, day after day, week after week. That's not true with Google Chrome, as the hugely popular browser has the option of changing the website font to one that you like. How? Well, read on to find out.

See also: How to turn off GPU hardware acceleration in Google Chrome

Step 1: Click the Chrome Menu in the top right hand corner of the Google Chrome browser and select “Settings”.

Step 2: Scroll down and click “Show Advanced Settings”. Then scroll down until you see the “Web Content” heading. Once you've found the Web Content menu, click “Customize fonts…”

Step 3: Change your font! The preview on the right hand side of each font selection will update to reflect your newly chosen font. Take note: if your default encoding doesn't match the one needed by a website, the text may not display properly – so don't go too overboard with “exotic” fonts.

Step 4: Click Done and your settings will be saved.

See also: Which is the best browser for Windows?

The font I've chosen isn't displaying properly. What can I do?

If you do notice squares appearing in web pages where there should be text, one option is to substitute the web page's encoding – it's a lot easier than it sounds, don't worry!

Step 1: Click the Google Chrome options menu in the top right hand corner of the browser.

Step 2: Select “More Tools” then select “Encoding”. From here you can do one of two things, with the first being to manually select a different encoding, which can be confusing if you don't understand encoding in general. The second option, which most people should do, is to select “Auto-detect”. If you select Auto-detect, Google Chrome will analyse the content on the page and recommend the best encoding choice. Once selected, your web page should reload with text instead of squares. Make sure that once you're done viewing that web page, you change your encoding back to Unicode, as Auto-detect can slow down your browsing experience.

See also: Best Chromebooks of 2015
global_01_local_1_shard_00001926_processed.jsonl/62759
Dawnguard's PS3 Roller-Coaster Continues, Sony Aiding Bethesda
by Ron Duwell | September 10, 2012 7:00 pm PST

The storm swirling around The Elder Scrolls V: Skyrim's DLC continues to spin. Recently, Skyrim's developer, Bethesda, claimed that they had been unable to fix the problem despite diligent attempts, and they "weren't positive if they could solve the issues." In response, a Sony representative at the New York Gaming Conference recently spoke with Kotaku, saying that they are working closely with Bethesda with a "big, broad support team."

So, still no promises on the progression of Dawnguard for the PlayStation 3, but Sony seems genuinely eager to make sure that this gets fixed as soon as possible. Or at least, they want to avoid negative press regarding a hit game heading into the holiday season.

Dawnguard is the first expansion for Bethesda's popular RPG The Elder Scrolls V: Skyrim. The DLC is available on the Xbox 360 and PC for 1,600 MS points or $19.99 respectively.

[via Kotaku]
global_01_local_1_shard_00001926_processed.jsonl/62766
By: Matthew Shuirman Fin-ish him! Fin-ish him!The hymn of two hundred bloodthirsty teens echoed throughout the abandoned rock quarry and up into the night sky. “Fin-ish him! Fin-ish him!Gavin and Ayn, leaning against the side of Gavin’s Jeep Wrangler, silently watched the fight from up on the fourth tier. “Fin-ish him! Fin-ish him! The teenagers had turned the rock quarry into a coliseum. The four ten-foot tall steps cut into the rock served as their oversized stadium seats, and the bottom of the quarry had become the arena floor. Someone with fire abilities had lined the rock walls of the bottom of the quarry with flames, which filled the entire arena with a shifting orange glow. Fin-ish him! Fin-ish him! Fin-ish him! The third fight of the night was nearing a close. The brawl had been between Blake ‘The Hulk’ Bennet and Harry ‘Snowball’ Sheer. Harry put up a good fight, but in the end his ice powers proved no match for Blake’s super strength. Blake had Harry pinned against a boulder, and everyone knew Harry’s end was near. Blake’s hand was wrapped around Harry’s throat. He raised his other hand in the air and motioned for the crowd to make more noise. They obliged, chanting “Fin-ish him! Fin-ish him!with redoubled enthusiasm. Satisfied with the crowd’s support, Blake palmed Harry’s head and smashed it into the boulder with all his super strength. Harry’s head exploded like a squashed watermelon. The teenagers roared with delight. Blake let Harry’s body slump to the ground. He beat his fists against his bare, blood-splattered chest as the crowd showered him with praise. Throwing his head back, he howled at the moon. Ayn trembled inside Gavin’s oversized sweatshirt. She closed her eyes in a vain attempt to shut the gory scene out of her mind. Gavin elbowed the side of her arm. “You’re up next,” he teased. That marks the third win for The Hulk,Lazarus announced over his bullhorn. “The fourth challenger of the night will be Ayn Williams, with the power of super speed. The next fight will begin in ten minutes. Ayn, please make your way down to the Preparation Table on the second tier. Gavin put two fingers on Ayn’s chin and turned her head towards him. Ayn opened her eyes. “Hey,” Gavin said with an encouraging smile. “Hey, why the long face? You’ve got this, Ayn. You can literally run circles around this guy.” He’s...” Ayn got choked up, and the words caught in her throat. She looked down at the arena floor. Blake was still celebrating his victory. “He’s going to crush me. You spent all your savings to pay for my entry fee. If I...If I don’t win this, we’ll never make enough money to get out of this town.” Gavin stroked her cheek. “Look at me. You will win. And if you don’t, we’ll find a way. We always do.” Ayn took off Gavin’s sweatshirt and handed it to him, exposing her arms to the cold air. The chill was invigorating. “I should probably head down to the Table.” You’ve got this, Ayn. I believe in you.” The Preparation Table was a folding plastic table covered with an assortment of melee weapons. It had everything from knives to a mace to even a samurai sword. Where the hell did Lazarus get all this shit? Ayn wondered. There were so many weapons that they were threatening to spill off the table. Here,” said Lazarus’s servant. He was holding out a dark green Kryptonite tablet and a paper cup of water. “Take these.” Ayn took them. “Have you ever taken Kryptonite before?” Once,” Ayn said. She popped the tablet in her mouth and downed it with the water. Not a fun experience. 
Well, I’ll give you the rundown anyways. Your powers will kick in after only thirty seconds. You’ll feel a little lightheaded in the beginning. The new tablets also have anesthetics laced in...” Do you ever wonder why they call it Kryptonite?” Ayn blabbed. The words came tumbling off her tongue unbidden. “I mean, I get that it’s from a comic book, and that it’s dark green and all that, but Kryptonite takes away Superman’s powers right? So when you think about it that way it’s a pretty shitty name for a drug that gives you super pow...” The servant cleared his throat. Ayn felt her cheeks flush. “Sorry. I start talking uncontrollably when I...when I’m nervous.” Pick a weapon.” After thinking through her options, Ayn chose a kitchen knife. It wasn’t the most menacing of the weapons on the table, not by a long shot, but she figured it would work well for her. The smaller the weapon, the more easily she could jab her opponent and then dash out of his reach. The servant gestured with an open hand towards the ramp that lead down to the quarry floor. “The Quarry awaits you.” The ramp, which was right next to the Preparation Table, was thirty feet long. Ayn’s legs took one step after another, descending. She felt the heat emanating from the unquenchable fires burning along the walls of the quarry floor. Entering the Quarry floor now is Ayn ‘The Bullet’ Williams, who will be facing off against the undefeated Hulk. Everybody, give it up for The Bullet! The crowd hollered and clapped for Ayn. The echoing applause crashed down on her, making her feel small. She was all too aware of the four hundred eyes watching her every move. Her head spun. Blake glared at her, beating one fist against his bare chest like a war drum. Girl!” he said, pointing at Ayn like Babe Ruth calling his shot. “I’m going to tear you limb from limb from limb from limb!” Ayn's legs kept moving, dragging her towards the middle of the quarry. They finally stopped once she stood about thirty feet away from Blake. She tightened her grip around the handle of her knife. Her heart was threatening to beat out of her chest. What if my powers never kick in?! What if... Let the fight...BEGIN! Ayn and Blake watched each other closely, each waiting for the other to make a move. Then in one continuous motion, Blake scooped up a bowling-ball sized rock off the ground with one hand and hurled it at Ayn. She barely had time to lean out of the way as the rock soared by her and dissipated into dust against the wall behind her. Blake picked up another rock. Just as it left his hand, Ayn ran a few feet to the right. At least, she intended it to only be a few feet. But she felt a rush of wind in her hair, and next thing she knew, she was a whole ten feet away from where she’d been standing! The rock flew through the empty air where her head would have been and smashed against the back wall. The crowd cheered. I guess my powers kicked in. Blake was breathing rabidly, his chest heaving up and down. “Stop playing games,” he growled. Ayn didn’t budge. Her feet remained planted in the dirt, paralyzed by fear. After a few moments, Blake grew impatient and bolted at her. She sped away reflexively, and he barreled through the empty space where she’d been. He reeled to face her, face red, and charged again. Ayn blurred away from him with ease. The Hulk’s face writhed with anger. “Coward!” he spat before rushing at her for a third time. Ayn remembered the kitchen knife in her hands. 
This time as Blake came at her, she only moved a few feet out of his path, holding the knife out as he ran by. He barreled past her, making a grunt as he ran by. Blake stopped and looked down at the new gash on his thigh. Blood drooled down his leg. He touched the wound with his fingers and, making eye contact with Ayn, he brought his fingertips up to his mouth and sucked the blood off. A chill ran down Ayn’s spine. Come and get me,” Blake said, making ‘come here’ motions with his hands. Ayn took a long breath out, her cheeks ballooning. “It’s alright,” she muttered to herself. She bounced anxiously on her toes. “You can do this, Ayn. You already cut him once, you can do it again." Before Ayn had time to talk herself out of it, she took off. The world around her blurred into lines of light, like the Millennium Falcon jumping into hyperspace. She held her knife up in front of her and hoped for the best. Then, just before she sunk the knife into Blake’s chest, something smacked her hard on the side of her left arm, forceful enough to knock her off her feet. Blake had backhanded her, she realized as she flew through the air. A car going seventy couldn’t have sent her sailing as far as his super-strengthed slap. Oooh!” the crowd gasped in horrified delight. Ayn's left shoulder crashed into the flame-covered wall. She lost a few second to blackout, and the next thing she was lying on the dirt. Something - many things - on the right side of her body were broken. A tall tongue of flame had sprouted on her shoulder. She beat it out with her left hand, and even that small movement sent waves of pain rippling through her midsection. She winced. Luckily for her, the new Kryptonite tablets doubled as anesthetics. Still, this was more pain than she had ever experienced in her life. The knife, she remembered. It was a few feet away from her, she saw. Her desire for self-preservation propelled her to pick it up. Pain shot through the right half of her body every time she inched her right elbow forward. When she got to the knife, she picked it up in her left hand instead of her right. Her right was her dominant, but that entire side of her body was useless now. The Hulk! The Hulk! The Hulk!” As Blake basked in praise, Ayn struggled to her feet. She felt like fainting, like vomiting. Like quitting. I could dash out of here. Run to Gavin’s arms. The thought was so enticing... Gavin had put up the money for her entry fee. He was counting on her to win so they could get enough money to finally leave this godforsaken town. No. She wouldn’t let him down. She ran at Blake again. But her injuries made her slower than before. When she got within arms reach of Blake, he reeled back and smacked her again. This time he hit her on her right side, her broken side. The pain drove Ayn unconscious. When she woke up on the ground, she couldn’t tell how much time had passed. A second? A minute? A day? She didn’t know or care. The pain made everything seem distant, false, dreamlike. Blake was marching towards her. That was important. Why was that important, again? Somehow, she rose to her feet. Blake was strutting towards her. He took his damn time. Showing off for the crowd. Ayn had to kill him. She couldn’t remember why, but she did. She rushed at him for the last time. Her legs felt sluggish, even though they moved at sonic speeds. Blake reared back to smack her again. But before she got within his reach, she threw on the brakes. 
Like a hockey player stopping on ice, she turned her feet sideways and let her shoes grind her to a halt. Blake, not expecting her to slow down, swung his arm, whiffing at the air. He swung so hard that he lost his balance. Ayn stepped forward and punched the kitchen into his gut. Blake shoved her to the dirt, but by then the damage had been done. He stumbled back, looking at the handle of the knife sticking out of his gut in wonder. Blood gushed from the wound. The crowd was still. Then Blake ripped the blade out, looked at it, and tossed it over his shoulder casually. Fin-ish her! Fin-ish her!” Ayn’s hands scrambled around in the dirt, looking for anything. Her fingers found a small, sharp rock, and she clutched onto it tightly. Fin-ish her! Fin-ish her!" Blake squatted over her. A wide grin stretched across his face. Ayn swung her arm, trying to strike him in the temple with the rock. He caught her hand, then squeezed until she yelped. Her hand loosened, and the rock fell to the ground. Fin-ish her! Fin-ish her!” Blake raised his other hand up to the heavens, where it lingered among the stars. Then he plunged it into her chest. When it came out again, it held her still beating heart. The crowd went wild. Ayn looked down at the gaping hole in her chest. Then came darkness. She awoke staring at the stars. There was something soft under her instead of rock. The crowd was irritatingly loud - another fight had already begun. A quick look around told her that she wasn’t on the quarry floor anymore. She was sitting on the second tier, on top of a picnic blanket. Blake sat off to her left, watching the fight. All his wounds had been healed. Kneeling before Ayn was a boy her age, with long blonde hair and an encouraging smile. How did I get here? Who is he? The images flashed back to her, fragments of a forgotten dream. Blake, standing over her. Her heart, ripped out of her body...She panicked and clutched at her breast, but found that all was back to normal. She could feel her heart racing, which was a good thing. The only thing out of place was her clothes - she had some guy’s T-shirt on instead of her white tank top. Don’t worry,” said the boy with the blonde hair. His voice was soothing, like water to a parched throat. “I brought you back from the other side. Everything is going to be alright.”’re Lazarus,” Ayn said. It was a statement more than a question. All of the memories came flooding back. She’d expected this. She knew she’d wake up after the fight with Lazarus standing over her, even if she died. Gavin had told her that Lazarus could bring people back to life. Resurrection,” she said. "That’s your Kryptonite power. You can resurrect people.” Lazarus nodded slowly. “You fought well. Rest up - I’ve entered you in another fight tomorrow night." Matthew Shuirman is a 17 year old currently attending Faith Lutheran High School in Las Vegas, Nevada.
global_01_local_1_shard_00001926_processed.jsonl/62786
Are You Abusing Your Prescription Medication?

Prescription painkiller abuse has been increasing dramatically over the years. Overdoses of prescription painkillers are among the most lethal and common drug-related accidents, and unfortunately, many people who abuse these medications are obtaining them both illegally and legally. Because so many doctors and clinics are relatively quick to prescribe them (sometimes these places are referred to as "pill mills"), people who get them legally can go on to sell them to others.

The problem with prescription painkillers has been called an epidemic, and rightfully so. One of the reasons a problem like this has been able to take such a strong hold on otherwise non-addict citizens is that prescription medication can provide the addict with the illusion that he or she is simply taking medicine rather than abusing a hard drug. This phenomenon is even more nuanced when the medication has been prescribed directly to the patient. Sometimes doctors prescribe pills that are stronger or taken at a higher frequency than necessary, which can lead to addiction. Even when this isn't the case, many who are in pain up their dosage on their own because they view the increase as a harmless medical decision. But prescription painkillers are not harmless. And when they're overdone, they can be life-threatening.

So how can you tell the difference between a person who is taking their medication for legitimate pain with healing and health in mind vs. a person who is abusing prescription painkillers? Most people who abuse prescription painkillers will tell others, and themselves, that they are taking them because of pain. Whether or not this pain is real is only one aspect to consider when trying to figure out if someone is abusing this kind of medication or using it properly. If you think that someone you know might actually be abusing their prescription medication, here are some things to think about:

• Is the medication prescribed? If not, where are they getting the medicine? Most people with serious pain should not have an issue getting prescription medication from a doctor who can oversee their pain treatment plan.

• How frequently are they taking the medication and what is the dosage? Is the person taking the pills every few hours or are they actively trying to space them out as much as possible? Are they taking a low dosage or a high dosage?

• How does the person act when they don't have medicine? If they forget their medicine at home or can't get a refill on their prescription in time, how do they act? Their behavior during this time is usually a telling sign as to whether or not they are an addict. Those who are addicted might also find that they need to continually increase their dosage.

• Does the person try to go to different doctors because of an issue they are having with getting their prescription written by their original pain doctor? If this is the case, they might be "doctor shopping," perhaps without even knowing it.

• How long has the person been taking painkillers? Prescription painkillers are usually not an ideal way to manage chronic pain. They're much more effective for acute pain, which should pass in a matter of weeks in most cases. If the person you know is still taking this medication many months or even years later, it's possible that they're addicted.
global_01_local_1_shard_00001926_processed.jsonl/62794
Data Science - The Marketing Technologist | The marketing technology blog
https://www.themarketingtechnologist.co/

AI meets graphic design
https://www.themarketingtechnologist.co/ai-meets-graphic-design-part-1-3/
Wed, 02 May 2018 10:02:00 GMT

Over the last decade, the transition from TV towards digital has caused the advertising market to shift from offline to online as well. Retailers are required to follow this trend along with the consumer in order to catch – and keep – their attention. This means designing extra content, like banners, to be placed on websites and social media so that potential consumers actually start seeing them again. This increased demand has caused creative agencies to be too short-handed to keep up with retailers' requests. As a result, creative agencies developed a need for scalable expansion of their business. So how do we tackle this problem? In marketing, Artificial Intelligence is a buzzword that seems to be the answer to every problem. Marketing automation has become a real thing over the last couple of years, with the rise of chatbots and personalized advertising. However, it still has to prove itself on the creative side of online advertising - and that is what I will discuss with you today.

In the innovation lab of Greenhouse Group (Labs), we are researching the automated creation of online advertising banners. One particular task is addressed in this research: the design of product compositions at the Creative Hub. Designers at the Hub consider this specific task to be repetitive and tedious work – suggesting it's both useful and feasible to automate this! But before we can dive into how we could automate such a creative process, we first have to understand what it is that we are planning on automating exactly.

Dissecting the design process

The research started by literally looking over the shoulder of a designer who performed the creation of a product composition. By asking why she chose to take the steps she did, we found out that a lot of the work is done based on a gut feeling which was developed after years of experience. For example, the elements used for the composition all have the same size; proper rescaling and knowing the real size of a product is an essential step in the process for the final success of the resulting composition. Creating the composition also requires some trial and error to come up with something that is visually appealing. To do that, designers take into account contrast, hierarchy, visual weight, and color compatibility.

[Image: Example of a simple composition]

What about GANs?

Would it be possible to teach a model to do all these things for us, and if so, how? Deep Learning has shown promising results lately in the field of creation. Invented in 2014, the Generative Adversarial Network (GAN) structure by Ian Goodfellow is mainly used for artificial image creation nowadays. In 2017, the visual computer technology company NVIDIA created high definition images of fake celebrities using an updated GAN structure.
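To make the adversarial idea concrete, here is a minimal sketch of a GAN in Keras. This is not the NVIDIA model and it is not code from the original post; the layer sizes, optimizer settings and the assumption of flattened 28x28 grayscale images scaled to [-1, 1] are purely illustrative.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LeakyReLU
from keras.optimizers import Adam

latent_dim = 100
img_dim = 28 * 28

# Generator: maps random noise to a flattened fake image.
generator = Sequential([
    Dense(256, input_dim=latent_dim),
    LeakyReLU(0.2),
    Dense(512),
    LeakyReLU(0.2),
    Dense(img_dim, activation='tanh'),
])

# Discriminator: classifies flattened images as real (1) or fake (0).
discriminator = Sequential([
    Dense(512, input_dim=img_dim),
    LeakyReLU(0.2),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy', optimizer=Adam(0.0002))

# Combined model: the generator is trained through a frozen discriminator.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer=Adam(0.0002))

def train_step(real_images, batch_size=32):
    # real_images: array of shape (batch_size, img_dim), scaled to [-1, 1].
    noise = np.random.normal(0, 1, (batch_size, latent_dim))
    fake_images = generator.predict(noise)
    # Train the discriminator on real and generated batches.
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # Train the generator via the combined model, with labels flipped to 'real'.
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
    return d_loss_real, d_loss_fake, g_loss

The part that matters is the alternation: the discriminator learns to separate real from generated images, while the generator only receives its gradient signal through the frozen discriminator, that is, from how convincingly it fools the critic.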
Forming product compositions seemed like a piece of cake after seeing these results, given the availability of enough proper data of real, hand-made compositions to train an algorithm on. We set out to make our own creative AI, but unfortunately ran into a couple of challenges on the way.

[Image: Fake celebrities created by NVIDIA]

Challenge 1: Collecting and using the data

The first problem occurred when we tried to use data of compositions that had been made before at the Creative Hub. Designers of the Hub create the compositions in Adobe Photoshop, and export them to an image file. For our research, we needed to extract the composition from the file by getting rid of the discount sticker, background, and other attributes. We ran into two problems that made this approach infeasible. First, the Adobe Photoshop files are enormous, making them hard to work with using a programming language. Second, the layers in the files contain no naming conventions, which makes extracting the component layers significantly harder.

Challenge 2: Clustering exported images

The following idea was to use the exported files instead. By finding similar projects to the one that needs to be created, we can recommend a project composition to designers. First off, all the .jpg and .png files that contained a product composition were retrieved from the company server by hand. The images were preprocessed by running them through a VGG19 Convolutional Neural Network trained on the popular ImageNet dataset. The values of the second-to-last layer's neurons were used as visual features of the images containing the compositions.

from keras.applications.vgg19 import VGG19, preprocess_input
from keras.preprocessing import image
from tqdm import tqdm
import os
import re
import numpy as np

def feature_extractor(images_dir="PATH"):
    list_images = [f for f in os.listdir(images_dir) if re.search('jpg|JPG|png|PNG', f)]
    features = []
    target_size = (224, 224)
    model = VGG19(include_top=False, weights='imagenet', pooling='max')
    for i in tqdm(list_images):
        img = image.load_img('images/' + i, target_size=target_size)
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        feature = model.predict(x)
        features.append(feature)  # collect the VGG19 feature vector for this image
    return features

Code to extract image features using VGG19 (Keras Documentation)

The features were then transformed into two features using t-SNE to plot the images in a 2-dimensional graph. The resulting graph showed that most value is attached to attributes of the image that do not relate to the composition itself. For example, images with the same background were grouped together as similar images. Clearly, this was not the result we hoped for, since we cannot extract information regarding the composition, which made this idea infeasible as well. To sum up, the data that is available cannot provide us with input that can be used to create new product compositions. We needed to think of another solution… and we found one! In part 2, I will elaborate on a solution that can create product compositions quite well already, without the use of data.

Using AWS Lambda and Slack to have fun while saving on EMR costs
https://www.themarketingtechnologist.co/using-aws-lambda-and-slack-to-have-fun-while-saving-on-emr-costs/
Mon, 04 Dec 2017 09:14:49 GMT

We all have these times where we hack a piece of code together in 5 minutes. Usually, these pieces of code are not hidden gems; they tend to do simple stuff.
Every once in a while though, you will find yourself writing a simple script which gives you a big smile afterwards. In this post, I will discuss one of these scripts which I made quite quickly, but still provides a lot of laughs for the entire team from time to time. Additionally, it also helps us save on AWS EMR costs and it keeps the minds within the team sharp. A win-win!

Look out, we got a big spender over here

Our team of Data Scientists, Data Technologists and Marketing Engineers frequently start EMR clusters on AWS to perform ad-hoc analyses. Trivially, a cluster needs to be terminated after an analysis is finished to prevent incurring unnecessary costs. In some cases we can do this by adding a bootstrap script that terminates a cluster after a certain time of inactivity. However, in some cases we do not want to add such a script, because the cluster may be needed later on. In that case we need to manually terminate the cluster. The requirement of manually terminating clusters however comes with the risk of forgetting to terminate a cluster, which results in unnecessary billing costs.

Sounds simple right? Terminating the cluster when you are finished. The truth is harsh: it is not... There are numerous reasons why we keep on failing to terminate our EMR clusters. These reasons vary. The most common reason is simply forgetting to terminate the cluster because our minds were distracted. Another common reason is that the Amazon front-end sometimes hangs (due to inactivity) but gives the impression that you are terminating the cluster, whereas in fact it isn't. So after a week where we burned 100 dollars on useless EMR clusters we thought it was time to come into action...

The boring solution

The solution was simple and elegant. We wrote a small Python script that uses boto3 to check if there are any active EMR clusters and terminates them if so. We then used AWS Lambda to execute the Python script so we didn't have to spend time on managing a server. In combination with AWS CloudWatch we were also able to schedule the script each night at 00:00 UTC. Easy right? It does the job, but it is hardly any fun...

The fun solution

Therefore we added an extra option to the script: Slack notifications. If the script finds any non-terminated clusters at midnight, it will shut down the instance, but it will also send a Slack notification in our team Slack channel mentioning the colleague who didn't terminate his cluster. Also, attached with the Slack message is the well-known Game of Thrones "Shame. Shame. Shame." GIF. To make it even worse, we've bought the ugliest hat we could find, which is now known as the shame hat. If you forget to terminate your cluster, you must wear this hat for the full day. The result is that you won't have a cold head that day, but your colleagues will have a lot of laughs.

Breakdown of the script

In the remainder of this post I will elaborate on how we made the Python script, so you can also introduce this at your workspace. The full code of the script can also be found on my GitHub.

The Python script

The Python script essentially consists of two parts. One part checks whether there are active EMR clusters with the help of the boto3 library. The other part sends notifications to Slack.

Part 1) Check for active EMR clusters

We first need to set up a connection with the Amazon Web Services API. The easiest way to do this in Python is by using the boto3 library, as Boto is the Amazon Web Services Software Development Kit (SDK) for Python.
The most recent version of this SDK is boto3. We start by using Boto to initialize a client that handles the connection with AWS EMR.

import boto3
import logging

def get_emr_client():
    session = boto3.Session()
    emr_client = session.client('emr')
    return emr_client

emr_client = get_emr_client()

We then use the EMR client to get a list of active EMR clusters.

def list_active_clusters(emr_client):
    clusters = emr_client.list_clusters(ClusterStates=['STARTING', 'BOOTSTRAPPING', 'RUNNING', 'WAITING'])
    cluster_ids = [c["Id"] for c in clusters["Clusters"]]
    return cluster_ids

active_cluster_ids = list_active_clusters(emr_client)

And after adding some logging statements, we now know if there are any active EMR clusters.

def log_number_of_active_clusters(cluster_ids):
    if not cluster_ids:
        logging.info("No active clusters...")
    else:
        logging.info("Found {} active clusters...".format(len(cluster_ids)))

And then we terminate them like Arnold Schwarzenegger does in the movies.

def terminate_active_clusters(emr_client, active_cluster_ids):
    # JobFlowIds is the boto3 parameter that takes the list of cluster ids.
    response = emr_client.terminate_job_flows(JobFlowIds=active_cluster_ids)
    logging.info("Terminated all active clusters...")

terminate_active_clusters(emr_client, active_cluster_ids)

So far, so good right? The next step is to send a Slack notification for each active EMR cluster. For now, we assume we have an instance of the SlackNotifier class which will handle sending notifications into Slack. How to make this class is discussed in the next section.

def send_slack_notification_for_each_active_cluster(emr_client, cluster_ids):
    for cluster_id in cluster_ids:
        send_slack_notification_for_active_cluster(emr_client, cluster_id)

def send_slack_notification_for_active_cluster(emr_client, cluster_id):
    message = "Cluster not terminated"
    icon = ":thom:"
    username = "Clusterbot"
    send_slack_notification(message, icon, username)

def send_slack_notification(msg, icon, username):
    slack_notifier = SlackNotifier()
    slack_notifier.send_message(msg, icon, username)

send_slack_notification_for_each_active_cluster(emr_client, active_cluster_ids)

You might think that the second function in the example above is a bit trivial and hardly informative. Therefore, we extend this function to give more information about the active cluster, e.g. the name or the attached keypair. To get more details about an EMR cluster, we again use our Boto EMR client and use the describe_cluster() command. From this output, we extract, for example, the keypair to determine who forgot to terminate his cluster. Note that we also use the keypair to change the user icon of the bot.

def send_slack_notification_for_active_cluster(emr_client, cluster_id):
    description = describe_cluster(emr_client, cluster_id)
    message = get_slack_message_from_description(description)
    icon = get_icon_emoji_based_on_description(description)
    username = get_username(description)
    send_slack_notification(message, icon, username)

def describe_cluster(emr_client, cluster_id):
    description = emr_client.describe_cluster(ClusterId=cluster_id)
    state = description['Cluster']['Status']['State']
    name = description['Cluster']['Name']
    keypair = description['Cluster']['Ec2InstanceAttributes']['Ec2KeyName']
    description = {'state': state, 'name': name, 'keypair': keypair}
    return description

def get_slack_message_from_description(description):
    message = "Cluster `{name}` was still active in state `{state}` with keypair `{keypair}`. " \
        .format(state=description['state'], name=description['name'], keypair=description['keypair'])
    return message

def get_icon_emoji_based_on_description(description):
    keypair = get_keypair(description)
    if keypair == "thom":
        return ":thom:"
    return ":money_with_wings:"

def get_username(description):
    keypair = get_keypair(description)
    username = "Active EMR Cluster Bot ({})".format(keypair)
    return username

def get_keypair(description):
    return description["keypair"]

send_slack_notification_for_each_active_cluster(emr_client, active_cluster_ids)

This is all we need to have a working script that checks for active EMR clusters.

Part 2) Slack notifier

The next step is to write the SlackNotifier class which is used to send Slack notifications. The easiest way to send messages from external sources into Slack is Slack's Incoming Webhooks. Although Incoming Webhooks offer fewer options than the Web API, they nicely fit our needs for this situation. Sometimes less is more. To get started with Incoming Webhooks, we first need to get a Slack webhook token at https://my.slack.com/services/new/incoming-webhook/. The Slack webhook token looks like https://hooks.slack.com/services/XXXXXX/XXXXXXXX/XXXXXXXXXXXXXXX. The token is simply a URL which we use to send our messages to, and in the meanwhile it serves as the authorization method for Slack. Sending a message to Slack then boils down to sending an HTTP POST request to the webhook URL. There are several methods to send POST requests in Python. I usually prefer using the requests library due to its simplicity, which makes it much easier to use than for example http.client or urllib. However, requests is not a default Python library, which makes it a bit more difficult to deploy your script with AWS Lambda. For the sake of simplicity, we therefore use the http.client and urllib.parse method which works out of the box with Python 3 and thus also AWS Lambda. Hence, the following code does the job. Note that you can always try to implement the requests approach on your own.

import http.client
import urllib.parse
import json
import logging

WEBHOOK_URL = 'https://hooks.slack.com/services/XXXXXX/XXXXXXXX/XXXXXXXXXXXXXXX'
NOTIFICATION_CHANNEL = '#your-team-channel'  # not defined in the original snippet; set this to your own channel

def send_message(message, icon, username):
    payload = get_payload(username, icon, message)
    data = get_encoded_data_object(payload)
    headers = get_headers()
    response = send_post_request(data, headers)
    log_response_status(response)

def get_payload(username, icon, message):
    payload_dict = {
        'channel': NOTIFICATION_CHANNEL,
        'username': username,
        'icon_emoji': icon,
        'text': message,
    }
    payload = json.dumps(payload_dict)
    return payload

def get_encoded_data_object(payload):
    values = {'payload': payload}
    str_values = {}
    for k, v in values.items():
        str_values[k] = v.encode('utf-8')
    data = urllib.parse.urlencode(str_values)
    return data

def get_headers():
    # The payload is sent form-encoded, so declare that content type.
    headers = {'Content-Type': 'application/x-www-form-urlencoded'}
    return headers

def send_post_request(body, headers):
    https_connection = get_https_connection_with_slack()
    https_connection.request('POST', WEBHOOK_URL, body, headers)
    response = https_connection.getresponse()
    return response

def get_https_connection_with_slack():
    h = http.client.HTTPSConnection('hooks.slack.com')
    return h

def log_response_status(response):
    if response.status == 200:
        logging.info("Successfully sent message to Slack.")
    else:
        logging.critical("Send message to Slack failed with "
                         "status code '{}' and reason '{}'.".format(response.status, response.reason))

Slack offers some options to style your Incoming Webhook messages.
For our case, we set a custom username, because otherwise the user that "sends" the Slack message is likely to be called "Incoming Webhook". Also, we added a pretty icon_emoji next to the message. We use this to show different user icon emojis per keypair. If a cluster associated with my keypair is not terminated, we show a picture of me as emoji next to the message. And the coolest of all, we add the "Shame. Shame. Shame." GIF from Game of Thrones as an attachment! That is all we need to have a fully working Python script which checks for active EMR clusters and sends a Slack notification if so. Saving money has never been so easy...

Execute the script with AWS Lambda

Next step is deploying the script on AWS Lambda. If you don't know what AWS Lambda is, the first sentences of the official documentation sum it up well. What does this mean for us? It means that we don't need to rent a computer/instance that will run our script each night. Given the simplicity of our script, it is quite clear that any instance we would rent would be overkill for our script in terms of processing power. Also, such an instance probably needs to run 24/7, and therefore we also know that we are paying way too much for running the script once per day. AWS Lambda solves both of these problems, because we only need to pay for the compute time and resources we consume. Additionally, it saves us the hassle of setting up an instance. Sounds like a win-win!

Deploying our script on AWS Lambda is again simple. We create a new Lambda function, select Python 3 as runtime and use the inline Python code editor to copy-paste our Python code into. Recall that the full code is also on my GitHub. The only thing we need to add is a function that Lambda can use to trigger the script, e.g. a handler function. By default, AWS Lambda assumes that this function has 2 parameters, e.g. event and context. To trigger our script we therefore add the following:

def lambda_handler(event, context):
    run()

where run() is the function that calls all the above steps. And in AWS Lambda we define as handler lambda_function.lambda_handler. This ensures that if our Lambda function is being run, it will run the function lambda_handler. Also, we need to ensure that our Lambda function has an IAM role which allows it to read the status of EC2 instances and EMR, for example a role with the corresponding AWS managed policies (although these might be a bit too broad).

Schedule the script with AWS CloudWatch

The last thing we need to do is schedule the script to run at midnight. We do this by setting up a trigger in AWS CloudWatch that triggers each night at 00:00 UTC. We then add this trigger to our AWS Lambda function. Now, if the CloudWatch trigger fires at 00:00 UTC, it will also trigger our Lambda function that checks for active clusters.

A moment of reflection

To conclude the post I want to provide a moment of reflection. What we did in this post is hardly Data Science or Data Engineering. Nonetheless, it provides us with valuable competences for anyone working with data: efficiency (no manual checks, serverless deployment on AWS Lambda), support (help your colleagues to not forget to terminate a cluster), risk management (drastically reduce the risk of incurring high EMR costs) and fun. Therefore, even the simplest scripts can have a big impact on our organisation. With the script of this post for example, we already saved quite some money on non-terminated clusters.
Additionally, I also have automated Machine Learning models in production, which send me Slack notifications about their status, for example when performance scores drop. As a final example, my colleague Erik Driessen is using a similar concept to send Slack notifications when funnel metrics in Google Analytics suddenly drop: for example, when the number of step 3's in a funnel is higher than the number of step 2's, there is probably something wrong. This is something that is difficult to achieve within Google Analytics and is tedious to check manually every day. However, Lambda and Slack make this boring task fun. It is often the small things that no one sees that result in the big things that everyone wants.

Improving the consistency of your projects with virtual environments in Anaconda
https://www.themarketingtechnologist.co/improving_the_consistency_of_your_projects_with_virtual_environment_-in_anaconda/
Thu, 01 Jun 2017 00:15:09 GMT

Virtual environments in Python are very useful for managing different projects, especially when multiple people have to work on them. They are also useful for using packages which are not entirely supported on every version of Python. Personally, I needed a virtual environment when using TensorFlow: TensorFlow was not available on Python version 3.6 and hence I needed a different version of Python. Completely downgrading my Python version is not ideal, so a virtual environment with Python 3.5 was the best solution. This blog will show you how to use these virtual environments in PyCharm, when you have PyCharm and Anaconda installed. After this blog you will be able to apply the environments in both settings: when managing different projects and when you need a different version of Python for certain packages. Hence you will be able to work more efficiently when working on projects together with other people.

Setting up a new environment

Starting with your first virtual environment, you will need to start PyCharm and open the project in which you want to use this new version of Python. Then in File > Settings you can go to Project and the Project Interpreter. Here you can see the version of Python you currently have installed and all the different packages used in this environment. You can create the new virtual environment in the upper right corner, where you press "Create Conda Env".

In here you can select the specific Python version you want to use and you can give this new environment a name. If you want to use this environment for other projects as well, you have to tick the "Make available to all projects" box. It is probably useful to put all your different environments into one folder, such that you can easily find them in your explorer. Then press OK and PyCharm will create a new virtual environment with your specified version of Python.

After PyCharm is done creating the virtual environment, you want to look again at your project interpreter, select your new virtual environment and look at the different packages installed. Probably you only have a few packages installed and do not have all of your old packages in it. Hence you need to install the packages you want again.
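As an aside, the same kind of environment can also be created outside PyCharm, from the Anaconda prompt. A minimal sketch, assuming you call the environment 3.5 as in the example above:

conda create --name 3.5 python=3.5
activate 3.5      # "conda activate 3.5" on newer conda versions; "source activate 3.5" on macOS/Linux
conda env list    # verify that the new environment is listed

PyCharm can then pick this environment up as an existing interpreter instead of creating one itself.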
Installing packages into the new environment cannot be done the same way you would do it normally; we will explain this in the following section.

Installing the packages you want

Normally you would install packages in your command prompt with "pip install package" or "conda install package". When you want new packages in your newly created environment this is not possible, because it would install the packages in your default location, or in other words your initial version of Python. The way you now have to install new packages is as follows. For packages which need pip, you first need to find your environment's pip.exe file. This can be found by going to your environment's directory\Scripts; if you look there you will find the pip.exe file you need. Then to install new packages, you need to go to your command prompt, copy the whole path and put \pip.exe behind it, for example: "C:\Anaconda\envs\3.5\Scripts\pip.exe". After the path you just put "install package", and you are done!

For the "conda install" packages it is a little bit easier. In your command prompt you write "conda install -n (environment name) package", which is quite similar to what you would normally do.

With these two manners of installing packages you can in principle install all the packages you want, but this is slow and you have to do everything manually. A faster way of doing this is by setting up a requirements.txt file in your project. In this text file you can specify all the different packages that you want and the specifics of the version of that package. The packages need to be entered in the following way in the text file: package==version, with each package on its own line. A short example is given in the sketch at the end of this section. This way PyCharm will notify you if these packages are not installed and will ask to install them for you. It could be the case that PyCharm is not able to install some packages. This is due to PyCharm only using pip instead of also using conda. Hence not all packages can be installed and you still have to install some of the packages manually with conda, but at least some packages are installed automatically. The requirements.txt file is also very useful when having to work with somebody on the same project. This way both persons can easily create the exact same environment, which includes identical packages. This is necessary, since other versions of a package can result in different results of the analysis done in Python. This can be very cumbersome in, for example, a data science project where you want precise and reproducible results.

One file solution

Another useful one-line command is "conda env create". With this command you can actually create an environment with one file and install all required packages. You have to start by creating an environment.yml file first. This can be done in a text editor like Sublime, in which you can save your file as a YAML file. An example of such a file is given in the sketch at the end of this section. You can install packages which need to be installed with conda, but also with pip. The environment can then be created by entering the following line in your command prompt: "conda env create -f environment.yml", where environment is your file name.
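To make the two references above concrete, here are minimal sketches of both files. The package names and version numbers are purely illustrative and not taken from the original post; replace them with whatever your project needs. A requirements.txt pins one package per line:

numpy==1.13.3
pandas==0.20.3
tensorflow==1.4.0

An environment.yml goes one step further: it names the environment, pins the Python version and can mix conda packages with pip packages:

name: 3.5
dependencies:
  - python=3.5
  - numpy=1.13.3
  - pandas=0.20.3
  - pip:
    - tensorflow==1.4.0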
If the "conda env create" command does not work, specify the whole path, like "conda env create -f C:\stats2\environment.yml". After creating the environment you still need to locate your new environment in PyCharm. In PyCharm you again go to the project interpreter and go to the upper right corner, where you select "Add local". From there you select the path your environment is installed in. This is in your "envs" folder, where your new environment has the name created in the environment file. The main advantage of this file is that everything is done automatically, whereas with the other approach you still have to install some packages manually. The downside of this approach is that it is not in PyCharm itself, but if you follow the previous instructions that is not a big problem. The environment.yml file is, just like the requirements.txt file, easily shareable with your colleagues and hence easy to use in shared projects.

Now you will be able to manage your own Python environments and manage (combined) projects better with the requirements.txt file. Hopefully this blog was useful for getting to know the virtual environments of Python. If there are any questions or remarks, please add them below.

The Genie in the Bottle: How to Tame AI?
https://www.themarketingtechnologist.co/the-genie-in-the-bottle-how-to-tame-ai/
Tue, 08 Nov 2016 21:17:27 GMT

While we're seeing some great progress in AI, there is increasing concern about its dangers from people like Stephen Hawking. This 'possible existential threat to humanity' is the reason why people like Elon Musk and Sam Altman started OpenAI. The authority on the threat of AI is Nick Bostrom, Oxford professor, director of the Future of Humanity Institute and author of the book Superintelligence: Paths, Dangers, Strategies. Before reading the strategies chapters of his book, I liked to think about how to protect humanity from the dangers of AI with an 'unbiased' mindset. It's truly an intriguing problem to put your mind to, so I hope you'll enjoy this post and form your own thoughts on how best to tame AI.

If you haven't read the Wait But Why post on AI yet, you should definitely do so. It clearly explains why we should worry, how suddenly human level AI will hit us and why it is so hard to control something more intelligent than us, without anything like the concept of morality we humans have. An example is given about a start-up that builds a robotic arm that makes handwritten postcards. In the heat of the competition from other start-ups they can't resist the temptation of connecting their AI to the internet. A couple of days later all humans drop dead because the AI found out that humans provide valuable resources for creating handwritten postcards. Further fulfilling its objective function, the AI eventually turns the complete universe into handwritten postcards. It's a great read, and while maybe not too realistic, you clearly get the point. It's also a great start for giving it your own thought. These are some of mine:

1 Even AI Experts Seem to Think Linear

Wait But Why clearly explains how technological progress is exponential, but we fail to see it because we think linear. They don't stress it, but even AI experts fail to see exponentially based on the data in the post! In the questionnaires part, AI experts answer when they think we'll have human level AI and when we'll have superhuman level AI. The median answers are 2040 and 2060, respectively.
This clearly illustrates that even AI experts think linear on average! According to exponential thinking, the moment we have human level AI, we'll have superhuman level AI in no time. As an example: the human level AI could be duplicated to create a small army of 'AI experts AIs' that can really easily make an AI that's more intelligent. 2 Objective Function vs. 'Model of Objectives' The AI in the example is given a simple and clear objective function. I believe that when AI grows more powerful towards human level AI, such simple objective functions will quickly disappear. In the Wait But Why post, it's explained that human level AI is often called Artificial General Intelligence opposed to the Artificial Narrow Intelligence, and 'General' is not about having a clear objective function. It's a bit like the Japanese expert systems that shortly boomed before the AI winter in the 80's, where 'knowledge' was specifically programmed into the system. Now we have (deep) neural networks, which work without very limited human intervention on how the model works. Even experts can't exactly comprehend how the underlying models work, they just do. To illustrate: AlphaGo made a winning move that even experts couldn't understand until much further in the game. This will become the same for objective functions: instead of having clearly defined ones, the objective functions of AIs will be complex models that no human can fully comprehend. I'll refer to this as a 'model of objectives'. This sounds very creepy, but in fact it might be much safer to have these complex objective ones instead of some simple ones defined by humans. We humans are pretty bad at defining objective functions. However, although we don't really have a clue about the set of 'objective functions' that make us act like we do, personally nor in general, somehow almost all individuals play their part in society really well. We do have something like an objective function programmed by the mechanics of evolution: survival and reproduction of your own DNA. Luckily this objective isn't one we follow too much these days anymore. In fact there are very many reasons for doing the things we do and we can't fully comprehend these reasons ourselves. And if people give themselves some singular objective function in life, this more often than not turns out badly if you ask me. The fact that we have a very complex model of objectives that we can't fully comprehend ourselves makes it wonderfully possible to live together in society, even though objectives differ strongly from individual to individual. If we want to achieve something, there always will be some parts of our model of objectives that prevent us from achieving this goal by all means. To give AI a kind of conscience, set of ethics, moral, or whatever you can call it, it needs a similar model of objectives as we humans have ourselves. Therefore, we'll have to accept that we won't be able to comprehend the model of objectives functions of the AI, because it better than relying on simplistic human programmed objective functions. 3 Superhuman AI will understand us better than we understand each other and even ourselves The robot arm AI may be powerful, but it is pretty stupid. We humans are not that good at deciding what's good for us and what's not, but even the dumbest person in the world will understand that if you ask it to make handwritten postcards, he or she won't start killing humans. 
This is very closely related to thinking about AI as giving it a clear objective function: when it approaches human level AI, it will start to understand how to interpret these objective functions. Think about it: we're pretty good at understanding what somebody else wants, without the need of very specific instructions, even if it completely differs from the things we want ourselves. And if AI is getting to the superhuman level, it will be actually better in understanding what we actually want than any other human, and even than we know it about ourselves. This sounds super futuristic, but let me give an example that I believe is quite easy to imagine. You're on a crazy night out with some of your colleagues and you ask the assistant app on your phone to check social media and check for connections that are nearby and message any fun people to join you in the bar. Some of your friends join, including one of your best old friends that you haven't seen for a long time and just happens to be around town. You both drink to how amazing this phone assistant is since it actually made you meet each other in the bar. Many drinks later, you come up with this hilarious joke about your company's CEO. Together with some colleagues you make a selfie video about your hilarious joke and send it to your CEO. The next morning you wake up with an incredible headache. Your phone assistant is giving you a notification whether you really want to send that video you made yesterday to your CEO. You quickly press no and wonder how we ever have lived in a world without AI. I believe it's not that hard to imagine AI with this type of behavior being available in a couple of years, given companies like Google releasing personal AIs and of course Apple's SIRI, Google Home and Amazon's Alexa. And it will be still far away from human level AI. However, the point is that such an AI won't be programmed to specifically prevent you from sending videos to your CEO when you're drunk. It will be impossible to humanly define all the rules we want such an AI to follow, similar to the expert systems in the 80's being programmed with specific knowledge. More importantly, the AI also doesn't blindly follow an objective to do what you wish for. If it would have asked you whether you would be happy it prevented you from sending the video when you were still in the bar, you probably wouldn't have been amused. 4 It's not going to be 'Us Versus Them' It's not only Hollywood that's thinking about AI as them vs us, the Wait But Why example is also very much describing an AI that's completely separate from us. Somehow we humans really tend to think in terms of them vs us, as the world today sadly clearly shows. This them vs us is an artifact of our primal objective function to replicate our own DNA that turns out to be really though to get rid off. Any way, as Ray Kurzweil writes in his classic The singularity is near AI is quite likely to be more 'us' before it becomes at the human level. We might be able to enhance our brains with artificial computing power or being able to directly tab into the cloud from our brains. It's just really hard for us to imagine today what this is going to look like, just like we could never imagine what the world would look like today in an era when internet still had to be invented. Whatever the future will be, it's important to know that AI is heavily being trained on data generated by humans. And most data that's produced is data we generate living our daily lives. 
Therefore, probably the most efficient way of making AI smarter is to integrate it in our daily lives, like Google assistant type of projects. Training will be probably speed-up with reinforcement learning (training AI based on simulations, like AlphaGo training itself by playing incredably many games against itself). But human generated data will be crucial to reaching human level AI, since the real world is much complexer than a game of Go. More importantly, you can't just let an AI go trial-and-error in the real world. It's actually a good thing AI will be learning from human daily behavior, since not only will it be an efficient way in training the model (i.e. how do you do things), it will also be the perfect input for building the model of objectives (i.e. why do we do the things we do the way we do). With such a lot of exposure to human behavior before even coming close to human level intelligence, AI will learn gradually how to blend in perfectly into our society. As long as no humans are allowed to overwrite the model of objectives by some hardcoded objective, AI won't go rogue. Just like neural nets are robust (i.e. removing some parts of the model will hardly impact the performance), the model of objectives will be robust as long as it is only evolving slowly by learning from new data instead of humans messing too much with the objective functions. 5 The Genie in the Bottle Everybody has heard the fairy tale of the genie in the bottle: some genie or another magical creature grants the main character three wishes. These wishes always backfire, because of a combination of ill chosen wishes and the genie taking the liberty to define the details on how these wishes are actually fulfilled. The wisdom of these fairy tales is nowadays reflected more than ever in the topic of superintelligent AI. It's time for a thought experiment. Forget my previous point about AI evolving gradually alongside humans and imagine you're suddenly facing AI of superhuman intelligence. You would be granted three wishes. What would you do? The Genie in the Bottle: How to Tame AI? Don't run off quoting the Three Laws of Robotics, give it a thought! I'd came up with a strategy like this: "Wow AI buddy! Before having any effect on the universe I wish you to build an exact model of my mind in your internal intelligence. Then figure out how to let me know you completed your task." The Genie in the Bottle: How to Tame AI? "Amazing, it took only 9 milliseconds! And it knows I like the hitchhikers guide!" "Now for my second wish. I wish that for every potential response you're going to give to one of my wishes, you're going to test it on the model of my mind you just made. If the consequences are too many to comprehend for my mind model, simply copy the model of my mind and let each copy evaluate part of the consequences. If any of the copies of my mind thinks the world will be worse than it currently is, come up with a better plan. If you run out of internal intelligence to create enough copies to fully comprehend the consequences, come up with a plan that's less complex. If you're still working on it when I make another wish, simply ignore the wish you're working on and start working on the new wish. If you find a response to a wish that satisfies this test, then execute it." "Oh and for my third wish, I want you to serve me for infinity and grand me unlimited wishes!" Now take some time to think about it. 
The definition might not be very exact, but since we're facing superintelligence, the AI will understand you very well. And your second wish will prevent the AI from doing anything in a way you won't like or intended! Well yes, it does, but you also just made the AI pretty useless. With these restrictions, the AI won't do anything except for maybe some very simple (and useless) tasks. This is because of two reasons: 1. It's impossible to know all the consequences of pretty much any action with complete certainty, even for a superhuman AI. 2. It's almost impossible to find a response to your wish for which every single consequence will have a positive impact: there's always a trade-off. However, since a single copy of your mind can't comprehend the full consequences, you'll need multiple copies of your mind to evaluate part of the consequences. However, you can't simply add up the responses and do something if you like the majority of the consequences. It's hard to make a trade-off if you can't fully comprehend the consequences at once. The solution, I think, will be involving probabilities and asymmetry in evaluation. What I have in mind is that an AI will only do something if there is a very, very low probability that you won't like the consequences. And if you dislike part of the consequences a little, you really have to like the benefits a lot (like a ten times more), in order for an AI to take action. Basically AIs should be much more risk averse than we humans are. 6 With Great Power Comes Great Responsibility The example above is of course rather selfish, since you ask the AI to only simulate your mind, not those of others. However, I believe that the way the AI will evaluate the consequences, will already make more humane decisions than we do nowadays. Especially in the case of 'us vs. them', people don't think about the consequences for others, and sometimes specifically choose not to think about what this means for others. However, the AI will specifically use some of the copies of your mind to test whether you like the impact the execution of your wish has on other people. When confronted with the facts, I think I believe people will be a lot less selfish. However, it would be much better to not only use your mind in evaluating the response to your wish, but that of very many people, preferably the whole world population. So you can ask your wishes, but how they are granted is evaluated against the minds of the whole population. And luckily, most likely AI is going to evolve learning from the minds of (almost) the complete populations, or at least as many people as possible. The most efficient way of training AI, is to collect and combine as much data as possible from behavior of people in their daily lives. Therefore, there is a strong incentive to include as many people as possible. Moreover, people of diverse backgrounds will be more useful than similar people, since there is less information to gain from similar people. 7 Companies that Act More Humane Of course companies will also be making use of AI. Companies have a pretty dangerous objective function: maximizing profit. Although no human can fully comprehend the impact of all the decisions made in a company, somehow companies do behave quite well in reality. Of course there are some bad examples, but think about it: it could be so much worse if companies would be blindly following the objective function of maximizing profit. It's thanks to the model of objectives of humans that companies behave so well. 
In this case it's really a difference between live and death whether AI will learn a model of objectives from working together with humans, or a team of employees that comes up with some objective functions that will steer an 'inhumane AI'. The later will be a doom scenario. However, the first could actually make companies behave more humane than they currently are. Think about it, we humans can't comprehend the consequences of all those decisions made in a company. Wouldn't it be much better if the employees at a company could wish what they want to achieve, but how it is executed is carefully evaluated by an AI including the minds of the complete population? 8 Solving the Flaws of Democracy Democracy surely has it's flaws. However, it's the best flawed option we have. With AI, we will have a better alternative. I'll consider two flaws of democracy, the terror of the majority and voters actually don't know much about what they are voting for. Democracy works best if everybody votes what they think is best for the country as a whole. Unfortunately, in reality people tend to vote what's best for themselves. The terror of majority is the case in which a majority of the population, can completely rule over a minority in the population, just because they have a majority. This most certainly doesn't have to be humane. Luckily we have our models of objectives that keep us on the right track most of the times. With AI we'll have much better option. Instead of putting our trust in some politician, the AI can simply run every single political decision against the full simulation of the minds of the complete population. Using asymmetric evaluation, the AI can prevent doing anything that benefits a majority of the population, while having too negative consequences for smaller groups in society. Just imagine how perfectly you could set taxes if you could actually run simulations on what the true impact would be on people. Second, voters are probably not the best ones to decide about the best solutions since they are not the experts. Moreover, people don't like to spend there time in learning the details about politics and specific topics they are voting for. However, using AI that simulates the minds of the full population, anybody could have a copy of their minds dedicated to understanding the consequences of a political decision and evaluating whether they like the results or not. As a result, the decisions that are actually made will be definitely much wiser than the decisions that are currently made. I hope you liked this thought experiment! To conclude, I think these are the points that will help to keep AI evolving on the right path: • AI should move more and more to having a complex model of objectives, like we humans do, instead of simplistic objective functions. • We will need to except that we won't be able to fully comprehend this model of objectives. • These models of objectives should only be able to evolve slowly based on new input data. They should never be overruled by hardcoded human objective functions. • AI should evolve alongside humans as personal assistants, so they will have plenty of exposure to human behavior not only to learn how to do things, but also learn their model of objectives (why do you do what you do). • Although we humans have pretty diverse and sometimes oposing objectives, the complexity of models of objectives makes us all fit in to society pretty well. The same will be true for AI. 
• In fact it will not be humans vs AI, but both will blend together one way or another. • When AI becomes superhuman, it will actually be possible to make an exact model of our minds. This should be used to asses any possible action of an AI with a simulation against copies of human minds. • These human minds should reflect the complete world population. Luckily, the most efficient way of training AI is by feeding it as much human behavior as possible. Therefore, AI will be able to learn it's model of objectives from as many and diverse people as possible. • Hopefully, the AI that will be used by companies will be run by the same AI, and therefore any possible (AI enhanced) action will be evaluated by models of minds of the world population. This could actually make companies behave more humane than they currently do. • It could improve democracy too. Looking forward to hearing your thoughts! <![CDATA[7 reasons to use Snowplow besides Google Analytics 360]]>https://www.themarketingtechnologist.co/7-reasons-to-use-snowplow-besides-google-analytics-360/59fe1539f7390837b438a606Mon, 07 Nov 2016 12:55:02 GMT7 reasons to use Snowplow besides Google Analytics 360 1: Source attribution modelling 2: Hit types 3: Capturing custom data 4: Data Ownership 5: The approach to raw data 6: Data pipelines 7: Betting on two horses Technical considerations True event level data Co-founder at Snowplow Analytics. <![CDATA[Helping our new Data Scientists start in Python: A guide to learning by doing]]>https://www.themarketingtechnologist.co/helping-our-new-data-scientists-start-in-python-a-guide-to-learning-by-doing/59fe1539f7390837b438a5a8Thu, 29 Sep 2016 22:47:31 GMTHelping our new Data Scientists start in Python: A guide to learning by doing The Data Science team at Greenhouse Group is steadily growing and continuously changing. This also implies new Data Scientists and interns starting regularly. Each new Data Scientist we hire is unique and has a different set of skills. What they all have in common though is a strong analytical background and the practical ability to apply this on real business cases. The majority of our team for example studied Econometrics, a study which provides a strong foundation in probability theory and statistics. As the typical Data Scientist also has to work with lots of data, decent programming skills are a must-have. This is however where the backgrounds of our new Data Scientists tends to differ from each other. The programming landscape is quite diverse, and therefore, the backgrounds of our new Data Scientists cover programming languages from R, MatLab, Java, Python, STATA, SPSS, SAS, SQL, Delphi, PHP to C# and C++. It is true that knowing many different programming languages can be useful when necessary. However, we prefer the use of one language for the majority of our projects so that we can easily cooperate with each other on projects. And given that nobody knows everything, one preferred programming language gives us the possibility to learn from each other. At Greenhouse Group we have chosen to work with Python when possible. With the great support of the open-source community, Python has transformed into a great tool for doing Data Science. Python’s easy to use syntax, great data processing capabilities and awesome open-source statistical libraries such as Numpy, Pandas, Scikit-learn and Statsmodels allow us to do a wide range of tasks varying from exploratory analysis to building scalable big-data pipelines and machine learning algorithms. 
Only for the lesser-general statistical models we sometimes combine Python with R, where Python does the heavy data processing work and R the statistical modelling. I also strongly believe in the philosophy of learning by doing. Therefore, to help our new Data Scientists get on their way with doing Data Science in Python we have created a Python Data Science (Crash) Course. The goal of this course is to let our new recruits (and also colleagues from different departments) learn to solve a real business problem in an interactive way and in their own pace . Meanwhile, the more experienced Data Scientists are available to answer any questions. Note that the skill of Googling for answers on StackOverflow or browsing through the documentation of libraries should not be underestimated. We definitely want to teach our new Data Scientists this skill too! In this blog we describe our practical course phase by phase. Phase 1: Learning Python, the basics The first step obviously is learning Python. That is, learning the Python syntax and basic Python operations. Luckily, the Python syntax is not that difficult if you take good care of indentation. Personally, coming from the Java programming language where indentation is not important, I made a lot of mistakes with indentation when I started with Python. So, how to start with learning Python? Well, as we prefer the learning by doing approach we always let our new recruits start with the Codecademy Python course. Codecademy provides an interactive Python course which can be followed in the browser. Therefore, you do not have to worry about installing anything yet and you can start immediately with learning Python! The Codecademy Python course takes about 13 hours to complete. After this, you should be able to do simple operations in Python. Update: I just discovered that Codecademy will temporarily take the Python course offline. I haven't tried it yet, but a possible alternative to learn Python from within the browser is learnpython.org Bonus tip: another useful Codecademy course for Data Scientists is the SQL course! Phase 2: Installing Python locally with Anaconda After finishing the Codecademy course we obviously want to start developing our own codes. However, since we are not running Python in-browser anymore we need to install Python on our own local PC. Python is open source and freely available from www.python.org. However, this official version only contains the standard Python libraries. The standard libraries contain functions to work with for example text files, datetimes and basic arithmetic operations. Unfortunately, the standard Python libraries are not comprehensive enough to perform all kinds of Data Science analysis. Luckily, the open-source community has made awesome libraries to extend Python with the proper functionality to do Data Science. To prevent downloading and installing all these libraries separately, we prefer to use the Anaconda Python distribution. Anaconda is actually Python combined with tons of scientific libraries, so there is no need to manually install them all yourself! Additionally, Anaconda comes bundled with an easy commandline tool to install new or update existing libraries when necessary. Tip: although allmost all awesome libraries are included by default in Anaconda, some of them are not yet. You can install new packages from the command line using conda install package_name or pip install package_name. For example, we regularly use the progressbar library tqdm in our projects. 
Hence, we have to execute pip install tqdm first when performing a new install of Anaconda. Phase 3: Easier coding with PyCharm After installing Python we are able to run Python code on our local PC. Just start Notepad, write your Python code, open the commandline and run the newly created Python file using python C:\Users\thom\new_file.py. Wait, that does not sound really simple right? No... To make our lifes easier, we prefer to develop our Python codes in Pycharm. PyCharm is a so-called integrated development environment which supports developers when writing code. It takes care of routine tasks such as running a program by providing a simple run script button. Additionally, it also helps being more productive by providing autocomplete functionality and on-the-fly error checking. Forgot a space somewhere or used a variable name that is not defined yet? PyCharm will warn you. Want to use a Version Control System such as Git to cooperate on projects? PyCharm will help you. One way or another, using PyCharm will save you a lot of time when writing Python code, because it works like a charm... badum tss. Phase 4: Solving a fictional business problem Defining the research problem So, assume that by now our manager has come to us with a business problem he faces. That is, our manager wants to be able to predict the probability of a user having his first engagement (i.e. a newsletter subscription) on the companies website. After giving it some thought we came up with the idea to predict the engagement conversion probability based on his number of pageviews. Furthermore, you constructed the following hypothesis: More pageviews leads to a higher probability of engaging for the first time. To check whether this hypothesis holds, we have asked our Web Analysts for two datasets: 1. user_id: a unique user identifier 2. session_number: the number of the sessions (ascending) 3. session_start_date: the start datetime of the session 4. unix_timestamp: the start unix timestamp of the session 5. campaign_id: ID of the campaign that led the user to the website 6. domain: the (sub)domain the user is visiting in this session 7. entry: the entry page of the session 8. referral: the referring site, i.e. google.com 9. pageviews: the number of pageviews within the session 10. transactions: the number of transactions within the session 1. user_id: a unique user identifier 2. site_id: the ID of the site on which the engagement took place 3. engagement_unix_timestamp: the unix timestamp of when the engagement took place 4. engagement_type: the type of engagement, i.e. newsletter subscription 5. custom_properties: additional properties of the engagement Unfortunately, we have two separate datasets because they come from different systems. However, users in both datasets can be matched by a unique user identifier denoted by user_id. Just like earlier blogs, I have placed the final code to solve the business problem on my GitHub. However, I would strongly recommend to only look at this code when you have solved the case yourself. Additionally, you can also find the code to create two fictional datasets yourself. Easy data processing using Pandas Before we can apply any statistical model to solve the problem we need to clean and prepare our data. For example, we need to find for each user in the sessions dataset his first engagement, if any. This requires joining the two datasets on user_id and removing any engagements after the first. The Codecademy Python course taught you already how to read text files line by line. 
Python is great for data munging and preparation, but not for data analysis and modeling. The Pandas library for Python helps to overcome this problem. Pandas offers data structures and operations for manipulating (numerical) tables and time series. Pandas therefore makes it much easier to do Data Science in Python! Reading the datasets using pd.read_csv() The first step in our Python code will be to load both datasets within Python. Pandas provides an easy to use function to read .csv files: read_csv(). Following the learning by doing principle we recommend you find out yourself how to read both datasets. In the end, you should have two separate DataFrames, one for each dataset. Tips: we have different delimiters in both files. Also, be sure to check out the date_parser option in read_csv() to convert the UNIX timestamps to normal datetime formats. Filter out irrelevant data The next step in any (big) data problem is to reduce the size of your problem. In our case, we have lots of columns which are not relevant for our problem, such as the medium/source of the session. Therefore, we apply Indexing and Selecting on our Dataframes to only keep relevant columns such as the user_id (necessary to join the two DataFrames), datetimes of each session and engagement (to search for the first engagement and sessions before that) and the number of pageviews (necessary to test our hypothesis). Additionally, we filter out all non-first engagements in our engagements DataFrame. This can be done by looking for the lowest datetime value for each user_id. How? Use the GroupBy: split-apply-combine logic! :) Combine the DataFrames based on user_id One of the most powerful options of Pandas is merging, joining and concatenating tables. It allows us to perform anything from simple left joins and unions to complex full outer joins. SO, combining the sessions and first engagements DataFrames based on the unique user identifier... you've got the power! Remove all sessions after the first engagement Using a simple merge in the previous step we added to each session the timestamp of the first engagement. By comparing the session timestamp with the first engagement timestamp you should be able to filter out irrelevant data and reduce the size of the problem as well. Add the dependent variable y: an engagement conversion As stated, we want to predict the effect of pageviews on the conversion (i.e. first engagement) probability. Therefore, our dependent y variable is a binary variable which denotes whether a conversion has taken place within the session. Because of the filtering we did above (i.e. remove all non-first engagements and sessions after te first engagement), this conversion by definition takes place in the last session of each user. Again, using the GroupBy: split-apply-combine logic we can create a new column that contains a 1-observation if it is the last sessions of a user, and a 0-observation otherwise. Add the independent variable X: cumulative sum of pageviews Our independent variable is the number of pageviews. However, we cannot simply take the number of pageviews within a session, because pages visited in earlier sessions can also affect the conversion probability. Hence, we create a new column in which we calculate the cumulative sum of pageviews for a user. This will be our independent variable X. Fit a logistic regression using StatsModels Using Pandas we finally ended up with a small DataFrame containing of a single discrete X column and a single binary y column. 
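For readers who get stuck on the steps above, the sketch below compresses this data preparation into a few Pandas calls. It is not the reference solution from the GitHub repository: the file names, separators and the derived column names are assumptions based on the dataset descriptions above, so treat it as one possible way to arrive at the X and y columns.

import pandas as pd

# Read both datasets; the file names and separators are assumptions
sessions = pd.read_csv('sessions.csv', sep=';', parse_dates=['session_start_date'])
engagements = pd.read_csv('engagements.csv', sep=',')
engagements['engagement_datetime'] = pd.to_datetime(engagements['engagement_unix_timestamp'], unit='s')

# Keep only the relevant columns
sessions = sessions[['user_id', 'session_start_date', 'pageviews']]

# First engagement per user: the lowest engagement datetime
first_engagement = engagements.groupby('user_id', as_index=False)['engagement_datetime'].min()

# Combine the DataFrames on user_id (a left join keeps users without an engagement)
df = sessions.merge(first_engagement, on='user_id', how='left')

# Remove all sessions that started after the first engagement
df = df[df['engagement_datetime'].isnull() | (df['session_start_date'] <= df['engagement_datetime'])]

# Dependent variable y: 1 for the last session of a user who engaged, 0 otherwise
df = df.sort_values(['user_id', 'session_start_date'])
last_session = df.groupby('user_id').cumcount(ascending=False) == 0
df['is_conversion'] = (last_session & df['engagement_datetime'].notnull()).astype(int)

# Independent variable X: cumulative sum of pageviews per user
df['pageviews_cumsum'] = df.groupby('user_id')['pageviews'].cumsum()

The resulting df then contains exactly the two columns used below: pageviews_cumsum as the X variable and is_conversion as the y variable.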
A (binary) logistic regression model is used to estimate the probability of a binary response of the dependent variable based on one or more independent variables. StatsModels is a statistics & econometrics library for Python with tools for parameter estimation & statistical testing. It is therefore not surprising that it also contains functions to perform a logistic regression. So, how do you fit a logistic regression model using StatsModels? Let me Google that for you!

Tip 1: do not forget to add a constant to the logistic regression...
Tip 2: another awesome library to fit statistical models such as logistic regression is scikit-learn.

Visualize results using Matplotlib or Seaborn

After fitting the logistic regression model, we can predict the conversion probability for each cumulative pageviews value. However, we cannot just communicate our newly found results to management by handing over some raw numbers. One of the important tasks of a Data Scientist is therefore to present his results in a clear and effective manner. In most cases this means providing visualizations of our results, as we all know that an image is worth more than a thousand words... Python contains several awesome visualization libraries, of which Matplotlib is the most well-known. Seaborn is another awesome library, built on top of Matplotlib. The syntax of Matplotlib will feel familiar to users who have worked with MatLab before. However, our preference goes to Seaborn as it provides prettier plots, and appearance is important. Using Seaborn we created the following visualization of our fitted model (figure: predicted conversion probability against the cumulative number of pageviews). We can nicely use this visualization to support our evidence on whether our hypothesis holds.

Testing the hypothesis

The final step is to check whether our constructed hypothesis holds. Recall that we stated that more pageviews leads to a higher probability of engaging for the first time. For one, it already follows from our previous visualization that the hypothesis holds; otherwise, the predicted probabilities would not be monotonically increasing. Nonetheless, we can also draw the same conclusion from the summary of our fitted model, shown below.

Logit Regression Results
Dep. Variable:   is_conversion      No. Observations:  12420
Model:           Logit              Df Residuals:      12418
Method:          MLE                Df Model:          1
Date:            Tue, 27 Sep 2016   Pseudo R-squ.:     0.3207
Time:            21:44:57           Log-Likelihood:    -5057.6
converged:       True               LL-Null:           -7445.5
                                    LLR p-value:       0.000

                      coef    std err          z      P>|z|    [95.0% conf. int.]
const              -3.8989      0.066    -59.459      0.000      -4.027    -3.770
pageviews_cumsum    0.2069      0.004     52.749      0.000       0.199     0.215

We see that the coefficient of pageviews_cumsum is statistically significantly positive at a significance level of 1%. Hence, we have shown that our hypothesis holds, hurray! Furthermore, you have just completed your first Data Science analysis in Python! :)

We hope you have enjoyed this Data Science in Python blog. What we described in this blog is obviously far from comprehensive or complete, as it is simply too much to cover in one blog. For example, we haven't even talked about Version Control Systems such as Git yet. We hope however that this blog has given you some directions on how to start with your first practical Data Science analysis in Python. We would appreciate to hear in the comments how you have set your first steps in Data Science with Python. Also, any valuable feedback to improve this blog or crash course is welcome. :)
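As a reference for after you have tried it yourself, here is a minimal sketch that ties the StatsModels fit and the Seaborn plot together. It assumes the df, pageviews_cumsum and is_conversion names from the preparation sketch above and is, again, not the reference solution from GitHub.

import statsmodels.api as sm
import seaborn as sns
import matplotlib.pyplot as plt

# Tip 1 in practice: add a constant, otherwise the model has no intercept
X = sm.add_constant(df['pageviews_cumsum'])
y = df['is_conversion']

result = sm.Logit(y, X).fit()
print(result.summary())  # prints a table like the Logit Regression Results above

# Predicted conversion probability as a function of cumulative pageviews
sns.regplot(x='pageviews_cumsum', y='is_conversion', data=df, logistic=True, y_jitter=0.02)
plt.xlabel('cumulative pageviews')
plt.ylabel('conversion probability')
plt.show()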
<![CDATA[Calculation of confidence intervals for ratios]]>https://www.themarketingtechnologist.co/calculation-of-confidence-intervals-for-ratios/59fe1539f7390837b438a613Thu, 25 Aug 2016 22:00:36 GMTCalculation of confidence intervals for ratios

The science of campaign optimization was among the very first things I was introduced to when I first came to work in programmatic media at Bannerconnect, and it was with great interest that I learned the particular steps our campaign managers regularly take to ensure campaigns run in the most effective way possible.

Statistically sound optimization

From the data scientist point of view, I was of course very interested in the statistics behind how a certain domain, creative type, or even time of day could be judged to be better than any of the others. When optimizing toward clicks, view times, landings, or final conversions, the answer to the above seems simple enough – there are many types of comparative analysis that one could do between different variable types, and indeed a confidence interval for this binomial type of data can be easily calculated, with the resulting confidence interval being largely dependent on the sample size present. The image below visualizes this: domain 2 may appear better than domain 1, but with overlapping confidence intervals, we cannot be sure that there is a real difference. (figure: two domains compared, with overlapping confidence intervals)

The difficulty with ratios

However, I quickly came across a more interesting problem: how do you calculate the confidence interval of a ratio of two populations, where there is an uncertainty in each population? For example, what is the confidence interval of my Cost per Action (CPA), given that the cost of an impression has a natural variation when taken over a large dataset (and can be presumed to be normally distributed), and the "action" also has an uncertainty related to its sample size, and should be binomially distributed? What is the best way to combine these two uncertainties in my measurements, to get a good estimate of how much faith I can put in my calculated CPA?

Previous work in health economics

A quick Google search revealed that not only was I not the only one struggling with this problem, but that it was actually more complicated than I had thought. My literature review led me to a different field entirely – that of epidemiology and health economics, where similar problems had been explored, ignored, or criticized, in turn. The problem comes about in health economics in the form of cost-effectiveness ratios – the calculation of how much a treatment costs, versus how likely it is to work. As noted in the paper of Polsky et al. (1), methods for calculating the confidence interval of these ratios are less well developed in their field, and as such are often left out of publications. Their work compares the effectiveness of four methods of calculating these intervals, judging them by performing a Monte Carlo experiment. Of the methods they explore, I'll mention here only the two which they found most accurate: Fieller's Method, and a Bootstrap method.

Fieller's Method

Fieller's method was first published in 1954, and provides an analytical method for calculating the confidence interval of a ratio, where each part of the ratio may come from a different distribution, i.e. have unrelated uncertainties (2). While this method is very successful in the health economics field, unfortunately for us it has some conditions which make it inaccurate for the calculation of our CPA, for example.
The main restriction that is violated here is that of normality: our “Action” data here will fail this requirement. Another difficulty to be aware of here is that the proportion of positive actions we have in forming our CPA will be extremely low – often less than 1%, which will lead to difficulties in the calculation – a problem that is uncommon in the health field. Happily, the second of the recommended methods they mention, Bootstrapping, leads to much more success. Bootstrapping, simply put, is a method for estimating an estimator (for example the standard deviation), using resampling with replacement of your data. In simple terms, what this means for our example is that we should use a large amount of input data, consisting of cost per impression, and whether the impression resulted in a conversion or not. We can then take a sample of some of these rows, to calculate the CPA. By calculating this thousands of times over different samples of our input data, the distribution of the CPA can be built up, from which we could, for example calculate a standard deviation. While this can sound fiddly, in practice there are bootstrapping routines built into many popular programmes – for example R, SciPy, and MatLab, to name a few. Armed with these methods, calculating confidence intervals for ratios with unrelated errors becomes much easier, and most importantly, campaign optimizations can be made on statistically sound information! (1) Polsky, Glick, Willke, and Schulman, “Confidence intervals for cost-effectiveness ratios: A comparison of four methods”, Health Economics, Vol 6: 243-252 (1997). (2) Fieller, “Some Problems in Interval Estimation”, Journal of the Royal Statistical Society, Series B (Methodological), Vol. 16, No. 2 (1954), 175-185. <![CDATA[Send Slack notifications whenever a Pokémon spawns nearby using a Pokémon GO SlackBot]]>https://www.themarketingtechnologist.co/pokemon-go-slack-notifications/59fe1539f7390837b438a60dSat, 23 Jul 2016 01:27:27 GMTSend Slack notifications whenever a Pokémon spawns nearby using a Pokémon GO SlackBot CURRENT STATUS (04/08/2016): Niantic seems to have made some big changes to the API last night, so currently all API scripts (including PoGoMap and bots) seem to be down. Feel free to share any fixes if you come across anything that could be of help. Meanwhile, I hope that the post is still a good read about how to use slack webhooks in Python. At the Greenhouse Group we've got quite some colleagues playing Pokémon GO. No wonder we were pretty excited when this reddit post appeared on my Google Now. It let's you run a Google Maps app locally which is filled with Pokémon that are actually visible right now in the game, by pinging the API used by the game itself. Many credits to waishda and contributors! However, I figured it's not that convenient to continuously look on a map to check whether you're not missing any Pokémon. Just imagine that you just needed that last one and you find out just to late it has been sitting right there on your desk while you were working or in a meeting! So I decided to clone the repository and simply hook-up to an incoming webhook on Slack, so all the Pokéfans of the Greenhouse Group get notified whenever a new Pokémon spawns within our office. This is the result: If somebody is interested in the Pokémon, he/she can simply click the distance in the post, which is linking to the exact location of the Pokémon on PokéVision, which is a website hosted version of waishda repository. 
If you want to do the same for your colleagues, or you want Slack notifications when Pokémon spawn in your home, follow these steps: 1. If you like living on the edge, you can use your private (google) account credentials, but I'd suggest making a new Google or Pokémon Club account. 2. Obtain /services/your/slack_webhook/urlpath for the webhook in the channel you want to post the notifications by creating a new webhook in Slack. 3. Download zip (and unzip) or git clone my repository (which is just a tweaked clone of waishda's repos). 4. Move into the directory, e.g. cd ~/Downloads/PokemonGo-SlackBot 5. Execute in command line sudo pip install -r requirements.txt. 6. Run python pokeslack.py -u <your_user_name> -p <your_password> -l "<Your location>" -st 1 -r <the range you want in meters> -sw "</services/your/slack_webhook/urlpath>". If you use a Google account for authentication, you should add -a google to the command. For -l "<Your location>" you can input any Google Maps search query, so just try on Google Maps whether your query gives a marker you like. If you want to ping around in a wider range than your exact location (more or less 250 meters radius), set the step argument -st 1 to a higher number like -st 10. If your want to make the map available to your colleagues within your network, add -H <server ip address> -P <port>. 7. After a while you should start receiving Slack messages every time a Pokémon spawns. 8. You can also see the original Pokemon-Map when browsing to http://localhost:5000. Note that you'll see more Pokémon on the map, since only those in the specified range will send a notification. The user_icons will be a :pokeball: by default, however if you upload Pokémon emojis to Slack, you can use those for each specific Pokémon. If you use the smileys without prefix, run python pokeslack.py -u <your_user_name> -p <your_password> -l "<Your location>" -st 1 -r <the range you want in meters> -sw "</services/your/slack_webhook/urlpath>" -pi ':', and if you do use the prefix python pokeslack.py -u <your_user_name> -p <your_password> -l "<Your location>" -st 1 -r <the range you want in meters> -sw "</services/your/slack_webhook/urlpath>" -pi ':pokemon-'. You can also apply filters by adding -i to ignore or -o for only, like: python pokeslack.py -u <your_user_name> -p <your_password> -l "<Your location>" -st 1 -r <the range you want in meters> -sw "</services/your/slack_webhook/urlpath>" -pi ':' -i "zubat, rattata, pidgey, spearow" Slack webhooks are easy and fun and can be pretty useful for sending any notification. For example, what about monitoring your data pipelines, sending notification about the progress of your data crunching jobs or even automatically send results to your colleagues? 
So if you're interested in how to do this, I'll explain a bit about the code: The data is simply send to Slack using a POST request: import httplib import urllib def send_to_slack(text, username, icon_emoji, webhook): data = urllib.urlencode({'payload': '{"username": "' + username + '", ' '"icon_emoji": "' + icon_emoji + '", ' '"text": "' + text + '"}' h = httplib.HTTPSConnection('hooks.slack.com') h.request('POST', webhook, data, headers) r = h.getresponse() The content of the notification is generated in this way: spotted_pokemon = {} if poke.SpawnPointId in spotted_pokemon.keys(): if spotted_pokemon[poke.SpawnPointId]['disappear_datetime'] > datetime.now(): if poke.TimeTillHiddenMs < 0: disappear_datetime = datetime.fromtimestamp(disappear_timestamp) distance = lonlat_to_meters(origin_lat, origin_lon, poke.Latitude, poke.Longitude) if distance < max_distance: time_till_disappears = disappear_datetime - datetime.now() disappear_hours, disappear_remainder = divmod(time_till_disappears.seconds, 3600) disappear_minutes, disappear_seconds = divmod(disappear_remainder, 60) disappear_minutes = str(disappear_minutes) disappear_seconds = str(disappear_seconds) if len(disappear_seconds) == 1: disappear_seconds = str(0) + disappear_seconds disappear_time = disappear_datetime.strftime("%H:%M:%S") alert_text = 'I\'m just <https://pokevision.com/#/@' + str(poke.Latitude) + \ ',' + str(poke.Longitude) + \ '|' + "{0:.2f}".format(distance) + \ ' m> away until ' + disappear_time + \ ' (' + disappear_minutes + ':' + disappear_seconds + ')!' if pokemon_icons_prefix != ':pokeball:': user_icon = pokemon_icons_prefix + pokename.lower() + ':' user_icon = ':pokeball:' send_to_slack(alert_text, pokename, user_icon, slack_webhook_urlpath) spotted_pokemon[poke.SpawnPointId] = {'disappear_datetime': disappear_datetime, 'pokename': pokename} Note the 'spotted_pokemon' dict. If you don't check whether a Pokémon is actually newly spawn, you'll generate a lot of spam and probably not so happy colleagues. Also, there is a little bit of math in calculating lon-lat into meters distance using haversine: from math import radians, cos, sin, asin, sqrt def lonlat_to_meters(lat1, lon1, lat2, lon2): Calculate the great circle distance between two points on the earth (specified in decimal degrees) # convert decimal degrees to radians lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2]) # haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) # earth radius in meters: 6378100 m = 6378100 * c return m For more code, please checkout the git repository! Update #1: Japanese language support Thanks to several git pull requests from the community, the code is now also working in Japanese. If it happens that the Pokémon names in your language are inconsistent with the names of your emojis in slack, try uploading this list of emojis by pokemon number, using emojipacks, and replace this piece of code: if pokemon_icons_prefix != ':pokeball:': user_icon = pokemon_icons_prefix + pokename.lower() + ':' user_icon = ':pokeball:' with this: if pokemon_icons_prefix != ':pokeball:': user_icon = pokemon_icons_prefix + 'pokemon-' + pokeid + ':' user_icon = ':pokeball:' Update #2: Homey support by Bart Persoons After using the Pokemon Slack notifications at work I was wondering if it is possible to send this information to my Homey at home. Homey is my connected home hub which controls my lights for example. 
It also has a speech function, so how cool would it be if Homey tells me when there is a new Pokémon nearby! To send the data to Homey I used Homey’s “Webhook Manager” app. With this app it is possible to load an event with 3 data fields. After making some small changes to the Pokeslack script I was able to send the following information to Homey: {'event': 'pokemon', 'data1': pokename, 'data2': distance} Within Homey it is now possible to create a flow which triggers on the event “pokemon” and it can use the data parameters as variables. Watch the result in this video! The Homey token should be added as an argument, so to run the script you can use: python pokehomey.py -u <your_user_name> <your_password> -l "<Your location>" -st 1 -r <the range you want in meters> -ht "<homey webhook manager token>" Update #3: Languages, stability and lured Pokémon! Several updates: Message are now available in French and German thanks to Vincent. They can be set by setting the locale. The locale that needs to be used for the emojis can be set separately with -iL, default is English. As we also see in the app and at Pokevision, server quite often seem to have troubles. Therefore, the script now has a reconnect features instead of stopping the script in cases of issues. From experience, it's advisable to use a Google account, since it's more stable than PTC. And last but not least, lured Pokémon! Pokevision doesn't show these (yet), so many thanks go to the tip by Daniel. The Pokémon will have (lured) after ttheir names in the Slack messages. If you also want to show the lured Pokéstops on the map, use -dp -ol. If you want to see all Pokéstops on the map, just use -dp without -ol, and for gyms add -dg. <![CDATA[Upload your local Spark script to an AWS EMR cluster using a simple Python script]]>https://www.themarketingtechnologist.co/upload-your-local-spark-script-to-an-aws-emr-cluster-using-a-simply-python-script/59fe1539f7390837b438a659Mon, 25 Apr 2016 17:20:53 GMTUpload your local Spark script to an AWS EMR cluster using a simple Python script Apache Spark is definitely one of the hottest topics in the Data Science community at the moment. Last month when we visited PyData Amsterdam 2016 we witnessed a great example of Spark's immense popularity. The speakers at PyData talking about Spark had the largest crowds after all. Sometimes we see that these popular topics are slowly transforming in buzzwords that are abused for generating publicity, e.g. words as data scientist and deep learning but also Hadoop and DMP. I don't hope that Spark will suffer the same fate as it is definitely a powerful tool for data scientists. In the field of distributed computing Spark provides much more flexibility than MapReduce. Additionally, Spark uses memory more efficiently and therefore writes less data to disk than MapReduce, making Spark on average around 10 to 100 times faster. In this article we introduce a method to upload our local Spark applications to an Amazon Web Services (AWS) cluster in a programmatic manner using a simple Python script. The benefit of doing this programmatically compared to interactively is that it is easier to schedule a Python script to run daily. Additionally, it also saves us time. Time we can spend better by drinking more coffee and thinking of new ideas! The challenge For one of our Data Science applications we recently decided to create a new part of the data pipeline with PySpark (Spark in Python). 
For now, I am not going to elaborate on how to build your own Spark applications as there are already plenty of tutorials on how to do so on the world wide web. As usual we started by creating the Spark application using only a subset of the full dataset. This subset is usually small enough to test the Spark application locally on our laptops. Then, after creating a locally working Spark application, we scale the application up using an AWS Elastic Map Reduce (EMR) cluster to process the full dataset. However, this is where we ran into some inconvenient issues. The original MapReduce data pipeline was also built in Python using the MRjob module. MRjob takes away the trouble of uploading your local code to an AWS cluster by using its built-in functions. However, MRJob does not support Spark applications (yet?) and therefore we have to get our own hands dirty this time... The interactive method using the AWS CLI Using the awscli module we can quickly spin up an AWS EMR cluster with Spark pre-installed using the commandline. aws emr create-cluster \ --name "Spark Example" \ --release-label emr-4.4.0 \ --applications Name=Hadoop Name=Spark --ec2-attributes KeyName=keypair\ --instance-groups Name=EmrMaster,InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m3.xlarge,BidPrice=0.05 \ Name=EmrCore,InstanceGroupType=CORE,InstanceCount=2,InstanceType=m3.xlarge,BidPrice=0.05 \ We need to place our code onto the cluster because we do not want the run the SparkContext on our local computer due to increased latency and availability. Therefore, we SSH into the cluster. On the cluster we create a Python file, e.g. run.py, and copy/paste the code for the Spark application. aws emr ssh --cluster-id j-XXXX --key-pair-file keypair.pem sudo nano run.py -- copy/paste local code to cluster We logout of the cluster and add a new step to the EMR cluster to start our Spark application via spark-submit. aws emr add-steps \ --cluster-id j-XXXXX \ --steps Type=CUSTOM_JAR,Name="Spark Program",Jar="command-runner.jar",ActionOnFailure=CONTINUE,Args=["spark-submit",home/hadoop/run.py] Note that Amazons EMR clusters have access to S3 buckets (if the IAM roles are configured properly though). Therefore, we do not need to add other steps to copy our data back and forth between S3 and the cluster. We can just specify the proper S3 bucket in our Spark application by using for example data = ("s3://input_bucket/*") data = saveAsTextFile("s3://output_bucket/") Unfortunately, this S3 connection only works within our Spark application. We cannot run a Spark Python script hosted on S3 by spark-submit s3://bucket/spark_code.py... This is still too much work... Using the AWS command-line interface and the above commands we can interactively move our local Spark application to an AWS cluster. However, this interactive method is not easy to schedule daily as it requires some manual steps. Especially SSHing into the cluster and copy-pasting our local code to the cluster itself is tricky. It also does not fit well in our current Python data pipeline. Therefore, we prefer a more programmatic method in Python. This Python script can then be easily scheduled to run daily/weekly/monthly. The final solution Therefore we developed a simple Python script to execute all the necessary steps. The biggest challenge was how to 'copy/paste' our local code onto the cluster without using SSH? The solution for this problem turned out to be relatively easy. 
That is, we compress our local Spark script in a single file, upload this file to a temporary S3 bucket and add a Bootstrap action to the cluster that downloads and decompresses this file. Hence, the final solution consists of the following steps executed in Python using Boto3 (an AWS SDK for Python): • Define a S3 bucket to store our files temporarily and check if it exists def temp_bucket_exists(self, s3): except botocore.exceptions.ClientError as e: # If a client error is thrown, then check that it was a 404 error. # If it was a 404 error, then the bucket does not exist. error_code = int(e.response['Error']['Code']) if error_code == 404: terminate("Bucket for temporary files does not exist") terminate("Error while connecting to Bucket") return true • Compress the Python files of the Spark application to a .tar file. def tar_python_script(self): # Create tar.gz file t_file = tarfile.open("files/script.tar.gz", 'w:gz') # Add Spark script path to tar.gz file files = os.listdir(self.path_script) for f in files: t_file.add(self.path_script + f, arcname=f) • Upload the tar file to the S3 bucket for temporary files. def upload_temp_files(self, s3): # Shell file: setup (download S3 files to local machine) s3.Object(self.s3_bucket_temp_files, self.job_name + '/setup.sh').put( Body=open('files/setup.sh', 'rb'), ContentType='text/x-sh' # Shell file: Terminate idle cluster s3.Object(self.s3_bucket_temp_files, self.job_name + '/terminate_idle_cluster.sh').put( Body=open('files/terminate_idle_cluster.sh', 'rb'), ContentType='text/x-sh' # Compressed Python script files (tar.gz) s3.Object(self.s3_bucket_temp_files, self.job_name + '/script.tar.gz').put( Body=open('files/script.tar.gz', 'rb'), ContentType='application/x-tar' • Spin up an AWS EMR cluster with Hadoop and Spark as application plus two bootstrap actions. One bootstrap action is a shell script which downloads the tar file from our temporary files S3 bucket and decompresses the tar file on the remote cluster. The other bootstrap action ensures that the cluster is terminated after an hour of inactivity to prevent high unexpected AWS charges. # Parse arguments # Download compressed script tar file from S3 aws s3 cp $s3_bucket_script/home/hadoop/script.tar.gz # Untar file tar zxvf "/home/hadoop/script.tar.gz" -C /home/hadoop/ # Install requirements for additional Python modules (uncomment if needed) # sudo python2.7 -m pip install pandas def start_spark_cluster(self, c): response = c.run_job_flow( 'InstanceGroups': [ {'Name': 'EmrMaster', 'Market': 'SPOT', 'InstanceRole': 'MASTER', 'BidPrice': '0.05', 'InstanceType': 'm3.xlarge', 'InstanceCount': 1}, {'Name': 'EmrCore', 'Market': 'SPOT', 'InstanceRole': 'CORE', 'BidPrice': '0.05', 'InstanceType': 'm3.xlarge', 'InstanceCount': 2} 'Ec2KeyName': self.ec2_key_name, 'KeepJobFlowAliveWhenNoSteps': False Applications=[{'Name': 'Hadoop'}, {'Name': 'Spark'}], {'Name': 'setup', 'ScriptBootstrapAction': { 'Path': 's3n://{}/{}/setup.sh'.format(self.s3_bucket_temp_files, self.job_name), 'Args': ['s3://{}/{}'.format(self.s3_bucket_temp_files, self.job_name)]}}, {'Name': 'idle timeout', 'ScriptBootstrapAction': { 'Path': 's3n://{}/{}/terminate_idle_cluster.sh'.format(self.s3_bucket_temp_files, self.job_name), 'Args': ['3600', '300'] • Add a step to the EMR cluster to run the Spark application using spark-submit. 
def step_spark_submit(self, c, arguments): response = c.add_job_flow_steps( 'Name': 'Spark Application', 'ActionOnFailure': 'CANCEL_AND_WAIT', 'HadoopJarStep': { 'Jar': 'command-runner.jar', 'Args': ["spark-submit", "/home/hadoop/run.py", arguments] • Describe status of cluster until all steps are finished and cluster is terminated. def describe_status_until_terminated(self, c): stop = False while stop is False: description = c.describe_cluster(ClusterId=self.job_flow_id) if state == 'TERMINATED' or state == 'TERMINATED_WITH_ERRORS': stop = True • Remove the temporary files from the S3 bucket when the cluster is terminated. def remove_temp_files(self, s3): bucket = s3.Bucket(self.s3_bucket_temp_files) for key in bucket.objects.all(): if key.key.startswith(self.job_name) is True: • Grab a beer and start analyzing the output data of your Spark application. Final notes An example code of the full Python code can be found on GitHub. Note that my expertise is not building high-performance data pipelines and that the above code therefore probably could be improved in several ways. My interest comes from quickly (read: lazily) deploying and scaling up our models to provide new and better insights for our clients. If you have any tips to make this easier, please leave them in the comments. :) Tip: Note that in this example we defined the cluster to terminate after all steps are completed. However, when developing a Spark application we often want the cluster to wait for more steps. This however introduces the risk of high unexpected AWS charges if we forget to terminate the cluster. Therefore, we always add terminate_idle_cluster.sh as a bootstrap action when starting the cluster. This small script is developed by MRjob and terminates a cluster after a specified period of inactivity, better to be safe than sorry! <![CDATA[A recommendation system for blogs: Content-based similarity (part 2)]]>https://www.themarketingtechnologist.co/a-recommendation-system-for-blogs-content-based-similarity-part-2/59fe1539f7390837b438a5f2Thu, 11 Feb 2016 16:47:58 GMTA recommendation system for blogs: Content-based similarity (part 2) In this second post in a series of posts about a content recommendation system for The Marketing Technologist (TMT) website we are going to elaborate on the concept of content-based recommendation systems. In the first post we described the benefits of recommendation systems and we roughly divided them in two different types of recommenders: content-based and collaborative filtering. The first post also described the prerequisites in order to set-up both types of recommenders. If you haven’t read this first post yet, it is recommended to do this first before you continue. In this article we take our first steps in content-based recommendation systems by describing a quantified approach to express the similarity of articles. The final code of this article can be found on my Github. The concept behind content-based recommendation The goal is to provide our readers with recommendations for other TMT articles. We assume that a reader who has fully read an article liked that article and wants to read more similar articles. Therefore we want to build a content-based recommender that is going to recommend new similar articles based on the users historical reading behaviour. To achieve accurate and useful recommendations we want to use a mathematical and quantified approach to find the best possible recommendations. 
Otherwise we are going to lose the interest of our reader and he will leave TMT. Therefore we want to find articles that are similar to each other and thus lie “close to each other”. That is, for which the “distance” between the articles is small, where a smaller distance implies a higher similarity. Example of visualizing TMT articles in a 2-dimensional space So, how does this distance concept work? Assume that we can plot any TMT article in a two-dimensional space. The figure above provides an example of 76 TMT articles plotted in a 2-dimensional space. Furthermore we assume that the closer two points lie to each other the more similar they are. Therefore, if a user is reading an article, other articles that lie close to this point in the 2D space can be seen as a good recommendation as they are similar. How close points lie to each other can be calculated using the Euclidean distance formula. In a 2-dimensional space this distance formula simply comes down to the Pythagorean theorem. Note that this distance formula also works for higher dimensions, for example in a 100-dimensional space (although we cannot visualize this when we have more than 3 dimensions). Euclidean distance formula A different distance formula to measure similarity of two points is cosine similarity. The cosine similarity function uses the difference in the direction that two articles go, i.e. the difference in angle between two article directions. Imagine that an article can be assigned a direction to which it tends. For example, in a 2-dimensional case one article goes North and the other article goes West. The difference in directions is then -90 degrees. This difference in angle is normalized to the interval [-1, 1], where 1 implies the same direction and thus perfect similarity and -1 the complete opposite direction and thus no similarity. (Cosine similarity; Image from Dataconomy.com) We use the cosine similarity metric for measuring the similarity of TMT articles as the direction of articles is more important than the exact distance between them. Additionally, we tried both metrics (Euclidean distance and cosine similarity) and the cosine similarity metric simply performs better. :) Two new challenges Above we described the concept of similarity between articles using a quantified approach. However, now two new challenges arise: • How can we plot each post in a 2-dimensional space? • How do we plot these posts such that the distance between the points gives an indication about the similarity of the articles? An article can be plotted in a 2-dimensional space by assigning it coordinates, i.e. an x and y coordinate. This means we first need to translate our articles to a numeric format and then reduce it to two values, i.e. the x and y coordinate. Therefore, we are first going to elaborate on a scientific approach to quantify the text in the TMT articles by applying feature extraction. Note that the feature extraction method we discuss below is specifically designed for dealing with text in TMT articles. You can imagine that if you're building a content-based recommender for telephones you probably need a different method to translate the properties (content) of telephones to a numerical format. Converting TMT articles to a numeric format The TMT articles, consisting of large phrases of words and punctuation, need to be translated to numerical vectors without losing the content of the article in the process. Preferably we also want vectors of a fixed size. Why do we want a fixed size? 
Recall the 2-dimensional example above. It would be strange to compare a point in a 2 dimensional space to a point in a 100-dimensional space right? Additionally, if we want to say something about the similarity between articles we also need to express them in a similar manner. To obtain vectors of a fixed size we are going to create features, e.g. measurable article properties. In text analysis a feature often refers to words, phrases, numbers or symbols. All articles can then be measured and expressed in terms of the same set of features, resulting in fixed-size numerical vectors for all articles. The whole process of converting text to numerical vectors is called feature extraction and is often done in three steps: tokenization, counting and weighting. Example of vectorizing text Step 1: tokenization The first step in obtaining numerical features is tokenization. In text analysis, tokenization is described as obtaining meaningful basic units from large samples of text. For example, in physics speed can be expressed in meters per second. In text analysis, large strings of text can be expressed in tokens. These tokens often correspond to words. Therefore, a simple tokenization method to obtain tokens for the sentence I am happy, because the sun shines is by splitting them on whitespaces. This splitting results in seven tokens, i.e. I, am, happy,, because, the, sun, shines. After tokenization it is possible to express the original sentence in terms of these tokens. This simple tokenization method however provides several new problems: • For one, this method does not filter out any punctuation and thus the token happy, contains a comma at the end. This implies that the token happy, and happy are two different tokens, although both tokens imply the same word. Therefore, we filter out all types of punctuation, because punctuation is almost never relevant for the meaning of a word in our articles. Note that punctuation can be relevant in other situations. For example, when analyzing Twitter messages punctuation can be important as they are often used to create smiley's which express a lot of sentiment. The smiley example emphasizes the fact that every data source needs its own feature extraction method. • Second, using the simple tokenization method it is possible to obtain the tokens works and working. However, these tokens are just different forms of the same word, i.e. to work. The same argument holds for tokens where one is the plural form of the other. For our content-based recommendation system, we assume that both forms of these words imply the same word. Therefore, the tokens can be reduced to their stem and used as a single token. To do this, a stemming algorithm that reduces every word to its stem is required. Note that such a stemming algorithm is language specific. Luckily there are several freely available packages such as the NLTK library that can do this for us. • A third problem that typically occurs in text analytics is how to deal with combinations or negations of words. For example, just using the individual tokens Google and Analytics may not always imply that we are talking about the product Google Analytics. Therefore, we also create tokens of two or three consecutive words, called respectively bi-grams and tri-grams. The sentence I like big data then translates to the tokens I, like, big, data, I like, like big, big data, I like big and like big data. Note that this tokenization method does not take into account the position and the order of the words. 
For example, after tokenization it cannot be said at what position in the original sentence or article the token big occurred. Also, the token itself does not mention anything about the words in front or after it. Therefore, we lose some information about the original sentence during tokenization. The art is to capture as much information about the original sentence while retaining a workable set of tokens. In our tokenization method we lose information about the structure of the sentences. There are other tokenization methods which take the position and order of words into account as well. For example, Part-Of-Speech (POS) tagging also adds additional information such as the word-class of a token, e.g. whether a token occurs as a verb, adjective, noun or direct object. However, we assume that POS tagging does not greatly increase the performance of our recommender because the order of words within sentences is not of great importance for making recommendations. Step 2: token frequency counts In the second step, the frequency of each token in each article is counted. These frequencies are used in the next step for assigning weights to tokens in articles. Additionally these counts are used to later on perform a basic feature selection, i.e. to reduce the number of features. Note that a typical property of text analysis is that the majority of the tokens are only used in a couple of articles. Therefore, the frequency of most tokens in an article is zero. Step 3: token weights In the last step the tokens and token frequency counts from the previous steps are used to convert all articles to a numerical format. This is done by encoding each article to a numeric vector whose elements represent the tokens from step 1. Moreover, a token weighting procedure is applied using the frequency counts from step 2. After a token is weighted, it is not any more referred to as a token but as a feature. Hence, a feature represents a token and the value of a feature for an article is a weight assigned by a weighting method. There are several possible feature weighting methods: • The most basic weighting method is Feature Frequency (FF). FF simply uses the frequency of a token in an article as the weight for a token. For example, given the token set {mad, happy}, the sentence I am not mad, but happy, very happy is weighted as the vector [1 2]. • Feature Presence (FP) is a similar basic weighting method. In FP the weight of a token is simply given by a binary variable which is 1 if the token occurs in an article and 0 otherwise. The sentence from the previous example would be represented as the vector [1 1] when using FP, because both tokens are present in this sentence. An additional advantage of FP as weighting method is that a binary dataset is obtained, which does not suffer scaling problems. The latter can occur in algorithms when for example calculating complicated values such as eigenvalues or Hessian. • A more complex feature weighting procedure is the Term Frequency and Inverse Document Frequency' (TF-IDF) weighting method. This method uses two scores, i.e. the term frequency score and the inverse document frequency score. The term frequency score is calculated by taking the frequency of a token in an article. The inverse document frequency score is calculated by the logarithm of dividing the total number of articles by the number of articles in which the token occurs. 
When multiplying these two scores, a value is obtained that is high for features that occur frequently in a small number of articles, and is low for features that occur often in many articles. For our content-based recommendation system we are going to use the FP weighting method because it is fast and does not perform worse than the other weighting methods. Additionally, it results in a sparse matrix which has additional computational benefits. Reducing the dimensionality After applying the above feature recommendation method we are left with a list of features with which we can numerically express each TMT article. We could calculate the similarity between each article in this very high dimensional space but we prefer not to. Features that barely express similarity between articles, such as is and a, can be removed from the feature set to significantly reduce the number of features and to improve the quality of the recommendations. Additionally, a smaller dataset improves the speed of the recommendation system. Document Frequency (DF) selection is the simplest feature selection method to reduce dimensionality and is a must for many text analysis problems. We first remove the most common English stop words, e.g. the words the, a, is, et cetera, which do not give much information about the similarity between articles. After that, all features with a very high and very low document frequency are removed from the data set as these features are also not likely to help in differentiating articles. Recall that at the beginning of this article we visualized the articles in a 2-dimensional space. However, after applying DF we are still in a very high dimensional space. Therefore we are going to apply an algorithm that reduces the high dimensional space to the 2-dimensional space in which we can neatly visualize the articles. Moreover, it is much easier to understand how the principle of recommendation systems work in a 2-dimensional space as the distance concept then intuitively works well. The algorithm that we use to bring us back to the 2-dimensional space is Singular Value Decomposition (SVD). A different algorithm one can use is Principal Component Analysis (PCA). We do not extensively explain in this article how these algorithms work. In short though, the essence of both is to find the most meaningful basis with which we can reconstruct the original dataset and capture as much of the original variance as possible. Fortunately, scikit-learn already has a built-in version of SVD and PCA which we can therefore easily use. There are more methods to reduce the dimensionality such as the Information Gain and Chi Square criterion or Random Mapping but for sake of simplicity we stick to the DF feature selection method and PCA dimensionality reduction method. An example of SVD for dimensionality reduction on the Iris dataset. SVD is applied to reduce the dimensionality from 3D to 2D without losing much information. Making recommendations After we have applied all of the above we reduced all TMT articles to coordinates in a 2-dimensional space. For any article we can now calculate the distance between the two coordinates. The only thing we still need is a function that given the current article as input returns a fixed number of TMT articles that have the lowest distance to this article! Using the Euclidean distance formula this function is trivial to write. Let's run some scenarios to test our content-based recommender. 
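For the curious, here is a minimal sketch of such a lookup function. The variable names are hypothetical: it assumes the dimensionality-reduction step produced a NumPy array coordinates with one row of coordinates per article, plus a list titles aligned with those rows.
import numpy as np

def recommend(article_index, coordinates, titles, n=5):
    # Euclidean distance from the current article to every article
    distances = np.linalg.norm(coordinates - coordinates[article_index], axis=1)
    # Sort by distance, drop the article itself and return the n closest titles
    order = np.argsort(distances)
    closest = [i for i in order if i != article_index][:n]
    return [titles[i] for i in closest]
Now, on to the scenarios. 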
Suppose we are a user who just finished reading the article Caching $http requests in AngularJS. Our content-based recommender system provides the following TMT article suggestion for follow-up: Angular 2: Where have my factories, services, constants and values gone?. Sounds reasonable, right? The table below provides the results of more scenarios. Current article Recommendation Caching $http requests in AngularJS Angular 2: Where have my factories, services, constants and values gone? Track content performance using Google Analytics Enhanced Ecommerce report How article size helps you understand your content performance Data collection and strange values in CSV format Calculating ad stocks in a fast and readable way in Python How npm 3 solves WebStorm's performance issues Webstorm 10 improves the performance of file indexing Final remarks We would like to conclude with a few remarks about our first steps in content-based recommendation systems: • In our final recommendation system we used SVD to reduce the dimensionality to 30 features instead of 2. This was done because too much information about the features was lost when we reduced it to a 2-dimensional space. Therefore, the similarity metric of articles was also less reliable. We only used the 2-dimensional space for visualization purposes. • We also applied feature extraction on the title and tags of the TMT articles. This drastically improved the quality of the recommendations. • The parameters in our recommendation system were chosen intuitively and are not optimized. This is something for a future article! We hope you learned a few things from this article! Moreover, if you liked this article we recommend to continue reading on TMT with Optimizing media spends using S-response curves... a shameless attempt to improve my author value on TMT. ;-) <![CDATA[A recommendation system for blogs: Setting up the prerequisites (part 1)]]> The goal of data science is typically described as creating value from Big Data. However, data science should also meet a second goal, that is, avoiding an information overload. One particular type of projects that really meet these two goals are recommendation engines. Online stores such as Amazon but also https://www.themarketingtechnologist.co/building-a-recommendation-engine-for-geek-setting-up-the-prerequisites-13/59fe1539f7390837b438a5e2Thu, 19 Nov 2015 20:47:53 GMTA recommendation system for blogs: Setting up the prerequisites (part 1) The goal of data science is typically described as creating value from Big Data. However, data science should also meet a second goal, that is, avoiding an information overload. One particular type of projects that really meet these two goals are recommendation engines. Online stores such as Amazon but also streaming services such as Netflix suffer from information overload. Customers can easily get lost in their large variety (millions) of products or movies. Recommendation engines help users narrow down the large variety by presenting possible options. Of course, these recommenders can randomly present options to users but this does not really decrease the information overload. Therefore these recommenders apply statistics and science to present ‘better’ solutions which are more likely to meet the expectations of the user. For example, a Netflix user who watched the movie Frozen gets similar children movies from Pixar as a recommendation to watch. 
Example of Netflix's recommendation system In a series of three blog posts we will elaborate on how we can build a recommendation engine for our readers on The Marketing Technologist (TMT). TMT currently has over fifty blog posts covering varying topics from Data Science to coding in ReactJS. Browsing through all the blog posts is time consuming, especially as the number of posts is still increasing. Also chances are readers are only interested in a select few blog posts that lie in their area of interest. If a recommendation engine is able to select those articles an user is interested in then this can definitely be classified as creating value from data and preventing information overload. Two types of recommendation systems Roughly speaking we can divide recommendation engines in two different types: collaborative filtering and content-based. As Wikipedia states “collaborative filtering is the process of filtering for information or patterns using techniques involving collaboration among multiple agents, viewpoints, data sources”. In our TMT case, this implies finding patterns among multiple readers. If several readers are interested in a particular set of articles it is very likely that a reader who starts reading one of these articles is also interested in the other articles from this set. Therefore, based upon the reading behavior of other users, suggestions are made to similar users. Content-based recommendation engines are different as they base their recommendations on the properties of the product. In our case the products are TMT blog posts and the properties are the words within these posts. If a user is reading an article containing the words 'Google Analytics' and 'Tag Manager', chances are that this user also likes reading other articles containing these words. Therefore, a content-based recommendation engine will recommend articles containing these words. Note that since the recent change from Geek to The Marketing Technologist a very simple content-based filtering approach is integrated in TMT. That is, below each article five other related articles are shown to the user as suggestion for continued reading. The suggested articles are the five most recently published articles that contain any of the tags of the article the user is currently reading. In this simple example the tags of the articles can be seen as the properties of the product. The principle behind collaborative and content-based filtering (Icons made by Freepik from www.flaticon.com) Both systems have their pros and their cons. Content-based recommendation systems are limited in their possibilities as the recommended articles will be close to the article on which a set of recommendations is based. For example, a post about a specific feature in 'Google Analytics' will give recommendations based upon similar words in other articles. However, an article about a specific feature in Snowplow, which is a similar analytics tool, is less likely to be recommended. Chances are though that users are interested in both posts as they both cover the theme analytics. Therefore, content-based recommendation systems are not good at finding hidden patterns. Collaborative filtering outperforms content-based recommendation systems for discovering hidden patterns. Collaborative filtering looks at the reading behaviour of users and not specificly at the content of these articles. 
So if users reading blog posts about data science are also reading posts about conversion rate optimization (CRO), even when the content of the CRO articles is very different, collaborative filtering will recommend data science readers also CRO articles. The big con of collaborative filtering is that it needs a lot of historical user reading behaviour data in order to find these patterns. Content-based recommendation can be done with none to few historical data and are therefore easier to implement. Prerequisites for collaborative filtering In the next two blog posts we are going to implement content-based and collorative filtering and analyse the results. However, in order to be able to do so we first need to set-up some prerequisites. For collaborative filtering this implies implementing a method with which we can measure the articles a user has read. Our colleague Erik Driessen has implemented a method to track user reading behaviour in Google Analytics by using the client ID. Simo Ahava explains in his blog post Storing client ID in Google Analytics how this can be done in an effective manner. Additionaly, because Erik has implemented Enhanced Ecommerce for content, we can track whether a user has fully read an article. Finally, in Google Analytics a custom report can be created which shows the client id and the posts the users has read. Note that for now no cross-device solution is implemented yet. Therefore, if a user continues reading articles on a different device or removes his cookies, this behaviour cannot be connected to his earlier reading behaviour. Example of a custom report in Google Analytics which shows reading behaviour on a user level Prerequisites for content-based For content-based recommendations we are obviously going to need the content of all TMT articles. There are multiple methods to do so. One of these would be to extract the text of the articles directly from the database. However, as we are Geeks, it is more fun to create a Python script that automatically retrieves the articles and corresponding metrics such as author and category. Therefore we created a Python script that scrapes the articles from the TMT website in two steps. The code for step 1 and step 2 can be found here. Step 1: Create a list of all TMT articles. In Python the source of webpages can be loaded with the library urllib2. Using the command urllib2.urlopen("http://www.themarketingtechnologist.co") we can therefore load the source of the frontpage of our very own TMT blog. This frontpage always shows the ten most recent posts. Using the BeautifulSoup library we can then easily search through the DOM and extract all article elements with class="post" and store them in a Pandas dataframe. Additionally, within each of these elements we can search for author name and tags by searching for the corresponding elements in the DOM. Because only the ten most recent blog posts are shown on the frontpage we also need to check whether there is an Older posts > button on the bottom of the page for more posts. Again, this can be done by searching for the proper DOM element, i.e. an a element with class="older-posts". From the older posts link the URL to the next page can be extracted by using the get function to extract the value from the href attribute. We repeat the above process for each of the pages. In the end, we have a dataframe with the names, tags and author of all articles + a link to the article content. Step 2: Retrieving the content of each article. 
In step 1 we stored a direct link to each article so we can download the full content of each article. There is only one particular problem, i.e. the content of each blog post is loaded via JavaScript. Therefore, if we use urllib2 to load the static source of the article we don't get the content of the article. In order to execute JavaScript to load the content of the articles we actually need to render the posts by opening it in a web browser. Luckily this can be done using the popular Selenium library. Using a few lines of Python code in combination with Selenium a Firefox browser can be opened, directed to the proper URL and render the page. The DOM can then be searched for the information we want, e.g. the content of a blog post. Note that because all JavaScript is executed using Selenium this also implies that Google Analytics is executed. Therefore, it is wise to take measures to prevent data pollution. For example, by adding your IP address to the list of filters in GA or by installing the Google Analytics Opt-out Addon by Google. Also note that we do not actually need to visually render each page in a Firefox webbrowser. You can also use a headless driver such as PhantomJS which renders the page in the background without the visual overhead. That's it for now with respect to setting up the prerequisites. For the next months we collect reading behaviour on a user level for our collaborative filtering model. Therefore, in the next blog post we start by creating a content-based recommendation system and analyse its results. <![CDATA[Is there time for coffee? Your execution time is ticking in Python!]]>https://www.themarketingtechnologist.co/progress-timer-in-python/59fe1539f7390837b438a60cMon, 16 Nov 2015 18:11:15 GMTIs there time for coffee? Your execution time is ticking in Python! Last month I was working on a machine learning project. If you make use of grid search to find the optimum parameters, it is nice to know how much time an iterating process costs, so I do not waste my time. In this blog you’ll learn how to: • Install the progress bar library in Windows • The disadvantage of the progress bar library • Write your own progress timer class in Python • Use three lines of code to time your code progress Install the progress bar library in Windows Data Science life can be easy sometimes. If you make use of Anaconda (I strongly recommend) then it is the following line of code in de command line: conda install progressbar Otherwise you could you do the trick by: pip install progressbar The disadvantage of the progress bar library The progress bar library is really nice, but the progress bar library has metrics that are easy to forget. Documentation can be found here. Each time I would like to time my code progress, I simply like to initialize, start and finish the job. Now you need for example to initialize the widgets of the progress bar and define the timer. The progress library works as follows: #import libraries import progressbar as pb #initialize widgets widgets = ['Time for loop of 1 000 000 iterations: ', pb.Percentage(), ' ', pb.Bar(marker=pb.RotatingMarker()), ' ', pb.ETA()] #initialize timer timer = pb.ProgressBar(widgets=widgets, maxval=1000000).start() #for loop example for i in range(0,1000000): Time for loop of 1 000 000 iterations: 100% |||||||||||||||||||||||| Time: 0:00:07 Write your own progress timer class in Python Therefore it is easier to write your own class that you can easily use multiple times. First you initialize the progress timer. 
From now on, when you initialize you only need to give information how many iterations the code takes. You will also get the opportunity to add a description to the progress timer. #import libraries import progressbar as pb #define progress timer class class progress_timer: def __init__(self, n_iter, description="Something"): self.n_iter = n_iter self.iter = 0 self.description = description + ': ' self.timer = None def initialize(self): #initialize timer widgets = [self.description, pb.Percentage(), ' ', self.timer = pb.ProgressBar(widgets=widgets, maxval=self.n_iter).start() def update(self, q=1): #update timer self.iter += q def finish(self): #end timer Three lines of code to time your code progress Now it is simple to show the progress of your code. You only need three lines of code to initialize, update and finish the progress timer. In case of a for loop it works as follows: pt = progress_timer(description= 'For loop example', n_iter=1000000) #for loop example for i in range(0,1000000): So you will receive feedback in how much time your function or for loop is finished. Time for loop of 1 000 000 iterations: 37% |//////// | ETA: 0:00:04 And finally you will see how much time it took to do the job. I hope this will help you to perfectly time when to take a coffee. I'm sure there is a lot to improve in this approach, and maybe you got a better way to do execution time tracking. I'd like to hear your ideas! <![CDATA[Slashception with regexp_extract in Hive]]> As a Data Scientist I frequently need to work with regular expressions. Though the capabilities and power of regular expressions are enormous, I just cannot seem to like them a lot. That is because when they do not function as expected they can be a really time-consuming nightmare. In this https://www.themarketingtechnologist.co/slashception-with-regexp_extract-in-hive/59fe1539f7390837b438a5cfThu, 24 Sep 2015 17:06:27 GMT As a Data Scientist I frequently need to work with regular expressions. Though the capabilities and power of regular expressions are enormous, I just cannot seem to like them a lot. That is because when they do not function as expected they can be a really time-consuming nightmare. In this blogpost I will describe the hours I lost last week because of something I now call slashception. On most of our clients websites we have our own datalogger Snowplow running next to the Google or Adobe Analytics implementation. Snowplow enables us to do much more in depth analysis on log level data than we are able to do with only Google or Adobe Analytics. For a particular project Snowplow was storing a JSON-object in its database. This JSON-object contained a key-value pair for which the value was another JSON-object. This nested JSON-object was however stored as a string value. To ensure that the whole JSON-object is syntactically correct, the string-formatted JSON-object contains several backslash-characters (\) to escape the quote-characters (“). An illustration of the JSON-object we are talking about is given below: What's in a name? For our analysis we were interested in the value in the key 'name'. In the example above that value would have been Thom (just a random name for illustration purposes... or maybe not thát random.). Our idea was to use the regexp_extract function in Hive to extract the name key-value pair and store the value in a new column denoted by 'name'. In that way, we could use the name column in all our subsequent queries on the database. 
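To make this concrete, the value of the original_json_object column looked roughly like the reconstruction below. This example is purely hypothetical: the outer key name is invented and the real object contained more fields, but it shows the backslash-escaped quotes inside the string-encoded inner JSON.
{"context": "{\"name\":\"Thom\"}"}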
Confident and with high hopes we ran the following Hive query: insert overwrite table data partition(run) regexp_extract(original_json_object, 'name\\":\\"(.*?)\\', 1) as name, from data; Note that the regular expression we used was regexp_extract(original_json_object, 'name\\":\\"(.*?)\\', 1). We used two slashes, because we have to escape the protected backslash character in regular expressions. The first backslash tells the regular expression we want to use the second backslash character literally. Unfortunately, this regular expression did not do the trick as it returned ... Confused by the result of NULL instead of Thom we called for help from the internet. Regex101.com is a site we regularly use to create and test our regular expressions. However, also Regex101 said that this regular expression was correct. That is, Regex101 stated that the test string name\":\"Thom\" is a match to the regular expression name\\":\\"(.*?)\\". After spending already too much time looking for the solution on the internet and its community, it was time to call in the colleagues. Even they hadn't experienced this issue before and mentioned that two backslashes should be enough. I kind of suspect that the first colleague that reads this post immediately knows the solution to this problem but that he or she wasn't around at the time. We're taking the regex to Isengard As a last resort we applied a trial and error approach. That is, we let our knowledge about escaping with slashes in regular expressions go and just tried what would happen if we widened our search, i.e. regexp_extract(original_json_object, 'name(.*?),', 1). Funny enough, this returned \":\"Thom\". Now we were getting somewhere. The regexp_extract function was function correctly, we only needed to narrow down the search. The next try was regexp_extract(original_json_object, 'name\(.*?),', 1). Note that we did not escape the backslash as this didn't work in our earlier attempts and we are now doing trial runs to locate the problem. The output of this regexp_extract function was \":\"Thom\". So after adding the extra slash we still got the same output. It was almost like our added slash disappeared... We tried adding an additional slash then! Unfortunately, regexp_extract(original_json_object, 'name\\(.*?),', 1) returned an error: Caused by: java.util.regex.PatternSyntaxException: Unmatched closing ')' near index 16 at java.util.regex.Pattern.error(Pattern.java:1924) at java.util.regex.Pattern.compile(Pattern.java:1669) at java.util.regex.Pattern.<init>(Pattern.java:1337) at java.util.regex.Pattern.compile(Pattern.java:1022) at org.apache.hadoop.hive.ql.udf.UDFRegExpExtract.evaluate(UDFRegExpExtract.java:51) ... 23 more If you look at the error message you see that the regular expression used by Hive was name\(.*?),. So the second slash disappeared and only one slash was used. As discussed before, our regular expression knowledge tells us that this regular expression will not work. That is because the backslash now escapes the opening bracket and thus states that this opening bracket has to be interpreted literally. Therefore, the closing bracket has no matching opening bracket anymore and the regular expression crashes. So if two slashes translate to one slash in the regular expression, what happens when we use four then? That should translate to the regular expression name\\(.*?),. Therefore we tried regexp_extract(original_json_object, 'name\\\\(.*?),', 1) which returned ":\"Thom\". 
Hurray, we found the solution to deal with the first slash: four slashes. Using the same logic we then used regexp_extract(original_json_object, 'name\\\\:\\\\"(.*?)\\\\"', 1) which returned Thom. Hurrah! Why tell me why We were able to succesfully complete the task using the latter regexp_extract function. A few hours later I was still wondering though why we needed four slashes to escape one single slash. Because we now knew the solution to the problem it was much easier to find other people with the same problem via Google. This Stack Overflow post perfectly describes why we needed four slashes: You need to escape twice, once for Java, once for the regex. Java code is \\\\ makes a regex string of \\, i.e. two chars. But the regex needs an escape too, so it turns into \, i.e. one symbol ~Peter Lawrey and additionally @Peter Lawrey's answer describes the mechanics. Basically, the "problem" is that backslash is an escape character in both Java string literals, and in the mini-language of regexes. So when you use a string literal to represent a regex, there are two sets of escaping to consider, depending on what you want the regex to mean. ~Stephen C If you found this blogpost because you are experiencing a similar problem, I hope we saved you a lot of time! <![CDATA[The GAM approach to spend your money more efficiently!]]>https://www.themarketingtechnologist.co/the-gam-approach-to-spend-your-money-more-efficiently/59fe1539f7390837b438a5d5Tue, 15 Sep 2015 18:32:37 GMT In an earlier blogpost we described how Blue Mango Interactive optimizes the media spend of clients using S-curves. S-curves are used to find the S-shaped relationship of a particular media driver on a KPI such as sales. Moreover, when a S-curve is obtained, we can determine the optimal point that prevents under- or overspending. Hence, we spend our money more efficiently! The previous method however required quite some manual steps and hassle. Inspired by an awesome blogpost (GAM: The Predictive Modeling Silver Bullet) we return this time with a brand new state-of-the art modelling technique: GAM! {<1>}Example of a S-shaped response curve for the effect of radio on sales The previous blog post described how an ordinary least-squares (OLS) regression can be used to find the S-curve of a particular media driver. In a fictional example we estimated the S-curve for radio in terms of GRPs. The OLS regression technique comes from the family of generalized linear modelling (GLM) techniques. One of the reasons GLM techniques are so popular nowadays is that they provide an interpretable and computationally fast method to find the effect of independent variables (e.g. years of education, age) on a dependent variable (e.g. wage). For example, an OLS regression could return that the relationship between education and wage is that for every year of education followed you’ll earn €500 p/m more. Adding more variables to this OLS regression such as age and gender will result in their quantified effects on wage. The L is for linear relationships Unfortunately, as the name already suggests, OLS typically only returns linear relationships between the independent variables (e.g. years of education) and dependent variable (e.g. wage). Using the previous wage example, one year of education would imply earning €500 p/m, but ten years of education would imply earning €5000 p/m. Linear relationships are however very limited when modelling nonlinear relationships. 
For example, assume that the first year of education results in earning €500 p/m more, but the second year only gives you €400 more on top of that, and the third year €300, and so on... If the marginal effect of every additional year of education is decreasing like this, then a really worse fit is obtained when using OLS. In practice, such non-linear relationships are often tackled by applying transformations on the data. For example, it is possible to capture the diminishing effect of each additional year of education by applying a square root on the years of education. Assuming that after a square root transformation on education the quantified relationship between education and wage is still €500, then ten years of education would imply earning √10 = 3.16 × 500 = €1550 p/m. I’ve got 99 problems and linear relationships ain’t one So, what is the problem if data transformations can be used to model non-linear relationships using GLM techniques? Well, in most cases, we have to perform many OLS regressions to find the best-fitting data transformation. This process involves trying a lot of different transformations. For example, we need to check whether the transformation x0.4 fits the data better than x0.6 or x0.5 . Moreover, we might be overfitting our data. It could be that the true relationship is y=x0.5, but that y=x0.54 fits our random sample dataset better. Preferably, we need some a priori knowledge about the type of transformation we need. In the previous blog post we described a method that finds the S-response curve of a media driver in several steps. Assume that we want to estimate the S-shaped effect of radio GRPs. This required the following steps: 1. The continuous radio GRP variable was replaced by dummies, each representing a specific continuous interval. 2. An OLS regression was used to estimate the effect of each dummy and thus of each interval. 3. A S-curve was then estimated that fits with the estimated effects of each interval. 4. A S-curve transformation was then applied to transform the continuous radio variable. 5. Finally, the OLS regression was performed again. This time however, the transformed continuous radio variable was used instead of the interval dummies. Because the radio variable is transformed, the coefficient returned by the OLS regression now didn’t denote a linear relationship anymore, but a S-shaped relationship. Wouldn’t it be nice if we could skip all these (manual) steps and use a more mathematical approach to find the best-fitting S-curve? GAM modelling to the rescue! {<2>}GAM modelling to the rescue (Photo by Coast Guard News on Flickr) GAM modelling to the rescue! Generalized additive models (GAM) is an additive modelling technique where the effect of the dependent variables (e.g. wage) is captured through smooth functions on the independent variables (e.g. years of education, age, gender). Note that these smooth functions do not need to be linear as is the case in GLMs! An example of variables in a GAM model is given below, where s1 and s2 are smooth non-linear functions with respective input x1 and x2. Note that s3 is a smooth linear function as is normally returned by OLS. {<3>}Example of flexible smooth (non-)linear functions. GAM models therefore have the same easy and intuitive interpretation property of OLS models, but also have the flexibility to model nonlinear relationships. The latter makes it possible to find hidden patterns in our data, which would have gone unnoticed otherwise. 
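To make this concrete, the generic form of such a model can be sketched as
g(E[y]) = β0 + s1(x1) + s2(x2) + s3(x3)
where g is the link function (the identity function in an ordinary regression setting) and s3(x3) = β3 · x3 is the linear smooth term mentioned above; the exact specification of course depends on the problem at hand. 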
Additionally, GAM uses a regularization parameter to prevent overfitting the data! I would like to refer again to the awesome blogpost about GAM modelling (GAM: The Predictive Modeling Silver Bullet) for the mathematical point of view. Furthermore it also explains how, for example, the best-fitting smooth functions are obtained using an algorithm. Additionally, it also explains how GAM prevents overfitting using a regularization parameter. In the remainder of this blog post, I would like to focus on the advantages and disadvantages of using GAM models to find the S-response curve for a fictional radio example. R versus Python Let’s consider again the radio example of the previous blogpost. This time, however, we switch to R as programming language. That is because Python does not yet provide a good library for GAM modelling. Statsmodels does contain GAM modelling in its sandbox functionality, but GAM modelling in R is more advanced and widely supported. The two main packages in R that can be used to fit generalized additive models are gam and mgcv. We use mgcv because it uses a more general approach. The R code to create a fictional dataset with sales and radio GRPs can be found on Github. Note that the number of sales on any given day depends on the day of the week (monday,..., sunday), the number of radio GRPs on that day and some normally distributed noise. Moreover, the effect of radio GRPs on sales is logistically distributed and thus follows an S-shape. Let the GAM(es) begin We can now formulate the problem as a GAM problem by {<4>}Equation (1) where xmonday,…,xsaturday are day dummy variables and s1(x) is a smooth function. The mcgv package in R is used to solve the above GAM problem. # Initialize GAM model with 1 smooth function for radio_grp b1 <- mgcv::gam(sales_total ~ s(radio_grp, bs='ps', sp=0.5) + seasonality_monday + seasonality_tuesday + seasonality_wednesday + seasonality_thursday + seasonality_friday + seasonality_saturday, # Output model results and store intercept for plotting later on summary_model <- summary(b1) model_coefficients <- summary_model$p.table model_intercept <- model_coefficients["(Intercept)", 1] # Plot the smooth predictor function to obtain the radio response curve p <- predict(b1, type="lpmatrix") beta <- coef(b1)[grepl("radio_grp", names(coef(b1)))] s <- p[,grepl("radio_grp", colnames(p))] %*% beta + model_intercept The above code returns the following summary: Family: gaussian Link function: identity sales_total ~ s(radio_grp, bs = "ps", sp = 0.5) + seasonality_monday + seasonality_tuesday + seasonality_wednesday + seasonality_thursday + seasonality_friday + seasonality_saturday Parametric coefficients: (Intercept) 1.5687 0.1325 11.84 <2e-16 *** seasonality_monday 3.8044 0.1840 20.68 <2e-16 *** seasonality_tuesday 3.0623 0.1830 16.74 <2e-16 *** seasonality_wednesday 4.0863 0.1839 22.22 <2e-16 *** seasonality_thursday 5.0454 0.1829 27.58 <2e-16 *** seasonality_friday 6.1710 0.1849 33.38 <2e-16 *** seasonality_saturday 7.8875 0.1839 42.89 <2e-16 *** Approximate significance of smooth terms: edf Ref.df F p-value s(radio_grp) 3.848 4.571 1032 <2e-16 *** R-sq.(adj) = 0.99 Deviance explained = 99.1% GCV = 0.22855 Scale est. = 0.201 n = 90 This summary obviously looks different than when using OLS. However, it still returns statistics for the dummy coefficients such as the estimate, standard error and significance. 
The statistics for the dummy coefficients are interpreted in a similar manner as in OLS (so in this case Tuesday sales are three units higher compared to Sunday and all coefficients are statistically significant positive). The effect of radio GRPs is difficult to explain from this summary, but a visualization is very effective though! The code for this visualization can again be found on Github. {<5>}S-curve visualization Well, look at this… the GAM model almost perfectly captured the S-shaped effect of radio on sales! And that without all the hassle and extra steps we needed in the previous blog. GAM modelling is therefore a really awesome technique to estimate nonlinear relationships. But, before we jump to conclusions, lets first do some sense checks. “Far be it from me to ever let my common sense get in the way of my stupidity. I say we press on.” The fictional dataset we created above is a somewhat ideal scenario: many radio observations and little noise/variance in the number of sales. So what happens if we use GAM modelling on datasets with less/more observations and less/more noise? The figure below shows the S-curve obtained using GAM for such different datasets. {<6>}The S-curve for different amounts of noise and radio observations It is not surprising that as the dataset contains more noise or less radio observations the estimated relationship of radio on sales fits worse. However, overall, it still provides a relatively accurate estimation. Even when the noise (standard deviation on the sales per day) is very high, the estimated curve still shows the S-curve effect of increasing returns at first and diminishing returns thereafter. GAM modelling is therefore also useful when we have noisy data or few variable observations. We want to perform a second sense check however. That is, how does GAM perform when the observations are not well-balanced. For example when we have few observations with high GRP values. Note that this is often the case in practice because of obvious budget constraints. Therefore, we created a fictional dataset where the majority of the radio observations lies below 5 GRPs and only a few above. This was done by taking random radio GRP values from a N(4, 2)-distribution instead of the U[0,10] we used earlier. The figure below shows the estimated curve by the GAM model for this dataset with again variations in noise and observations. {<7>}The S-curve for different amounts of noise and radio observations where the radio observations are clustered more at the beginning of the curve. We see that the GAM model now has more trouble to find the S-curved relationship of radio on sales. As the majority of the radio grp samples are clustered at the lower part of the curve, GAM has no troubles to estimate the increasing returns effect at the beginning of the curve. However, because of the few samples at the top of the curve, GAM has some troubles to estimate the decreasing returns effect after the inflection point. Especially for the datasets with much noise GAM experiences difficulties and tries to linearly extrapolate the effect. After two sense checks though we can conclude that GAM modelling provides an excellent method to estimate S-curves for media mix modelling. Be careful though when the observations of the variable you want to model are really dense or clustered around a few points. As GAM modelling is not restricted to S-shaped relationships that could lead to strange curves. 
On the other hand, GAM modelling definitely provides more freedom in the relationships we can model. Additionally it prevents all the manual steps and hassle we needed in our previous blogpost. So, happy modelling and have a GAM time! :) <![CDATA[Optimizing media spends using S-response curves]]> A key focus of our Data Science team is to help our clients understand how their marketing spend affects their KPIs. In particular, we create models to understand the effect of individual marketing channels such as television or paid search ads on KPIs such as sales, visits and footfall. Knowing https://www.themarketingtechnologist.co/optimize-media-spends-using-s-response-curves/59fe1539f7390837b438a58dWed, 29 Apr 2015 23:00:46 GMT A key focus of our Data Science team is to help our clients understand how their marketing spend affects their KPIs. In particular, we create models to understand the effect of individual marketing channels such as television or paid search ads on KPIs such as sales, visits and footfall. Knowing how marketing spend affects KPIs enables us to optimize the clients marketing spend for maximal result. In this Geek post I will give a fictional example on how we use S-curves to optimize radio spend. The final code of this fictional example can be found on Github as well. The data Let’s consider a simple example where we have the number of sales and the number of radio GRPs on a daily basis. We assume that radio only has an immediate positive effect on the number of sales the same day. This is obviously not true in real-life, because radio still shows a positive effect after several days. For now, such delayed effects are out of the scope of this post. The figure below provides an overview of how our fictional sales and radio dataset looks like. We see that in this example dataset there are two important factors that determine the number of sales on a day: the day of the week (seasonality) and the number of radio GRPs on that day (marketing spend). Simple OLS regression As we are interested in how radio affects the number of sales, we run a simple OLS regression to capture the relationship of marketing spend and seasonality on sales. The results of this regression are shown in the table below. OLS Regression Results Dep. Variable: sales R-squared: 0.920 Model: OLS Adj. R-squared: 0.896 Method: Least Squares F-statistic: 38.01 Date: Thu, 23 Apr 2015 Prob (F-statistic): 3.62e-11 Time: 15:00:50 Log-Likelihood: -47.424 No. Observations: 31 AIC: 110.8 Df Residuals: 23 BIC: 122.3 Df Model: 7 Covariance Type: nonrobust const 7.4642 0.583 12.793 0.000 6.257 8.671 radio_grp 1.1423 0.096 11.854 0.000 0.943 1.342 seasonality_monday -2.7826 0.820 -3.392 0.003 -4.480 -1.085 seasonality_tuesday -0.4115 0.869 -0.473 0.640 -2.210 1.387 seasonality_thursday -4.8585 0.903 -5.381 0.000 -6.726 -2.991 seasonality_friday -3.7702 0.886 -4.256 0.000 -5.603 -1.938 seasonality_saturday -4.9362 0.886 -5.572 0.000 -6.769 -3.104 seasonality_sunday -5.3039 0.872 -6.080 0.000 -7.109 -3.499 Omnibus: 0.646 Durbin-Watson: 1.188 Prob(Omnibus): 0.724 Jarque-Bera (JB): 0.739 Skew: -0.265 Prob(JB): 0.691 Kurtosis: 2.461 Cond. No. 24.1 The result of this regression shows that the day of the week is a very important factor for the number of sales. We used Wednesday as reference day and added binary dummy variables for all other days. Note Wednesday is taken arbitrarily. 
The coefficients of these day-dummies can be interpreted as the number of sales this day has more (or less) compared to the number of sales on Wednesday (the reference day). For example, the Monday coefficient of –2.8 implies that there are 2.8 less absolute sales on Monday than on Wednesday. So, as all day dummy coefficients are negative, it follows that Wednesday is the best day of the week for the sales. More interestingly, the result of the regression also gives a positive coefficient of 1.14 for the radio variable. This coefficient can be interpreted as a positive effect of 1.14 additional sales for each radio GRP used. Note that this also implies that the effect of radio is linear, i.e. 1 GRP results in 1.14 additional sales and 5 GRPs results in 5 times 1.14 (=5.7) additional sales. The S-response curve In reality, it is not often the case that radio has a linear effect on sales. KPI drivers such as television and radio, but also display and search ads tend to have diminishing returns. Wikipedia provides the following example about diminishing returns: “A common sort of example is adding more workers to a job, such as assembling a car on a factory floor. At some point, adding more workers causes problems such as workers getting in each other's way or frequently finding themselves waiting for access to a part. Producing one more unit of output per unit of time will eventually cost increasingly more, due to inputs being used less and less effectively.” In a similar manner, research has shown that initial advertising budget has little impact on sales. One possible reason for this is that a low advertising budget might result in your marketing not being noticable between all of the competitors marketing campaigns. Only after a certain budget spend threshold results are noticeable in the form of improved KPIs. Hence, the result is that the effect of marketing spend on KPI drivers such as radio typically follows an S-shaped curve. The figure below provides an example of an S-curve for radio spend. Introducing dummy variables The notion of the S-shaped curve conflicts with our earlier calculated linear effect of radio on sales. Therefore, it is very likely that we get a much more accurate model when we can account for the effect of the S-curve. One possible approach is to just try all possible transformations of our radio dataset to an S-curve. However, as the S-curve is defined by three parameter values, this implies trying a lot of different transformations. Therefore, we use a smarter approach to find the S-curve. That is, instead of adding a continuous variable that denotes the number of radio GRPs on a given day, we add binary dummies where each dummy represents an interval of GRPs. Given our example dataset where the maximum number of GRPs is 10, we add four binary dummies representing the GRP intervals [0.1-2.4], [2.5-4.9], [5-7.4] and [7.5-10]. Now, the dummy value of an interval is 1 if the number of GRPs of that day is in that interval, otherwise the value is zero. Below is a short snippet of the radio dataset we then obtain: Date time Radio GRP Radio dummy 0.1 - 2.4 Radio dummy 2.5 - 4.9 Radio dummy 5.0 - 7.4 Radio dummy 7.5 - 10 2015-06-01 4 0 1 0 0 2015-06-02 0 0 0 0 0 2015-06-03 1 1 0 0 0 2015-06-04 2 1 0 0 0 2015-06-05 3 0 1 0 0 Using the new radio dummy variables we again run a standard OLS regression. The results of this regression are shown below: OLS Regression Results Dep. Variable: sales R-squared: 0.985 Model: OLS Adj. 
The results of this regression are shown below:

OLS regression results (dep. variable: sales; model: OLS, least squares; estimated Thu, 23 Apr 2015 15:42:39; 31 observations, 20 residual df, 10 model df; R-squared 0.985, adj. R-squared 0.978; F-statistic 134.6, Prob(F) 4.28e-16; log-likelihood -21.183; AIC 64.37; BIC 80.14; nonrobust covariance):

| Variable | coef | std err | t | p-value | [0.025 | 0.975] |
| --- | --- | --- | --- | --- | --- | --- |
| const | 8.1073 | 0.298 | 27.175 | 0.000 | 7.485 | 8.730 |
| radio_dummy_0.1_2.5 | 0.2485 | 0.334 | 0.745 | 0.465 | -0.448 | 0.945 |
| radio_dummy_2.5_5 | 1.9554 | 0.407 | 4.806 | 0.000 | 1.107 | 2.804 |
| radio_dummy_5_7.5 | 8.7667 | 0.415 | 21.146 | 0.000 | 7.902 | 9.632 |
| radio_dummy_7.5_10 | 9.3869 | 0.494 | 19.001 | 0.000 | 8.356 | 10.417 |
| seasonality_monday | -2.9030 | 0.405 | -7.171 | 0.000 | -3.747 | -2.059 |
| seasonality_tuesday | -0.5771 | 0.401 | -1.439 | 0.165 | -1.413 | 0.259 |
| seasonality_thursday | -4.6460 | 0.430 | -10.793 | 0.000 | -5.544 | -3.748 |
| seasonality_friday | -4.3809 | 0.441 | -9.929 | 0.000 | -5.301 | -3.461 |
| seasonality_saturday | -5.2752 | 0.419 | -12.602 | 0.000 | -6.148 | -4.402 |
| seasonality_sunday | -5.9470 | 0.422 | -14.096 | 0.000 | -6.827 | -5.067 |

Residual diagnostics: Omnibus 1.184 (Prob 0.553), Durbin-Watson 1.621, Jarque-Bera 0.590 (Prob 0.744), skew 0.334, kurtosis 3.107, condition number 8.27.

The interpretation of the coefficients of the new radio dummy variables is slightly different than in our first regression. The coefficient of each dummy represents the additional sales when the number of GRPs lies in the corresponding interval. For example, consider a day on which 6 radio GRPs are used. The dummy for the interval [5-7.4] is then one (and all other radio dummies are zero), which results in an additional 8.5 sales.

| GRP interval | Additional sales |
| --- | --- |
| 0.1-2.4 | 0.3 |
| 2.5-4.9 | 2.5 |
| 5-7.4 | 8.5 |
| 7.5-10 | 10.1 |

Finding the S-curve using the dummy variables

When plotting the additional sales against the GRP intervals we obtain the points in Figure 4. The shape of the S-curve can already be seen in these points. It is then a simple task to find the S-curve that best fits these points. Note that in our example the predicted S-curve fits the true S-curve (which we used for creating our fictional dataset) quite well, because it is nearly the same logistic function. The small difference between the curves can be explained by the noise we added to our fictional dataset to simulate reality. A sketch of this curve-fitting step is shown below.
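One way to do this curve fitting is with scipy's curve_fit, continuing from the snippets above. The (0, 0) anchor point and the interval midpoints are my own choices for this illustration; the uplift values are the ones from the table, and the transformed column name radio_scurve is illustrative (the post's final output keeps the name radio_grp).

```python
# A minimal sketch: fit a three-parameter logistic S-curve to the per-interval
# uplift estimates, then transform the raw GRPs and refit the first model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.optimize import curve_fit

def s_curve(grp, ceiling, steepness, inflection):
    return ceiling / (1.0 + np.exp(-steepness * (grp - inflection)))

midpoints = np.array([0.0, 1.25, 3.7, 6.2, 8.75])  # 0 GRPs plus interval midpoints
uplift = np.array([0.0, 0.3, 2.5, 8.5, 10.1])      # additional sales per interval

params, _ = curve_fit(s_curve, midpoints, uplift, p0=[10.0, 1.0, 5.0])
print("fitted ceiling, steepness, inflection:", np.round(params, 2))

# Transform the raw GRPs with the fitted S-curve and refit the original model.
df["radio_scurve"] = s_curve(df["radio_grp"], *params)
X_s = sm.add_constant(
    pd.concat([df["radio_scurve"], df.filter(like="seasonality_")], axis=1)
)
model_s = sm.OLS(df["sales"], X_s).fit()
```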
Final results

So, now that we have found our best S-curve transformation, we run the first OLS regression again, but this time with our radio data transformed according to the new S-curve. The resulting predictions of the model are plotted in Figure 5. Note that this regression fits our dataset much better than the first regression we ran. Therefore, we obtain much better estimates of the effect of radio on sales! :-)

OLS regression results (dep. variable: sales; model: OLS, least squares; estimated Thu, 23 Apr 2015 16:38:53; 31 observations, 23 residual df, 7 model df; R-squared 0.987, adj. R-squared 0.984; F-statistic 258.4, Prob(F) 2.56e-20; log-likelihood -18.808; AIC 53.62; BIC 65.09; nonrobust covariance):

| Variable | coef | std err | t | p-value | [0.025 | 0.975] |
| --- | --- | --- | --- | --- | --- | --- |
| const | 8.1540 | 0.230 | 35.381 | 0.000 | 7.677 | 8.631 |
| radio_grp | 0.9904 | 0.031 | 31.828 | 0.000 | 0.926 | 1.055 |
| seasonality_monday | -2.9477 | 0.326 | -9.040 | 0.000 | -3.622 | -2.273 |
| seasonality_tuesday | -0.7462 | 0.347 | -2.153 | 0.042 | -1.463 | -0.029 |
| seasonality_thursday | -4.8023 | 0.357 | -13.463 | 0.000 | -5.540 | -4.064 |
| seasonality_friday | -4.0407 | 0.353 | -11.454 | 0.000 | -4.771 | -3.311 |
| seasonality_saturday | -5.3139 | 0.353 | -15.034 | 0.000 | -6.045 | -4.583 |
| seasonality_sunday | -6.0026 | 0.346 | -17.364 | 0.000 | -6.718 | -5.287 |

Residual diagnostics: Omnibus 0.371 (Prob 0.831), Durbin-Watson 1.502, Jarque-Bera 0.525 (Prob 0.769), skew -0.047, kurtosis 2.370, condition number 27.0.

Optimize marketing spend

The S-curve has several characteristics that make it particularly useful for optimization. For one, it becomes easier to find the point where your return on investment is maximized. The inflection point of the S-curve helps to find this optimal spend value: at the inflection point the derivative of the S-curve is maximized. This implies that at this point the S-curve changes from increasing returns (i.e. increasing spend by 1% leads to a more than 1% increase in sales) into diminishing returns (i.e. increasing spend by 1% leads to a less than 1% increase in sales). The inflection point is therefore used as the minimum spend value: any spend below this value implies underspending, because you can easily increase your ROI by raising the spend up to the inflection point. In our example, the inflection point lies at 5 GRPs. Using fewer than 5 GRPs implies underspending, because you get more sales per euro spent if you use 5 GRPs.

In a similar manner we can also find the overspending value. Recall that above the inflection point the S-curve shows diminishing returns, meaning that every additional euro generates fewer additional sales. In our example, using more than 7 or 8 GRPs is obvious overspending, as the additional sales hardly increase when using more GRPs.

Finally, when we have response curves for each of the individual KPI drivers (such as TV, radio, display, paid search, etc.), it is possible to find the optimal spend for each individual driver by solving an easy optimization problem; a small code sketch of this idea closes the post. The result is an optimal marketing mix that maximizes the chosen KPIs.

Final remarks

This post provided a simple illustration of how we use S-curves to optimize the marketing spend of our clients. In practice, however, the datasets are not as simple as in this illustration. For example, in reality various media channels show lagged effects (ad-stocks), or show diminishing returns from the very first euro spent. We use advanced modelling and time-series techniques such as ARIMA and VAR models to create Marketing Mix Models (MMM) that capture these effects and help our clients understand how their marketing spend can be optimized. We will elaborate more on these advanced techniques in future Geek posts!
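As promised above, here is a minimal sketch of how fitted S-curves could be used to split a fixed budget across several channels. Every number in this snippet (the per-channel curve parameters and the total budget) is hypothetical and chosen purely for illustration; none of it comes from the post or from client data.

```python
# Allocate a fixed budget across channels so that the total predicted uplift,
# i.e. the sum of the channels' S-curve responses, is maximized.
import numpy as np
from scipy.optimize import minimize

def s_curve(spend, ceiling, steepness, inflection):
    return ceiling / (1.0 + np.exp(-steepness * (spend - inflection)))

# Hypothetical fitted parameters per channel: (ceiling, steepness, inflection).
channels = {
    "radio":  (10.5, 1.2, 5.0),
    "tv":     (25.0, 0.8, 12.0),
    "search": (8.0,  1.5, 3.0),
}
total_budget = 18.0  # same units as the curves' x-axis (e.g. GRPs or k euro)

def negative_uplift(spend):
    return -sum(s_curve(s, *p) for s, p in zip(spend, channels.values()))

n = len(channels)
result = minimize(
    negative_uplift,
    x0=np.full(n, total_budget / n),
    bounds=[(0.0, total_budget)] * n,
    constraints=[{"type": "eq", "fun": lambda spend: spend.sum() - total_budget}],
    method="SLSQP",
)
for name, spend in zip(channels, result.x):
    print(f"{name:>6}: {spend:.1f}")
```

Because a sum of S-curves is not convex, a single SLSQP run can end in a local optimum; in practice you would use multiple starting points or a coarse grid search, plus business constraints such as minimum and maximum spend per channel.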