diff --git "a/validation.csv" "b/validation.csv" new file mode 100644--- /dev/null +++ "b/validation.csv" @@ -0,0 +1,7532 @@ +Q_CreationDate,Title,Question,Answer,Score,Is_accepted,N_answers,Q_Id +2018-03-26 05:57:47.947,Text Categorization Test NLTK python,"I have using nltk packages and train a model using Naive Bayes. I have save the model to a file using pickle package. Now i wonder how can i use this model to test like a random text not in the dataset and the model will tell if the sentence belong to which categorize? +Like my idea is i have a sentence : "" Ronaldo have scored 2 goals against Egypt"" And pass it to the model file and return categorize ""sport"".","Just saving the model will not help. You should also save your VectorModel (like tfidfvectorizer or countvectorizer what ever you have used for fitting the train data). You can save those the same way using pickle. Also save all those models you used for pre-processing the train data like normalization/scaling models, etc. For the test data repeat the same steps by loading the pickle models that you saved and transform the test data in train data format that you used for model building and then you will be able to classify.",1.2,True,1,5420 +2018-03-26 12:36:28.430,"How does Python internally distinguish ""from package import module"" between ""from module import function""","If I understand correctly, the python syntax from ... import ... can be used in two ways + +from package-name import module-name +from module-name import function-name + +I would like to know a bit of how Python internally treats the two different forms. Imagine, for example, that the interpreter gets ""from A import B"", does the interpreter actually try to determine whether A is a package-name/ module-name, or does it internally treat packages and modules as the same class of objects (something like Linux treats files and directories very similarly)?","First of all, a module is a python file that contains classes and functions. when you say From A Import B python searches for A(a module) in the standard python library and then imports B(the function or class) which is the module if it finds A. If it doesn't it goes out and starts searching in the directory were packages are stored and searches for the package name( A ) and then if it finds it, it imports the Module name(B). If it fails in the past 2 processes it returns an error. +Hope this helps.",-0.3869120172231254,False,1,5421 +2018-03-26 14:38:24.260,What is a good crawling speed rate?,I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. I'd like to know how much more can I improve and what value is considered as a 'good' crawling speed.,"I'm no expert but I would say that your speed is pretty slow. I just went to google, typed in the word ""hats"", pressed enter and: about 650,000,000 results (0.63 seconds). That's gonna be tough to compete with. I'd say that there's plenty of room to improve.",-0.1352210990936997,False,2,5422 +2018-03-26 14:38:24.260,What is a good crawling speed rate?,I'm crawling web pages to create a search engine and have been able to crawl close to 9300 pages in 1 hour using Scrapy. 
I'd like to know how much more I can improve and what value is considered a 'good' crawling speed.",It really depends but you can always check your crawling benchmarks for your hardware by typing scrapy bench on your command line,0.0,False,2,5422
+2018-03-27 05:06:56.187,Verify mountpoint in the remote server,"os.path.ismount() will verify whether the given path is mounted on the local linux machine. Now I want to verify whether the path is mounted on a remote machine. Could you please help me with how to achieve this.
For example: my dev machine is: xx:xx:xxx
I want to verify whether the '/path' is mounted on yy:yy:yyy.
How can I achieve this by using the os.path.ismount() function?","If you have access to both machines, then one way could be to leverage python's sockets. The client on the local machine would send a request to the server on the remote machine, then the server would do os.path.ismount('/path') and send back the return value to the client.",0.0,False,1,5423
+2018-03-27 22:23:28.003,How to parse a c/c++ header with llvmlite in python,"I'd like to parse a c and/or c++ header file in python using llvmlite. Is this possible? And if so, how do I create an IR representation of the header's contents?","llvmlite is a python binding for LLVM, which is independent from C or C++ or any other language. To parse C or C++, one option is to use the python binding for libclang.",0.0,False,1,5424
+2018-03-28 05:18:16.253,Are frameworks and libraries the most important bit of coding?,"Coding is entirely new to me.
Right now, I am teaching myself Python. As of now, I am only going over algorithms. I watched a few crash courses online about the language. Based on that, I don't feel like I am able to code any sort of website or software, which leads me to wonder if the libraries and frameworks of any programming language are the most important bit?
Should I spend more time teaching myself how to code with frameworks and libraries?
Thanks","First of all, you should try to be comfortable with every Python mechanism (classes, recursion, functions... everything you usually find in any book or complete tutorial). It could be useful for any problem you want to solve.
Then, you should start your own project using the suitable libraries and frameworks. You must set a clear goal: do you want to build a website or a piece of software? You won't use the same libraries/frameworks for every purpose. Some of them are used really often, so you could start by reading their documentation.
Anyhow, to answer your question, frameworks and libraries are not the most important bit of coding. They are just your tools, whereas the way you think to solve problems and build your algorithms is your art.
The most important thing in being a painter is not knowing how to use a brush (even if, of course, it's really useful)",1.2,True,1,5425
+2018-03-29 07:20:01.590,Keras rename model and layers,"1) I am trying to rename a model and the layers in Keras with TF backend, since I am using multiple models in one script.
The class Model seems to have the property model.name, but when changing it I get ""AttributeError: can't set attribute"".
What is the problem here?
2) Additionally, I am using the sequential API and I want to give a name to layers, which seems to be possible with the Functional API, but I found no solution for the sequential API. Does anyone know how to do it for the sequential API?
UPDATE TO 2): Naming the layers works, although it seems to be undocumented. Just add the argument name, e.g. model.add(Dense(...,...,name=""hiddenLayer1"")).
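For illustration, a minimal runnable sketch of this (the layer sizes and names here are made up, and the usual Keras imports are assumed):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
# pass name= to give each layer an explicit, stable name
model.add(Dense(64, input_dim=10, activation='relu', name='hiddenLayer1'))
model.add(Dense(1, name='outputLayer'))
print([layer.name for layer in model.layers])  # ['hiddenLayer1', 'outputLayer']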
Watch out, layers with the same name share weights!","For 1), I think you may build another model with the right name and the same structure as the existing one, then set the weights from the layers of the existing model on the layers of the new model.",-0.1794418372930847,False,2,5426
+2018-03-29 07:20:01.590,Keras rename model and layers,"1) I am trying to rename a model and the layers in Keras with TF backend, since I am using multiple models in one script.
The class Model seems to have the property model.name, but when changing it I get ""AttributeError: can't set attribute"".
What is the problem here?
2) Additionally, I am using the sequential API and I want to give a name to layers, which seems to be possible with the Functional API, but I found no solution for the sequential API. Does anyone know how to do it for the sequential API?
UPDATE TO 2): Naming the layers works, although it seems to be undocumented. Just add the argument name, e.g. model.add(Dense(...,...,name=""hiddenLayer1"")). Watch out, layers with the same name share weights!","To rename a keras model in TF2.2.0:
model._name = ""newname""
I have no idea if this is a bad idea - they don't seem to want you to do it, but it does work. To confirm, call model.summary() and you should see the new name.",0.4247838355242418,False,2,5426
+2018-03-29 16:33:58.400,Heroku Python import local functions,"I'm developing a chatbot using heroku and python. I have a file fetchWelcome.py in which I have written a function. I need to import the function from fetchWelcome into my main file.
I wrote ""from fetchWelcome import fetchWelcome"" in the main file. But because we need to mention all the dependencies in the requirements file, it shows an error. I don't know how to mention a user-defined requirement.
How can I import the function from another file into the main file? Both files (main.py and fetchWelcome.py) are in the same folder.","If we need to import a function from fileName into main.py, write ""from .fileName import functionName"". Thus we don't need to write any dependency in the requirements file.",0.0,False,1,5427
+2018-03-29 17:21:53.613,How to choose RandomState in train_test_split?,"I understand how random state is used to randomly split data into training and test sets. As expected, my algorithm gives different accuracy each time I change it. Now I have to submit a report at my university, and I am unsure which final accuracy to mention there. Should I choose the maximum accuracy I get? Or should I run it with different RandomStates and then take the average? Or something else?","For me personally, I set random_state to a specific number (usually 42) so if I see variation in my program's accuracy I know it was not caused by how the data was split.
However, this can lead to my network overfitting on that specific split. I.e. I tune my network so it works well with that split, but not necessarily on a different split. Because of this, I think it's best to use a random seed when you submit your code so the reviewer knows you haven't overfit to that particular state.
To do this with sklearn.train_test_split you can simply not provide a random_state and it will pick one randomly using np.random.",0.2012947653214861,False,1,5428
+2018-03-30 10:14:52.273,"Python application freezes, only CTRL-C helps","I have a Python app that uses websockets and gevent. It's quite a big application in my personal experience.
I've encountered a problem with it: when I run it on Windows (with 'pipenv run python myapp'), it can (suddenly but very rarely) freeze and stop accepting messages.
If I then enter CTRL+C in cmd, it starts reacting to all the messages that were issued while it was hanging.
I understand that it might block somewhere, but I don't know how to debug these types of errors, because I don't see anything in the code that could cause it. And it happens very rarely, at completely different stages of the application's runtime.
What is the best way to debug it? And to actually see what goes on behind the scenes? My logs show no indication of a problem.
Could it be an error with cmd and not my app?","Your answer may be as simple as adding timeouts to some of your spawns or gevent calls. Gevent is still single-threaded, and so if an IO-bound resource hangs, it can't context switch until it's been received. Setting a timeout might help bypass these issues and move your app forward?",0.0,False,1,5429
+2018-03-30 14:45:10.337,How to compare date (yyyy-mm-dd) with year-Quarter (yyyyQQ) in python,"I am writing a sql query using pandas within python. In the where clause I need to compare a date column (say review date 2016-10-21) with the value '2016Q4'. In other words, if the review dates fall in or after Q4 2016 then they will be selected. Now how do I convert the review date to something comparable to the 'yyyyQQ' format? Is there any python function for that? If not, how do I go about writing one for this purpose?","Once you are able to get the month out into a variable mon,
you can use the following code to get the quarter information:
for mon in range(1, 13):
    print((mon - 1) // 3 + 1)
which would return:

for months 1 - 3 : 1
for months 4 - 6 : 2
for months 7 - 9 : 3
for months 10 - 12 : 4",1.2,True,1,5430
+2018-03-31 04:10:29.847,Measurement for intersection of 2 irregularly shaped 3d objects,"I am trying to implement an objective function that minimizes the overlap of 2 irregularly shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces that are not convex.
I am wondering if there are other measurements of intersection between 3d objects that are much faster to compute? 2 requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar (not a boolean value) indicating the degree of overlapping, but this value doesn't need to be very accurate.
Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively, I can estimate the volume with a sample-based method that samples points inside one object and tests the percentage of points that exist in another object. But I don't know how computationally expensive it is to sample points inside a complex 3d shape, or to test whether a point is enclosed by such a shape.
I will really appreciate any advice, code, or equations on this matter. Also, if you can suggest any libraries (preferably python libraries) that accept .obj, .ply...etc files and perform 3D geometry computation, that would be great! I will also post here if I find a good method.
Update:
I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric point sampling within one mesh and test point containment within another mesh.
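For illustration, a rough sketch of the sampling-plus-containment workflow I mean (the file names are placeholders, and contains() needs reasonably watertight meshes):

import trimesh

a = trimesh.load('object_a.obj')
b = trimesh.load('object_b.obj')

# sample points on the surface of one mesh, then test containment in the other
points, _ = trimesh.sample.sample_surface(a, 1000)
inside = b.contains(points)  # boolean array, one entry per sampled point
overlap = inside.mean()      # fraction of sampled points shared by both meshes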
I found surface point sampling and containment testing (a sort of surface intersection) and the grid approach to be the fastest.","A sample-based approach is what I'd try first. Generate a bunch of points in the unioned bounding AABB, and divide the number of points in A and B by the number of points in A or B. (You can adapt this measure to your use case -- it doesn't work very well when A and B have very different volumes.) To check whether a given point is in a given volume, use a crossing number test, which you can Google. There are acceleration structures that can help with this test, but my guess is that the number of samples that'll give you reasonable accuracy is lower than the number of samples necessary to benefit overall from building the acceleration structure.
As a variant of this, you can check line intersection instead of point intersection: Generate a random (axis-aligned, for efficiency) line, and measure how much of it is contained in A, in B, and in both A and B. This requires more bookkeeping than point-in-polyhedron, but will give you better per-sample information and thus reduce the number of times you end up iterating through all the faces.",1.2,True,2,5431
+2018-03-31 04:10:29.847,Measurement for intersection of 2 irregularly shaped 3d objects,"I am trying to implement an objective function that minimizes the overlap of 2 irregularly shaped 3d objects. While the most accurate measurement of the overlap is the intersection volume, it's too computationally expensive as I am dealing with complex objects with 1000+ faces that are not convex.
I am wondering if there are other measurements of intersection between 3d objects that are much faster to compute? 2 requirements for the measurement are: 1. When the measurement is 0, there should be no overlap; 2. The measurement should be a scalar (not a boolean value) indicating the degree of overlapping, but this value doesn't need to be very accurate.
Possible measurements I am considering include some sort of 2D surface area of intersection, or 1D penetration depth. Alternatively, I can estimate the volume with a sample-based method that samples points inside one object and tests the percentage of points that exist in another object. But I don't know how computationally expensive it is to sample points inside a complex 3d shape, or to test whether a point is enclosed by such a shape.
I will really appreciate any advice, code, or equations on this matter. Also, if you can suggest any libraries (preferably python libraries) that accept .obj, .ply...etc files and perform 3D geometry computation, that would be great! I will also post here if I find a good method.
Update:
I found a good python library called Trimesh that performs all the computations mentioned by me and others in this post. It computes the exact intersection volume with the Blender backend; it can voxelize meshes and compute the volume of the co-occupied voxels; it can also perform surface and volumetric point sampling within one mesh and test point containment within another mesh. I found surface point sampling and containment testing (a sort of surface intersection) and the grid approach to be the fastest.","By straight voxelization:
If the faces are of similar size (if needed, triangulate the large ones), you can use a gridding approach: define a regular 3D grid with a spacing larger than the longest edge and store one bit per voxel.
Then for every vertex of the mesh, set the bit of the cell it is included in (this just takes a truncation of the coordinates).
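For example, a rough NumPy sketch of that bucketing step (a sketch only; it assumes the vertices have already been shifted so all coordinates are non-negative):

import numpy as np

def voxelize_vertices(vertices, spacing, dims):
    # vertices: (N, 3) float array; dims: grid shape as a 3-tuple
    grid = np.zeros(dims, dtype=bool)
    idx = (vertices // spacing).astype(int)  # truncate coordinates -> cell index
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid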
By doing this, you will obtain the boundary of the object as a connected surface. You will obtain an estimate of the volume by means of a 3D flood-filling algorithm, either from an inside or an outside voxel. (Outside will be easier, but be sure to leave a one-voxel margin around the object.)
Estimating the volumes of both objects as well as their intersection or union is straightforward with this machinery. The cost will depend on the number of faces and the number of voxels.",0.0,False,2,5431
+2018-03-31 08:19:33.750,How to execute a code block in Science mode in Pycharm,"Like in Spyder, you can execute a code block. How can I do this in Pycharm in science mode? In Spyder you use
# In[]
How can I do this in Pycharm?","you can just import numpy to activate science mode.
import numpy as np",0.2012947653214861,False,2,5432
+2018-03-31 08:19:33.750,How to execute a code block in Science mode in Pycharm,"Like in Spyder, you can execute a code block. How can I do this in Pycharm in science mode? In Spyder you use
# In[]
How can I do this in Pycharm?","Pycharm uses code cells. You can do it with this:
'#%% '",1.2,True,2,5432
+2018-03-31 10:41:21.540,Building WSN topology integrated with SDN controller (mininet-wifi),"In the mininet-wifi examples, I found a sample (6LowPAN.py) that creates a simple topology containing 3 nodes.
Now, I intend to create another topology as follows:

1- Two groups of sensor nodes such that each group connects to a 'Sink node'
2- Connect each 'Sink node' to an 'ovSwitch'
3- Connect the two switches to a 'Controller'

Is that doable using mininet-wifi? Any tips on how to do it??
Many thanks in advance :)","Yes, you can do this with 6LowPAN.py. You then add the switches and the controller into the topology with their links.",0.3869120172231254,False,1,5433
+2018-04-01 01:28:54.353,Neural Network - Input Normalization,"It is a common practice to normalize input values (to a neural network) to speed up the learning process, especially if features have very large scales.
In theory, normalization is easy to understand. But I wonder how this is done if the training data set is very large, say 1 million training examples. If the number of features per training example is large as well (say, 100 features per training example), 2 problems pop up all of a sudden:
- It will take some time to normalize all training samples
- Normalized training examples need to be saved somewhere, so that we need to double the necessary disk space (especially if we do not want to overwrite the original data).
How is input normalization solved in practice, especially if the data set is very large?
One option may be to normalize inputs dynamically in memory per mini-batch while training. But normalization results will then be changing from one mini-batch to another. Would it be tolerable then?
Maybe someone on this platform has hands-on experience with this question. I would really appreciate it if you could share your experiences.
Thank you in advance.","A large number of features makes it easier to parallelize the normalization of the dataset. This is not really an issue. Normalization on large datasets would be easily GPU accelerated, and it would be quite fast, even for large datasets like you are describing. One of the frameworks that I have written can normalize the entire MNIST dataset in under 10 seconds on a 4-core 4-thread CPU. A GPU could easily do it in under 2 seconds. Computation is not the problem.
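As a toy illustration of normalizing a mini-batch right before training on it (a sketch only; the function name and the epsilon guard are made up):

import numpy as np

def normalize_batch(batch):
    # min-max scale using this batch's own per-feature minima and maxima
    lo = batch.min(axis=0)
    hi = batch.max(axis=0)
    return (batch - lo) / np.maximum(hi - lo, 1e-8)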
While for smaller datasets you can hold the entire normalized dataset in memory, for larger datasets, like you mentioned, you will need to swap out to disk if you normalize the entire dataset. However, if you are doing reasonably large batch sizes, about 128 or higher, your minimums and maximums will not fluctuate that much, depending upon the dataset. This allows you to normalize the mini-batch right before you train the network on it, but again this depends upon the network. I would recommend experimenting based on your datasets, and choosing the best method.",1.2,True,1,5434
+2018-04-01 15:08:50.007,Finding the eyeD3 executable,"I just installed the abcde CD utility but it's complaining that it can't find eyeD3, the Python ID3 program. This appears to be a well-known and unresolved deficiency in the abcde dependencies, and I'm not a Python programmer, so I'm clueless.
I have the Python 2.7.12 that came with Mint 18, and something called python3 (3.5.2). If I try to install eyeD3 with pip (presumably acting against 2.7.12), it says it's already installed (in /usr/lib/python2.7/dist-packages/eyeD3). I don't know how to force pip to install under python3.
If I do a find / -name eyeD3, the only other thing it turns up is /usr/share/pyshared/eyeD3. But both of those are only directories, and both just contain Python libraries, not executables.
There isn't any other file called eyeD3 anywhere on disk.
Does anyone know what it's supposed to be called, where it's supposed to live, and how I can install it?
P","I don't know how to force pip to install under python3.

python3 -m pip install eyeD3 will install it for Python3.",0.2012947653214861,False,2,5435
+2018-04-01 15:08:50.007,Finding the eyeD3 executable,"I just installed the abcde CD utility but it's complaining that it can't find eyeD3, the Python ID3 program. This appears to be a well-known and unresolved deficiency in the abcde dependencies, and I'm not a Python programmer, so I'm clueless.
I have the Python 2.7.12 that came with Mint 18, and something called python3 (3.5.2). If I try to install eyeD3 with pip (presumably acting against 2.7.12), it says it's already installed (in /usr/lib/python2.7/dist-packages/eyeD3). I don't know how to force pip to install under python3.
If I do a find / -name eyeD3, the only other thing it turns up is /usr/share/pyshared/eyeD3. But both of those are only directories, and both just contain Python libraries, not executables.
There isn't any other file called eyeD3 anywhere on disk.
Does anyone know what it's supposed to be called, where it's supposed to live, and how I can install it?
P","Gave up... a waste of my time and everyone else's, sorry.
What I apparently needed was the eyed3 (lowercase 'd') non-python utility.",0.0,False,2,5435
+2018-04-02 22:28:00.080,python pair multiple field entries from csv,"Trying to take data from a csv like this:
col1 col2
eggs sara
bacon john
ham betty
The number of items in each column can vary and may not be the same. Col1 may have 25 and col2 may have 3. Or the reverse, more or less.
+And loop through each entry so it's output into a text file like this
+breakfast_1
+breakfast_item eggs
+person sara
+breakfast_2
+breakfast_item bacon
+person sara
+breakfast_3
+breakfast_item ham
+person sara
+breakfast_4
+breakfast_item eggs
+person john
+breakfast_5
+breakfast_item bacon
+person john
+breakfast_6
+breakfast_item ham
+person john
+breakfast_7
+breakfast_item eggs
+person betty
+breakfast_8
+breakfast_item bacon
+person betty
+breakfast_9
+breakfast_item ham
+person betty
+So the script would need to add the ""breakfast"" number and loop through each breakfast_item and person.
+I know how to create one combo, but not how to pair up each in a loop.
+Any tips on how to do this would be very helpful.","First, get the distinct breakfast items and persons:
iterate through each line, collect item and person in 2 different lists, and do a set on those 2 lists; say persons and items.
Then, pseudo code like below:

counter = 1
for person in persons:
    for item in items:
        print('breakfast_' + str(counter))
        print('breakfast_item', item)
        print('person', person)
        counter += 1",0.0,False,1,5436
+2018-04-03 07:25:09.530,How to find out Windows network interface name in Python?,"The Windows command netsh interface show interface shows all network connections and their names. A name could be Wireless Network Connection, Local Area Network or Ethernet etc.
I would like to change an IP address with netsh interface ip set address ""Wireless Network Connection"" static 192.168.1.3 255.255.255.0 192.168.1.1 1 from a Python script, but I need the network interface name.
Is it possible to get this information, like we can get a hostname with socket.gethostname()? Or can I change an IP address with Python in another way?","I don't know of a Python netsh API. But it should not be hard to do with a pair of subprocess calls. First issue netsh interface show interface, parse the output you get back, then issue your set address command.
Or am I missing the point?",0.6730655149877884,False,1,5437
+2018-04-03 11:57:26.050,how do I install my module onto my local copy of python on windows?,"I'm reading Head First Python and have just completed the section where I created a module for printing nested list items. I've created the code and the setup file and placed them in a folder labeled ""Nester"" that is sitting on my desktop. The book is now asking for me to install this module onto my local copy of Python. The thing is, in the example he is using the mac terminal, and I'm on windows. I tried to google it but I'm still a novice and a lot of the explanations just go over my head. Can someone give me a clear, thorough guide?","On Windows systems, third-party modules (single files containing one or more functions or classes) and third-party packages (a folder [a.k.a. directory] that contains more than one module (and sometimes other folders/directories)) are usually kept in one of two places: c:\\Program Files\\Python\\Lib\\site-packages\\ and c:\\Users\\[you]\\AppData\\Roaming\\Python\\.
The location in Program Files is usually not accessible to normal users, so when PIP installs new modules/packages on Windows it places them in the user-accessible folder in the Users location indicated above. You have direct access to that, though by default the AppData folder is ""hidden""--not displayed in the File Explorer list unless you set FE to show hidden items (which is a good thing to do anyway, IMHO). You can put the module you're working on in the AppData\\Roaming\\Python\\ folder.
You still need to make sure the folder you put it in is in the PATH environment variable.
PATH is a string that tells Windows (and Python) where to look for needed files, in this case the module you're working on. Google ""set windows path"" to find out how to check and set your path variable, then just go ahead and put your module in a folder that's listed in your path.
Of course, since you can add any folder/directory you want to PATH, you could put your module anywhere you wanted--including leaving it on the Desktop--as long as the location is included in PATH. You could, for instance, have a folder such as Documents\\Programming\\Python\\Lib to put your personal modules in, and use Documents\\Programming\\Python\\Source for your Python programs. You'd just need to include those in the PATH variable.
FYI: Personally, I don't like the way python is (by default) installed on Windows (because I don't have easy access to c:\\Program Files), so I installed Python in a folder off the drive root: c:\Python36. In this way, I have direct access to the \\Lib\\site-packages\\ folder.",0.0,False,1,5438
+2018-04-03 19:38:45.600,Django : how to give user/group permission to view model instances for a specified period of time,"I am fairly new to Django and could not figure this out by reading the docs or by looking at existing questions. I looked into Django permissions and authentication but could not find a solution.
Let's say I have a Detail View listing all instances of a Model called Item. For each Item, I want to control which User can view it, and for how long. In other words, for each User having access to the Item, I want the right/permission to view it to expire after a specified period of time. After that period of time, the Item would disappear from the list and the User could not access the url detailing the Item.
The logic to implement is pretty simple, I know, but the ""per user / per object"" part confuses me. Help would be much appreciated!","Information about UserItemExpiryDate has to be stored in a separate table (Model). I would recommend coding this yourself in Django.
There are a few scenarios to consider:
1) A new user is created, and he/she should have access to items.
In this case, you add entries to UserItemExpiry with the new User<>Item combination (as key) and an expiry date. Then, for a logged-in user, you look for items from Items that have a User<>Item entry in UserItemExpiry with an expiry date in the future.
2) A new item is created, and it has to be added to existing users.
In such a case, you add entries to UserItemExpiry with ALL users <> new Item combinations (as key) and an expiry date. And the logic for ""selecting"" valid items is the same as in point 1.
Best of luck,
Radek Szwarc",1.2,True,1,5439
+2018-04-04 13:46:20.373,how to read text from excel file in python pandas?,"I am working on an excel file with large text data. 2 columns have a lot of text data, like descriptions and job duties.
When I import my file in python with df=pd.read_excel(""form1.xlsx""), it shows the columns with text data as NaN.
How do I import all the text in the columns? I want to do analysis on job title, description and job duties. Descriptions and Job Title are long text. I have over 150 rows.","Try converting the file from .xlsx to .CSV
I had the same problem with text columns, so I tried converting to CSV (Comma Delimited) and it worked.
Not very helpful, but worth a try.",0.2012947653214861,False,1,5440
+2018-04-04 17:20:14.923,make a web server in localhost with flask,"I want to know if I can make a web server with Flask on my pc, like xampp/apache (php), so that afterwards I can access this page from other places across the internet, or even in my local network through the wifi connection or lan ethernet. Is it possible? I saw some ways to do this, like using ""uwsgi"".. something like this... but I could never do it.
OBS: I have a complete application in Flask already, with databases and all things working. The only problem is that I don't know how to start the server and access it from the other pc's.","Yes, you can.
Just like you said, you can use uwsgi to run your site efficiently. There are other web servers like uwsgi: I usually use Gunicorn. But note that Flask can run without any of these, it will simply be less efficient (but if it is just for you then it should not be a problem).
You can find tutorials on the net with a few keywords like ""serving flask app"".
If you want to access your site from the internet (outside of your local network), you will need to configure your firewall and router/modem to accept connections on port 80 (HTTP) or 443 (HTTPS).
Good luck :)",0.3869120172231254,False,1,5441
+2018-04-04 18:48:40.377,Python - How do I make a window along with widgets without using modules like Tkinter?,"I have been wanting to know how to make a GUI without using a module in Python. I have looked into GUIs in Python but everything leads to Tkinter or other Python GUI modules. The reason I do not want to use Tkinter is because I want to understand how to do it myself. I have looked at the Tkinter module's files but it imports like 4 other modules. I don't mind modules like system, os or math, just not modules which I will use and not understand. If you do decide to answer my question please include as much detail and information on the matter. Thanks -- Darrian Penman","You cannot write a GUI in Python without importing either a GUI module or importing ctypes. The latter would require calling OS-specific graphics primitives, and would be far worse than doing the same thing in C. (EDIT: see Roland's comment below for X11 systems.)
The python-coded tkinter mainly imports the C-coded _tkinter, which interfaces to the tcl- and C-coded tk GUI package. There are separate versions of tcl/tk for Windows, *nix, and MacOS.",1.2,True,2,5442
+2018-04-04 18:48:40.377,Python - How do I make a window along with widgets without using modules like Tkinter?,"I have been wanting to know how to make a GUI without using a module in Python. I have looked into GUIs in Python but everything leads to Tkinter or other Python GUI modules. The reason I do not want to use Tkinter is because I want to understand how to do it myself. I have looked at the Tkinter module's files but it imports like 4 other modules. I don't mind modules like system, os or math, just not modules which I will use and not understand. If you do decide to answer my question please include as much detail and information on the matter. Thanks -- Darrian Penman","For the same reason that you can't write to a database without using a database module, you can't create GUIs without a GUI module. There simply is no way to draw directly on the screen in a cross-platform way without a module.
Writing GUIs is very complex.
These modules exist to reduce the complexity.",0.2012947653214861,False,2,5442
+2018-04-04 21:19:39.857,Regular Expression in python how to find paired words,"I'm doing a cipher in python. I'm confused about how to use a Regular Expression to find paired words in a text dictionary.
For example, there is dictionary.txt with many English words in it. I need to find words paired with ""th"" at the beginning, like they, them, the, their.....
What kind of Regular Expression should I use to find ""th"" at the beginning?
Thank you!","^(th\w*)

gives you all results where the string begins with th. If there is more than one word in the string you will only get the first.

(^|\s)(th\w*)

will give you all the words beginning with th, even if there is more than one word beginning with th",0.0,False,1,5443
+2018-04-04 22:36:48.643,pycharm ctrl+v copies the item in console instead of pasting when highlighted,"This has been a very annoying problem for me and I couldn't find any keymaps or settings that could cause this behavior.
Setup:

Pycharm Professional 2018.1 installed on redhat linux
I remote into the linux machine using mobaX and launch pycharm with window forwarding

Scenario 1:
I open a browser on windows, copy some text, go to the editor or console, paste it somewhere without highlighting any text, hit ctrl+v, it pastes fine
Scenario 2:
I open a browser on windows, copy some text, go to the editor or console, highlight some text there, hit ctrl+v in an attempt to replace the highlighted text with what's in my clipboard. The text doesn't change. I leave pycharm and paste somewhere else, and the text in the clipboard has now become the text I highlighted.
Edit:
ok I just realized this: as soon as I highlight the text, it gets copied...I've turned this feature off for the terminal, but couldn't find a global setting for the editor etc. Anyone know how?","I figured it out: it's caused by the copy-on-select setting of my linux system. To turn it off, go to mobax-settings-configurations-x11-clipboard-disable 'copy on select'",1.2,True,1,5444
+2018-04-05 01:37:04.467,"In Keras, how to send each item in a batch through a model?","I have a model that starts with a Conv2D layer and so it must take input of shape (samples, rows, cols, channels) (and the model must ultimately output a shape of (1)). However, for my purposes one full unit of input needs to be some (fixed) number of samples, so the overall input shape sent into this model when given a batch of input ends up being (batch_size, samples, rows, cols, channels) (which is expected and correct, but...). How do I send each item in the batch through this model so that I end up with an output of shape (batch_size, 1)?
What I have tried so far:
I tried creating an inner model containing the Conv2D layer et al, then wrapping the entire thing in a TimeDistributed wrapper, followed by a Dense(units=1) layer. This compiled, but resulted in an output shape of (batch_size, samples, 1). I feel like I am missing something simple...","At the moment you are returning a 3D array.
Add a Flatten() layer to convert the array to 2D, and then add a Dense(1). This should output (batch_size, 1).",0.1352210990936997,False,1,5445
+2018-04-05 06:45:15.460,How to add report_tensor_allocations_upon_oom to RunOptions in Keras,"I'm trying to train a neural net on a GPU using Keras and am getting a ""Resource exhausted: OOM when allocating tensor"" error. The specific tensor it's trying to allocate isn't very big, so I assume some previous tensor consumed almost all the VRAM.
The error message comes with a hint that suggests this:

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

That sounds good, but how do I do it? RunOptions appears to be a Tensorflow thing, and what little documentation I can find for it associates it with a ""session"". I'm using Keras, so Tensorflow is hidden under a layer of abstraction and its sessions under another layer below that.
How do I dig underneath everything to set this option in such a way that it will take effect?","OOM means out of memory. Maybe it is using more memory at that time.
Decrease batch_size significantly. I set it to 16, then it worked fine",0.2012947653214861,False,1,5446
+2018-04-05 12:22:11.997,Django multilanguage text and saving it on mysql,"I have a problem with multilanguage and multi-character-encoded text.
The project uses OpenGraph and saves some information from websites in a mysql database. But the database has a problem with the character encoding. I tried encoding the text to bytes. That is a problem, because in the admin panel the text then shows up as bytes and is not readable.
Please help me. How can I save multilanguage text in the database, and if I need to encode it to bytes, how can I correctly decode it in the admin panel and in views?",You should encode all data as UTF-8, which is a Unicode encoding.,0.0,False,1,5447
+2018-04-05 18:38:56.977,How to Install requests[security] in virtualenv in IntelliJ,"I'm using a python 2.7.10 virtualenv when running python code in IntelliJ. I need to install the requests[security] package. However I'm not sure how to add that [security] option/config when installing the requests package using the Package installer in the File > Project Structure settings window.","Was able to install it by doing:

Activating the virtualenv in the 'Terminal' tool window:
source /bin/activate
Executing a pip install requests[security]",0.0,False,1,5448
+2018-04-06 07:42:50.770,Use HermiT in Python,"We have an ontology but we need to use the reasoner HermiT to infer the sentiment of a given expression. We have no idea how to use and implement a reasoner in python and we could not find a good explanation on the internet. We found that we can use sync_reasoner() for this, but what does it do exactly? And do we have to call the reasoner manually every time or does it happen automatically?","You do not need to implement the reasoner. The sync_reasoner() function already calls HermiT internally and does the reasoning for you.
A reasoner will reclassify individuals and classes for you, which means it creates a parent-child hierarchy of classes and individuals. When you load an ontology, only explicit parent-child relations are represented. However, when you call the reasoner, the parent-child hierarchy is updated to include inferred relations as well.
An example of this is provided in Owlready2-0.5/doc/intro.rst. Before calling sync_reasoner(), calling test_pizza.__class__ prints onto.Pizza, which is explicit information. However, after calling sync_reasoner(), calling test_pizza.__class__ prints onto.NonVegetarianPizza, which is the inferred information.",1.2,True,1,5449
+2018-04-06 17:50:58.693,Saving data to MacOS python application,"I am using Pyinstaller to create my Python app from a set of scripts. This script uses a library that saves downloaded data to the '~/' directory (using the os.path.join function).
+I was wondering how to edit the code in the library so that when it runs, it saves data to inside the app (like in the package, the Contents/Resources maybe)?","I was wondering how to edit the code in the library so that when it runs, it saves data to inside the app

Don't do that. This isn't a standard practice in macOS applications, and will fail in some standard system configurations. For example, it will fail if the application is used by a non-administrator user, or if the application is run from a read-only disk image or network share.
More importantly, it'll also make it difficult or impossible to sign the application bundle with a developer certificate.",1.2,True,1,5450
+2018-04-09 02:59:03.377,How are PyTorch's tensors implemented?,"I am building my own Tensor class in Rust, and I am trying to make it like PyTorch's implementation.
What is the most efficient way to store tensors programmatically, but, specifically, in a strongly typed language like Rust? Are there any resources that provide good insights into how this is done?
I am currently building a contiguous array, so that, given dimensions of 3 x 3 x 3, my array would just have 3^3 elements in it, which would represent the tensor. However, this does make some of the mathematical operations and manipulations of the array harder.
The dimension of the tensor should be dynamic, so that I could have a tensor with n dimensions.","Contiguous array
The commonly used way to store such data is in a single array that is laid out as a single, contiguous block within memory. More concretely, a 3x3x3 tensor would be stored simply as a single array of 27 values, one after the other.
The only place where the dimensions are used is to calculate the mapping between the (many) coordinates and the offset within that array. For example, to fetch the item [3, 1, 1] you would need to know if it is a 3x3x3 matrix, a 9x3x1 matrix, or a 27x1x1 matrix - in all cases the ""storage"" would be 27 items long, but the interpretation of ""coordinates"" would be different. If you use zero-based indexing, the calculation is trivial, but you need to know the length of each dimension.
This does mean that resizing and similar operations may require copying the whole array, but that's ok, you trade off the performance of those (rare) operations to gain performance for the much more common operations, e.g. sequential reads.",1.2,True,1,5451
+2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x.
After these changes I get the message ""cannot find declaration to go to"" when I hover over any method and press ctrl.
I'm trying to update pycharm from 2017.3 to 18.1 and I removed the .idea directory, but my issue still exists.
Do you have any idea how I can fix it?","Right click on the folders where you believe relevant code is located -> Mark Directory as -> Sources Root
Note that the menu's wording ""Sources Root"" is misleading: the indexing process is not recursive. You need to mark all the relevant folders.",1.2,True,5,5452
+2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x.
After these changes I get the message ""cannot find declaration to go to"" when I hover over any method and press ctrl.
I'm trying to update pycharm from 2017.3 to 18.1 and I removed the .idea directory, but my issue still exists.
Do you have any idea how I can fix it?","I had a case where the method was implemented in a base class and Pycharm couldn't find it.
+I solved it by importing the base class into the module I was having trouble with.",0.0814518047658113,False,5,5452
+2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x.
After these changes I get the message ""cannot find declaration to go to"" when I hover over any method and press ctrl.
I'm trying to update pycharm from 2017.3 to 18.1 and I removed the .idea directory, but my issue still exists.
Do you have any idea how I can fix it?",What worked for me was to right-click on the folder that has the manage.py > Mark Directory as > Sources Root.,0.3869120172231254,False,5,5452
+2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x.
After these changes I get the message ""cannot find declaration to go to"" when I hover over any method and press ctrl.
I'm trying to update pycharm from 2017.3 to 18.1 and I removed the .idea directory, but my issue still exists.
Do you have any idea how I can fix it?","The solution for me: remember to add an interpreter to the project; it usually says in the bottom right corner whether one is set up or not. Just an alternative solution to the others.
This happened after reinstalling PyCharm and not fully setting up the IDE.",0.0814518047658113,False,5,5452
+2018-04-10 09:23:35.627,Pycharm - Cannot find declaration to go to,"I changed my project code from python 2.7 to 3.x.
After these changes I get the message ""cannot find declaration to go to"" when I hover over any method and press ctrl.
I'm trying to update pycharm from 2017.3 to 18.1 and I removed the .idea directory, but my issue still exists.
Do you have any idea how I can fix it?","I had the same issue, and invalidating the cache or reinstalling the app didn't help.
As it turned out, the problem was this: for some reason *.py files were registered as text files, not Python ones. After I changed it, code completion and other IDE features started to work again.
To change the file type go to Preferences -> Editor -> File Types",0.3869120172231254,False,5,5452
+2018-04-11 09:36:40.700,VScode run code selection,"I just made the transition from Spyder to VScode for my python endeavours. Is there a way to run individual lines of code? That's how I used to do my on-the-spot debugging, but I can't find an option for it in VScode and really don't want to keep setting and removing breakpoints.
Thanks.","In my version of VSCode (1.25), shift+enter will run the selection. Note that you will want to have your integrated terminal running python.",0.2401167094949473,False,2,5453
+2018-04-11 09:36:40.700,VScode run code selection,"I just made the transition from Spyder to VScode for my python endeavours. Is there a way to run individual lines of code? That's how I used to do my on-the-spot debugging, but I can't find an option for it in VScode and really don't want to keep setting and removing breakpoints.
Thanks.","I'm still trying to figure out how to make vscode do what I need (interactive python plots), but I can offer a more complete answer to the question at hand than what has been given so far:
1- Evaluate current selection in debug terminal is an option that is not enabled by default, so you may want to bind the 'editor.debug.action.selectionToRepl' action to whatever keyboard shortcut you choose (I'm using F9). As of today, there still appears to be no option to evaluate the current line while debugging, only the current selection.
+2- Evaluate current line or selection in python terminal is enabled by default, but I'm on Windows where this isn't doing what I would expect - it evaluates in a new runtime, which does no good if you're trying to debug an existing runtime. So I can't say much about how useful this option is, or even if it is necessary, since anytime you'd want to evaluate line-by-line, you'll be in debug mode anyway and sending to the debug console as in 1 above. The Windows issue might have something to do with the settings.json entry
""terminal.integrated.inheritEnv"": true,
not having an effect on Windows as of yet, per vscode documentation.",0.0,False,2,5453
+2018-04-11 12:21:36.483,How to run a django project without manage.py,"Basically I downloaded a django project from SCM. Usually I run a project with these steps:

git clone repository
extract
change directory to project folder
python manage.py runserver

But this project does not contain manage.py. How do I run this project on my local machine?
br","Most likely, this is not supposed to be a complete project, but a plugin application. You should create your own project in the normal way with django-admin.py startproject and add the downloaded app to INSTALLED_APPS.",0.4701041941942874,False,1,5454
+2018-04-11 21:08:57.980,Python - Subtracting the Elements of Two Arrays,"I am new to Python programming and stumbled across this feature of subtracting in python that I can't figure out. I have two 0/1 arrays, both of size 400. I want to subtract each element of array one from its corresponding element in array 2.
For example say you have two arrays A = [0, 1, 1, 0, 0] and B = [1, 1, 1, 0, 1].
Then I would expect A - B = [0 - 1, 1 - 1, 1 - 1, 0 - 0, 0 - 1] = [-1, 0, 0, 0, -1]
However in python I get [255, 0, 0, 0, 255].
Where does this 255 come from and how do I get -1 instead?
Here's some additional information:
The real variables I'm working with are Y and LR_predictions.
Y = array([[0, 0, 0, ..., 1, 1, 1]], dtype=uint8)
LR_predictions = array([0, 1, 1, ..., 0, 1, 0], dtype=uint8)
When I use either Y - LR_predictions or numpy.subtract(Y, LR_predictions)
I get: array([[ 0, 255, 255, ..., 1, 0, 1]], dtype=uint8)
Thanks","I can't replicate this, but it looks like the numbers are 8-bit unsigned (dtype=uint8) and wrapping around somehow: 0 - 1 underflows to 255.",0.0,False,1,5455
+2018-04-12 00:14:16.700,"How do I save a text file in python, to my File Explorer?","I've been using Python for a few months, but I'm sort of new to Files. I would like to know how to save text files into my Documents, using "".txt"".","If you do not want to overwrite an existing file, then use a or a+ mode. This just appends to the existing file. a+ is able to read the file as well",0.0,False,1,5456
+2018-04-12 19:41:16.037,Send data from Python backend to Highcharts while escaping quotes for date,"I would highly appreciate any help on this. I'm constructing dynamic highcharts at the backend and would like to send the data along with html to the frontend. In highcharts, there is a specific field to accept Date such as:
x:Date.UTC(2018,01,01)
or x:2018-01-01. However, when I send dates from the backend, they are always surrounded by quotes, so it becomes: x:'Date.UTC(2018,01,01)'
and x:'2018-01-01', which does not render the chart. Any suggestions on how to escape these quotes?","Highcharts expects the values on datetime axes to be timestamps (number of milliseconds from 01.01.1970). Date.UTC is a JS function that returns a timestamp as a Number. Values surrounded by apostrophes are Strings.
+I'd rather suggest returning a timestamp as a String from the backend (e.g. '1514764800000') and then converting it to a Number in JS (you can use the parseInt function for that).",0.0,False,1,5457
+2018-04-13 03:56:24.947,Google Cloud - What products for time series data cleaning?,"I have around 20TB of time series data stored in big query.
The current pipeline I have is:
raw data in big query => joins in big query to create more big query datasets => store them in buckets
Then I download a subset of the files in the bucket:
Work on interpolation/resampling of data using Python/SFrame, because some of the time series data have missing times and they are not evenly sampled.
However, it takes a long time on a local PC, and I'm guessing it will take days to go through that 20TB of data.

Since the data are already in buckets, I'm wondering what the best Google tools for interpolation and resampling would be?
After resampling and interpolation I might use Facebook's Prophet or Auto ARIMA to create some forecasts. But that would be done locally.

There are a few services from Google that seem like good options.

Cloud DataFlow: I have no experience in Apache Beam, but it looks like the Python API for Apache Beam has missing functions compared to the Java version? I know how to write Java, but I'd like to use one programming language for this task.
Cloud DataProc: I know how to write PySpark, but I don't really need any real-time processing or stream processing, however spark has time series interpolation, so this might be the only option?
Cloud Dataprep: Looks like a GUI for cleaning data, but it's in beta. Not sure if it can do time series resampling/interpolation.

Does anyone have any idea which might best fit my use case?
Thanks","I would use PySpark on Dataproc, since Spark is not just for realtime/streaming but also for batch processing.
You can choose the size of your cluster (and use some preemptibles to save costs) and run this cluster only for the time you actually need to process this data. Afterwards, kill the cluster.
Spark also works very nicely with Python (not as nicely as Scala), but for all intents and purposes the main difference is performance, not reduced API functionality.
Even with the batch processing you can use the WindowSpec for effective time series interpolation
To be fair: I don't have a lot of experience with DataFlow or DataPrep, but that's because our use case is somewhat similar to yours and Dataproc works well for that",1.2,True,1,5458
+2018-04-16 06:29:31.313,Nested list comprehension to flatten nested list,"I'm quite new to Python, and was wondering how I flatten the following nested list using a list comprehension, and also use conditional logic.
nested_list = [[1,2,3], [4,5,6], [7,8,9]]
The following returns a nested list, but when I try to flatten the list by removing the inner square brackets I get errors.
odds_evens = [['odd' if n % 2 != 0 else 'even' for n in l] for l in nested_list]","To create a flat list, you need to have one set of brackets in the comprehension.
Try the below code (note that the for clauses go outer-first, left to right):
odds_evens = ['odd' if n % 2 != 0 else 'even' for l in nested_list for n in l]
Output:
['odd', 'odd', 'odd', 'even', 'even', 'even', 'odd', 'odd', 'odd']",-0.1016881243684853,False,1,5459
+2018-04-17 09:44:50.407,Password protect a Python Script that is Scheduled to run daily,"I have a python script that is scheduled to run at a fixed time daily.
If I am not around, my colleague will be able to access my computer to run the script if there is any error with the windows task scheduler.
I'd like to allow him to run my windows task scheduler, but also to protect my source code in the script... is there any good way to do this, please?
(I have read about methods that use C code to hide it, but I am only familiar with Python)
Thank you","Compile the source to .pyc bytecode, and then move the source somewhere inaccessible.

Open a terminal window in the directory containing your script
Run python -m py_compile yourfile.py (you should get a yourfile.pyc file)
Move yourfile.py somewhere secure
Your script can now be run as python yourfile.pyc

Note that this is not necessarily secure as such - there are ways to decompile the bytecode - but it does obfuscate it, if that is your requirement.",1.2,True,1,5460
+2018-04-18 09:29:24.593,Scrapy - order of crawled urls,"I've got an issue with scrapy and python.
I have several links. I crawl data from each of them in one script with the use of a loop. But the order of the crawled data is random, or at least doesn't match the links.
So I can't match the url of each subpage with the output data.
Like: crawled url, data1, data2, data3.
Data1, data2, data3 => it's ok, because it comes from one loop, but how can I add the current url to the loop, or can I set the order of the links list? Like the first from the list is crawled first, the second is crawled second...",time.sleep() - would it be a solution?,0.0,False,2,5461
+2018-04-18 09:29:24.593,Scrapy - order of crawled urls,"I've got an issue with scrapy and python.
I have several links. I crawl data from each of them in one script with the use of a loop. But the order of the crawled data is random, or at least doesn't match the links.
So I can't match the url of each subpage with the output data.
Like: crawled url, data1, data2, data3.
Data1, data2, data3 => it's ok, because it comes from one loop, but how can I add the current url to the loop, or can I set the order of the links list? Like the first from the list is crawled first, the second is crawled second...","Ok, it seems that the solution is in the settings.py file in scrapy:
DOWNLOAD_DELAY = 3
between requests.
It should be uncommented. By default it's commented.",-0.1352210990936997,False,2,5461
+2018-04-18 20:24:57.843,gcc error when installing pyodbc,"I am installing pyodbc on Redhat 6.5. Python 2.6 and 2.7.4 are installed. I get the error below even though the header files needed for gcc are in /usr/include/python2.6.
I have updated every dev package: yum groupinstall -y 'development tools'
Any ideas on how to resolve this issue would be greatly appreciated???
Installing pyodbc...
Processing ./pyodbc-3.0.10.tar.gz
Installing collected packages: pyodbc
 Running setup.py install for pyodbc ...
error + Complete output from command /opt/rh/python27/root/usr/bin/python -u -c ""import setuptools, tokenize;file='/tmp/pip-JAGZDD-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))"" install --record /tmp/pip-QJasL0-record/install-record.txt --single-version-externally-managed --compile: + running install + running build + running build_ext + building 'pyodbc' extension + creating build + creating build/temp.linux-x86_64-2.7 + creating build/temp.linux-x86_64-2.7/tmp + creating build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build + creating build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build/src + gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DPYODBC_VERSION=3.0.10 -DPYODBC_UNICODE_WIDTH=4 -DSQL_WCHART_CONVERT=1 -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/usr/include -I/opt/rh/python27/root/usr/include/python2.7 -c /tmp/pip-JAGZDD-build/src/cnxninfo.cpp -o build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build/src/cnxninfo.o -Wno-write-strings + In file included from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: + ** +**/tmp/pip-JAGZDD-build/src/pyodbc.h:41:20: error: Python.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:42:25: error: floatobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:43:24: error: longobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:44:24: error: boolobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:45:27: error: unicodeobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:46:26: error: structmember.h: No such file or directory +** + In file included from /tmp/pip-JAGZDD-build/src/pyodbc.h:137, + from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:61:28: error: stringobject.h: No such file or directory + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:62:25: error: intobject.h: No such file or directory + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:63:28: error: bufferobject.h: No such file or directory + In file included from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: + /tmp/pip-JAGZDD-build/src/pyodbc.h: In function ‘void _strlwr(char*)’: + /tmp/pip-JAGZDD-build/src/pyodbc.h:92: error: ‘tolower’ was not declared in this scope + In file included from /tmp/pip-JAGZDD-build/src/pyodbc.h:137, + from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: + /tmp/pip-JAGZDD-build/src/pyodbccompat.h: At global scope: + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:71: error: expected initializer before ‘*’ token + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘Text_Buffer’ declared as an ‘inline’ variable + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘PyObject’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘o’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:82: error: expected ‘,’ or ‘;’ before ‘{’ token + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘Text_Check’ declared as an ‘inline’ variable + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘PyObject’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘o’ was not declared in this scope + 
/tmp/pip-JAGZDD-build/src/pyodbccompat.h:94: error: expected ‘,’ or ‘;’ before ‘{’ token + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: ‘PyObject’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: ‘lhs’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: expected primary-expression before ‘const’ + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: initializer expression list treated as compound expression + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘Text_Size’ declared as an ‘inline’ variable + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘PyObject’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘o’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:110: error: expected ‘,’ or ‘;’ before ‘{’ token + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘TextCopyToUnicode’ declared as an ‘inline’ variable + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘Py_UNICODE’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘buffer’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘PyObject’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘o’ was not declared in this scope + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: initializer expression list treated as compound expression + /tmp/pip-JAGZDD-build/src/pyodbccompat.h:119: error: expected ‘,’ or ‘;’ before ‘{’ token + error: command 'gcc' failed with exit status 1",The resolution was to re-install Python2.7,0.0,False,1,5462 +2018-04-18 21:14:22.503,Count number of nodes per level in a binary tree,"I've been searching for a bit now and haven't been able to find anything similar to my question. Maybe i'm just not searching correctly. Anyways this is a question from my exam review. Given a binary tree, I need to output a list such that each item in the list is the number of nodes on a level in a binary tree at the items list index. What I mean, lst = [1,2,1] and the 0th index is the 0th level in the tree and the 1 is how many nodes are in that level. lst[1] will represent the number of nodes (2) in that binary tree at level 1. The tree isn't guaranteed to be balanced. We've only been taught preorder,inorder and postorder traversals, and I don't see how they would be useful in this question. I'm not asking for specific code, just an idea on how I could figure this out or the logic behind it. Any help is appreciated.","The search ordering doesn't really matter as long as you only count each node once. A depth-first search solution with recursion would be: + +Create a map counters to store a counter for each level. E.g. counters[i] is the number of nodes found so far at level i. Let's say level 0 is the root. +Define a recursive function count_subtree(node, level): Increment counters[level] once. Then for each child of the given node, call count_subtree(child, level + 1) (the child is at a 1-deeper level). +Call count_subtree(root_node, 0) to count starting at the root. This will result in count_subtree being run exactly once on each node because each node only has one parent, so counters[level] will be incremented once per node. A leaf node is the base case (no children to call the recursive function on). +Build your final list from the values of counters, ordered by their keys ascending. + +This would work with any kind of tree, not just binary. 
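For illustration, a minimal sketch of the recursion just described (the names are my own; it assumes each node exposes a children list - for a binary tree that list would hold the non-empty left/right children):

from collections import defaultdict

def count_per_level(root):
    counters = defaultdict(int)         # counters[level] -> nodes counted at that level
    def count_subtree(node, level):
        counters[level] += 1            # count this node exactly once
        for child in node.children:     # empty for a leaf, which is the base case
            count_subtree(child, level + 1)
    count_subtree(root, 0)
    return [counters[i] for i in range(len(counters))]  # index i holds the count of level i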
Running time is O(number of nodes in tree). Side note: The depth-first search solution would be easier to divide and run on parallel processors or machines than a similar breadth-first search solution.",0.3869120172231254,False,1,5463 +2018-04-19 08:06:38.817,Going back to previous line in Spyder,"I am using the Spyder editor and I have to go back and forth from the piece of code that I am writing to the definition of the functions I am calling. I am looking for shortcuts to navigate, given this issue. I know how to go to the function definition (using Ctrl + g), but I don't know how to go back to the piece of code that I am writing. Is there an easy way to do this?","(Spyder maintainer here) You can use the shortcuts Ctrl+Alt+Left and Ctrl+Alt+Right to move to the previous/next cursor position, respectively.",1.2,True,1,5464 +2018-04-19 12:15:18.167,clean up python versions mac osx,"I tried to run a python script on my mac computer, but I ended up in trouble as it needed to install pandas as a dependency. +I tried to get this dependency, but to do so I installed different components like brew, pip, wget and others, including different versions of python using brew and a .pkg package downloaded from python.org. +In the end, I was not able to run the script anyway. +Now I would like to sort things out and have only one version of python (3 probably) working correctly. +Can you suggest a way to get an overview of what I have installed on my computer and how I can clean it up? +Thank you in advance","Use brew list to see what you've installed with Brew. And brew uninstall as needed. Likewise, review the logs from wget to see where it installed things. Keep in mind that MacOS uses Python 2.7 for system critical tasks; it's baked into the OS so don't touch it. +Anything you installed with pip is saved to the /site-packages directory of the Python version in which you installed it, so it will disappear when you remove that version of Python. +The .pkg files install directly into your Applications folder and can be deleted safely like any normal app.",0.999329299739067,False,1,5465 +2018-04-19 20:49:21.823,Python/Flask: only one user can call a endpoint at one time,"I have an API built using Python/Flask, and I have an endpoint called /build-task that is called by the system; this endpoint takes about 30 minutes to run. +My question is: how do I lock the /build-task endpoint when it's already started and running, so that no other user or system can call this endpoint?","You have some approaches for this problem: +1 - You can create a session object, save a flag in the object and check if the endpoint is already running and respond accordingly. +2 - Flag on the database, check if the endpoint is already running and respond accordingly.",0.3869120172231254,False,1,5466 +2018-04-19 22:13:32.410,"After delay() is called on a celery task, it takes more than 5 to 10 seconds for the tasks to even start executing with redis as the server","I have Redis as my Cache Server. When I call delay() on a task, it takes more than 10 seconds for it to even start executing. Any idea how to reduce this unnecessary lag? +Should I replace Redis with RabbitMQ?","It's very difficult to say what the cause of the delay is without being able to inspect your application and server logs, but I can reassure you that the delay is not normal and not an effect specific to either Celery or using Redis as the broker. I've used this combination a lot in the past and execution of tasks happens in a number of milliseconds.
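As a quick sanity check, here is a minimal sketch that times one round trip (the module name and Redis URLs are assumptions - adjust to your setup):

import time
from celery import Celery

# assumed local Redis broker/backend
app = Celery('ping', broker='redis://localhost:6379/0', backend='redis://localhost:6379/0')

@app.task
def ping():
    return 'pong'

# with a worker running (celery -A ping worker), time a single round trip
if __name__ == '__main__':
    start = time.time()
    result = ping.delay()
    print(result.get(timeout=30), 'in', time.time() - start, 'seconds')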
+I'd start by ensuring there are no network related issues between your client creating the tasks, your broker (Redis) and your task consumers (celery workers). +Good luck!",1.2,True,1,5467 +2018-04-21 12:34:46.780,add +1 hour to datetime.time() django on forloop,"I have code like this, I want to check in the time range that has overtime and sum it. +Currently, I am trying out.hour+1 with this code, but it didn't work. + + + overtime_all = 5 + overtime_total_hours = 0 + out = datetime.time(14, 30) + + while overtime_all > 0: + overtime200 = object.filter(time__range=(out, out.hour+1)).count() + overtime_total_hours = overtime_total_hours + overtime200 + overtime_all -=1 + + print overtime_total_hours + + +How do I add 1 hour every loop?...","Timedelta (from datetime) can be used to increment or decrement datetime objects. Unfortunately, it cannot be directly combined with datetime.time objects. +If the values that are stored in your time column are datetime objects, you can use them (e.g.: my_datetime + timedelta(hours=1)). If they are time objects, you'll need to think about whether they represent a moment in time (in that case, they should be converted to datetime objects) or a duration (in that case, it's probably easier to store it as an integer representing the total amount of minutes, and to perform all operations on integers).",1.2,True,2,5468 +2018-04-21 12:34:46.780,add +1 hour to datetime.time() django on forloop,"I have code like this, I want to check in the time range that has overtime and sum it. +Currently, I am trying out.hour+1 with this code, but it didn't work. + + + overtime_all = 5 + overtime_total_hours = 0 + out = datetime.time(14, 30) + + while overtime_all > 0: + overtime200 = object.filter(time__range=(out, out.hour+1)).count() + overtime_total_hours = overtime_total_hours + overtime200 + overtime_all -=1 + + print overtime_total_hours + + +How do I add 1 hour every loop?...","I found the solution now, and it works. + + + overtime_all = 5 + overtime_total_hours = 0 + out = datetime.time(14, 30) + + while overtime_all > 0: + overtime200 = object.filter(time__range=(out,datetime.time(out.hour+1, 30))).count() + overtime_total_hours = overtime_total_hours + overtime200 + overtime_all -=1 + + print overtime_total_hours + +I changed out.hour+1 to datetime.time(out.hour+1, 30) and it works fine now, but I don't know, maybe there is a more compact/better solution. +Thank you guys for your answers.",0.2012947653214861,False,2,5468 +2018-04-22 02:32:28.877,k-means clustering multi column data in python,"I have a dataset consisting of 2000 lines in a text file. +Each line represents the x,y,z (3D coordinate locations) of 20 skeleton joint points of the human body (e.g.: head, shoulder center, shoulder left, shoulder right,......, elbow left, elbow right). I want to do k-means clustering of this data. +Data is separated by spaces; each joint is represented by 3 values (which represent the x,y,z coordinates). Like head and shoulder center are represented by +.0255... .01556600 1.3000... .0243333 .010000 .1.3102000 .... +So basically I have 60 columns in each row, which represent 20 joints, and each joint consists of three points. +My question is how do I format or use this data for k-means clustering?","You don't need to reformat anything. +Each row is a 60 dimensional vector of continuous values with a comparable scale (coordinates), as needed for k-means. +You can just run k-means on this.
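For instance, a rough sketch with scikit-learn (the file name and cluster count are assumptions):

import numpy as np
from sklearn.cluster import KMeans

# 2000 rows of 60 space-separated floats (20 joints x 3 coordinates)
data = np.loadtxt('skeletons.txt')              # hypothetical file name; shape (2000, 60)

kmeans = KMeans(n_clusters=5, random_state=0)   # choose n_clusters to suit your data
labels = kmeans.fit_predict(data)               # one cluster label per row/frame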
But assuming that the measurements were taken in sequence, you may observe a strong correlation between rows, so I wouldn't expect the data to cluster extremely well, unless you set up the user to do and hold certain poses.",1.2,True,1,5469 +2018-04-22 11:28:39.070,How to get the quantity of products in specified date in odoo 10,"I want to create a table in odoo 10 with the following columns: quantity_in_the_first_day_of_month,input_quantity,output_quantity,quantity_in_the_last_day_of_the_month. +But I don't know how to get the quantity on the specified date.","You can join the sale order and sale order line to get the quantity for a specified date. +select + sum(sol.product_uom_qty) +from + sale_order s,sale_order_line sol +where + sol.order_id=s.id and + DATE(s.date_order) = '2018-01-01'",0.0,False,1,5470 +2018-04-24 04:53:51.450,How do CPU cores get allocated to python processes in multiprocessing?,"Let's say I am running multiple python processes (not threads) on a multi core CPU (say 4). GIL is process level, so the GIL within a particular process won't affect other processes. +My question here is: will the GIL within one process take hold of only a single core out of 4 cores, or will it take hold of all 4 cores? +If one process locks all cores at once, then multiprocessing should not be any better than multi threading in python. If not, how do the cores get allocated to various processes? + +As an observation, in my system which is 8 cores (4*2 because of + hyperthreading), when I run a single CPU bound process, the CPU usage + of 4 out of 8 cores goes up. + +Simplifying this: +4 python threads (in one process) running on a 4 core CPU will take more time than a single thread doing the same work (considering the work is fully CPU bound). Will 4 different processes doing that amount of work reduce the time taken by a factor of near 4?",Process to CPU/CPU core allocation is handled by the Operating System.,0.0,False,1,5471 +2018-04-24 13:49:41.587,"How to read back the ""random-seed"" from a saved model of Dynet","I have a model already trained with the dynet library. But I forgot the --dynet-seed parameter when training this model. +Does anyone know how to read back this parameter from the saved model? +Thank you in advance for any feedback.","You can't read back the seed parameter. The Dynet model does not save the seed parameter. The obvious reason is that it is not required at testing time. The seed is only used to set fixed initial weights, random shuffling etc. for different experimental runs. At testing time no parameter initialisation or shuffling is required. So, there is no need to save the seed parameter. +To the best of my knowledge, none of the other libraries like tensorflow, pytorch etc. save the seed parameter either.",1.2,True,1,5472 +2018-04-24 20:57:16.490,Django/Python - Serial line concurrency,"I'm currently working on a gateway with an embedded Linux and a Webserver. The goal of the gateway is to retrieve data from electrical devices through an RS485/Modbus line, and to display them on a server. +I'm using Nginx and Django, and the web front-end is delivered by ""static"" files. Repeatedly, a Javascript script file makes AJAX calls that send CGI requests to Nginx. These CGI requests are answered with JSON responses thanks to Django. The responses are mostly data that has been read on the appropriate Modbus device. +The exact path is the following: +Randomly timed CGI call -> urls.py -> ModbusCGI.py (imports another script, ModbusComm.py) -> ModbusComm.py creates a Modbus client and instantly tries to read with it.
Next to that, I wanted to implement a Datalogger, to store data in a DB at regular intervals. I made a script that also imports the ModbusComm.py script, but it doesn't work: sometimes multiple Modbus frames are sent at the same time (the datalogger and cgi scripts call the same function in the ModbusComm.py ""files"" at the same time), which results in an error. +I'm sure this problem would also occur if there are a lot of users on the server (CGI requests sent at the same time). Or not? (Is a queue system already managed for CGI requests? I'm a bit lost) +So my goal would be to make a queue system that could handle calls from several python scripts => make them wait while it's not their turn => call a function with the right arguments when it's their turn (actually using the modbus line), and send back the response to the python script so it can generate the JSON response. +I really don't know how to achieve that, and I'm sure there are better ways to do this. +If I'm not clear enough, don't hesitate to make me aware of it :)","I had the same problem when I had to allow multiple processes to read some Modbus (and not only Modbus) data through a serial port. I ended up with a standalone process (“serial port server”) that exclusively works with the serial port. All other processes work with that port through that standalone process via some inter-process communication mechanism (we used Unix sockets). +This way, when an application wants to read a Modbus register it connects to the “serial port server”, sends its request and receives the response. All the actual serial port communication is done by the “serial port server” in a sequential way to ensure consistency.",0.0,False,1,5473 +2018-04-24 22:23:26.923,Make Python 3 default on Mac OS?,"I would like to ask if it is possible to make Python 3 the default interpreter on Mac OS 10 when typing python right away from the terminal? If so, can somebody explain how to do it? I'm avoiding switching between the environments. +Cheers","You can do that by changing the alias, typing in something like $ alias python=python3 in the terminal. +If you want the change to persist, open ~/.bash_profile using nano and then add alias python=python3. CTRL+O to save and CTRL+X to close. +Then type $ source ~/.bash_profile in the terminal.",0.2012947653214861,False,1,5474 +2018-04-25 00:38:39.330,can't import more than 50 contacts from csv file to telegram using Python3,"Trying to import 200 contacts from a CSV file to telegram using Python3 code. It works for the first 50 contacts and then stops, showing the error below: +telethon.errors.rpc_error_list.FloodWaitError: A wait of 101 seconds is required +Any idea how I can import the whole list without waiting?? Thanks!!","You cannot import a large number of people sequentially. Telegram flags you as spam. +As a result, you must sleep between your requests",0.0,False,1,5475 +2018-04-25 07:54:39.583,Grouping tests in pytest: Classes vs plain functions,"I'm using pytest to test my app. +pytest supports 2 approaches (that I'm aware of) of how to write tests: + +In classes: + + +test_feature.py -> class TestFeature -> def test_feature_sanity + + +In functions: + + +test_feature.py -> def test_feature_sanity + +Is the approach of grouping tests in a class needed? Is it allowed to backport the unittest builtin module? +Which approach would you say is better and why?","There are no strict rules regarding organizing tests into modules vs classes. It is a matter of personal preference.
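For illustration, a minimal sketch of both layouts (the feature under test is hypothetical):

# content of test_feature.py
def feature():
    return 42   # stand-in for the real code under test

# plain-function style
def test_feature_sanity():
    assert feature() == 42

# class style - just a grouping, no unittest.TestCase needed
class TestFeature:
    def test_feature_sanity(self):
        assert feature() == 42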
Initially I tried organizing tests into classes; after some time I realized I had no use for another level of organization. Nowadays I just collect test functions into modules (files). +I could see a valid use case when some tests could be logically organized into the same file, but still have an additional level of organization into classes (for instance to make use of a class-scoped fixture). But this can also be done by just splitting into multiple modules.",1.2,True,2,5476 +2018-04-25 07:54:39.583,Grouping tests in pytest: Classes vs plain functions,"I'm using pytest to test my app. +pytest supports 2 approaches (that I'm aware of) of how to write tests: + +In classes: + + +test_feature.py -> class TestFeature -> def test_feature_sanity + + +In functions: + + +test_feature.py -> def test_feature_sanity + +Is the approach of grouping tests in a class needed? Is it allowed to backport the unittest builtin module? +Which approach would you say is better and why?","Typically in unit testing, the object of our tests is a single function. That is, a single function gives rise to multiple tests. In reading through test code, it's useful to have tests for a single unit be grouped together in some way (which also allows us to e.g. run all tests for a specific function), so this leaves us with two options: + +Put all tests for each function in a dedicated module +Put all tests for each function in a class + +In the first approach we would still be interested in grouping all tests related to a source module (e.g. utils.py) in some way. Now, since we are already using modules to group tests for a function, this means that we would like to use a package to group tests for a source module. +The result is one source function maps to one test module, and one source module maps to one test package. +In the second approach, we would instead have one source function map to one test class (e.g. my_function() -> TestMyFunction), and one source module map to one test module (e.g. utils.py -> test_utils.py). +It depends on the situation, perhaps, but the second approach, i.e. a class of tests for each function you are testing, seems clearer to me. Additionally, if we are testing source classes/methods, then we could simply use an inheritance hierarchy of test classes, and still retain the one source module -> one test module mapping. +Finally, another benefit to either approach over just a flat file containing tests for multiple functions, is that with classes/modules already identifying which function is being tested, you can have better names for the actual tests, e.g. test_does_x and test_handles_y instead of test_my_function_does_x and test_my_function_handles_y.",0.9999092042625952,False,2,5476 +2018-04-25 08:16:18.483,How to calculate a 95 credible region for a 2D joint distribution?,"Suppose we have a joint distribution p(x_1,x_2), and we know x_1,x_2,p. Both are discrete, (x_1,x_2) is a scatter, its contour could be drawn, marginal as well. I would like to show the area of the 95% quantile (95% of the data will be contained) of the joint distribution, how can I do that?","If you are interested in finding a pair x_1, x_2 of real numbers such that +P(X_1<=x_1, X_2<=x_2) = 0.95 and your distribution is continuous then there will be infinitely many of these pairs. You might be better off just fixing one of them and then finding the other",0.0,False,2,5477 +2018-04-25 08:16:18.483,How to calculate a 95 credible region for a 2D joint distribution?,"Suppose we have a joint distribution p(x_1,x_2), and we know x_1,x_2,p.
Both are discrete, (x_1,x_2) is a scatter, its contour could be drawn, marginal as well. I would like to show the area of the 95% quantile (95% of the data will be contained) of the joint distribution, how can I do that?","As the other answer points out, there are infinitely many solutions to this problem. A practical one is to find the approximate center of the point cloud and extend a circle from there until it contains approximately 95% of the data. Then, find the convex hull of the selected points and compute its area. +Of course, this will only work if the data is sort of concentrated in a single area. This won't work if there are several clusters.",0.2012947653214861,False,2,5477 +2018-04-25 12:20:28.077,queries and advanced operations in influxdb,"Recently started working on influxDB; I can't find how to add new measurements or make a table of data from separate measurements, like in SQL where we have to join tables or so. +The influxdb docs aren't that clear. I'm currently using the terminal for everything and wouldn't mind switching to python, but the docs are mostly about HTTP POST schemes; is there any other alternative? +I would prefer influxDB in python if the community support is good","The InfluxDB query language does not support joins across measurements. +It instead needs to be done client side after querying data. Querying data from multiple measurements, without a join, can be done with one query.",1.2,True,1,5478 +2018-04-26 22:40:24.603,Run external python file with Mininet,"I'm trying to write a defense system using mininet + pox. +I have an l3_edited file to calculate entropy, so I can tell when a host is attacked. +I have my myTopo.py file that creates a topo with Mininet. +Now my question: +I want to change hosts' IPs when l3_edited detects an attack. Where should I do it? +I believe I should write a program and run it in mininet (not like a custom topo, but run it after creating mininet, from the command line). If that's true, how can I get the hosts' objects? If I can get them, I can change their IPs. +Or should I do it in my myTopo.py??? Then, how can I run my defense code when I detect an attack?","If someone is looking for an answer... +You can use your custom topology file to do other tasks. Multithreading solved my problem.",1.2,True,1,5479 +2018-04-27 12:58:39.440,Select columns periodically on pandas DataFrame,"I'm working on a Dataframe with 1116 columns, how could I select just the columns with a period of 17? +More clearly, select the 12th, 29th, 46th, 63rd... columns","df.iloc[:, 12::17] (the original i*17 indexing starts at column 0 rather than 12)",0.0,False,1,5480 +2018-04-27 15:23:38.983,How to create different Python Wheel distributions for Ubuntu and RedHat,"I have a Cython-based package which depends on other C++ SO libraries. Those libraries are binary different between Ubuntu (dev) and RedHat (prod). So the SO file generated by Cython has to be different as well. If I use Wheel to package it the file name is same for both environments: +package-version-cp27-cp27mu-linux_x86_64.whl +So if I upload it to pypi it will conflict with the RedHat-based distribution of the same package. I have to upload it to pypi because the project is then PEX-ed (via Pants) and PEX tries to download from pypi and fails if it does not find it with the following exception. +Exception caught: 'pex.resolver.Unsatisfiable' +Any ideas how to resolve it? +Thx.","I found a solution by using a different PyPi instance. So our DEV Ubuntu environment and PROD RedHat just use two different PyPi sources.
To do that I had to make two configurations, ~/.pypirc and ~/.pip/pip.conf, to upload.",0.0,False,1,5481 +2018-04-28 20:06:07.330,Why use zappa/chalice in serverless python apps?,"I am new to python and thought it would be great to have my very first python project running on AWS infrastructure. Given my previous node.js experience with lambdas, I thought that every function would have its own code and the app would only be glued together by the persistence layer; everything else would be decoupled separate functions. +In Python lambdas there are serverless microframeworks like Chalice or Zappa that seem to be an accepted practice. For me though it feels like they are hacking around the concept of the serverless approach. You still have a full-blown app built on, let's say, Flask, or even Django, and that app is served through lambda. There is still one application that has all the routing, configs, boilerplate code, etc instead of small independent functions that just do their job. I currently do not see how and if this makes life any easier. + +What is the benefit / reason for having the whole code base served through lambdas as opposed to individual functions? +Is there an execution time penalty if using flask/django/whatever else with serverless apps? +If this depends on the particular project, what would be the guidance on when to use a framework, and when to use individual functions?","Benefits. You can use a known concept, and adopt it in serverless. +Performance. The smaller the code is, the less RAM it takes. It must be loaded, processed, and so on. Just to process a single request? For me that was always too much. +Let's say you have a Django project that is running on Elastic Beanstalk, and you need some lambdas to deal with limited problems. Now. Do you want to have two separate configurations? What about common functions? + +Serverless looks nice, but... let's assume that you have permissions, so your app, for every call, will pull that stuff. Perhaps you have it cached - in redis, as whole permissions for a user... Another option is DynamoDB, which is even more expensive. Yes, there is a nice SLA, but the API is quite strange; also if you plan on keeping more data there... the more data you have the slower it works - for the same money. In other words - if you put more data, fetching will cost more - if you want the same speed.",0.0,False,1,5482 +2018-04-29 13:47:46.340,How to preprocess audio data for input into a Neural Network,"I'm currently developing a keyword-spotting system that recognizes digits from 0 to 9 using deep neural networks. I have a dataset of people saying the numbers (namely the TIDIGITS dataset, collected at Texas Instruments, Inc), however the data is not prepared to be fed into a neural network, because not all the audio data have the same audio length, plus some of the files contain several digits being spoken in sequence, like ""one two three"". +Can anyone tell me how I would transform these wav files into 1 second wav files containing only the sound of one digit? Is there any way to automatically do this? Preparing the audio files individually would be too time-consuming. +Thank you in advance!","I would split each wav by the areas of silence. Trim the silence from beginning and end. Then I'd run each one through an FFT for different sections. Smaller ones at the beginning of the sound. Then I'd normalise the frequencies against the fundamental.
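As a rough sketch of the silence-splitting step described above (librosa and soundfile are assumptions; tune top_db for your recordings):

import librosa
import soundfile as sf

y, sr = librosa.load('one_two_three.wav', sr=None)   # hypothetical file name
# returns (start, end) sample intervals of non-silent audio
intervals = librosa.effects.split(y, top_db=30)
for i, (start, end) in enumerate(intervals):
    sf.write('digit_%d.wav' % i, y[start:end], sr)   # one file per detected digit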
Then I'd feed the results into the NN as a 3d array of volumes, frequencies and times.",0.2012947653214861,False,1,5483 +2018-04-29 20:58:03.330,How would i generate a random number in python without duplicating numbers,"I was wondering how to generate a random 4 digit number that has no duplicate digits in python 3.6 +I could generate 0000-9999 but that would give me a number with a duplicate like 3445. Anyone have any ideas? +Thanks in advance","Generate a random number +Check if there are any duplicates; if so, go back to 1 +You have a number with no duplicates + +OR +Generate it one digit at a time from a list, removing the digit from the list at each iteration. + +Generate a list with numbers 0 to 9 in it. +Create two variables, the result holding value 0, and the multiplier holding 1. +Remove a random element from the list, multiply it by the multiplier variable, add it to the result. +Multiply the multiplier by 10. +Go to step 3 and repeat for the next digit (up to the desired digits). +You now have a random number with no repeats.",-0.3869120172231254,False,1,5484 +2018-04-30 15:12:36.730,Keras Neural Network. Preprocessing,"I have this doubt when I fit a neural network in a regression problem. I preprocessed the predictors (features) of my train and test data using the Imputer and Scale methods from sklearn.preprocessing, but I did not preprocess the class or target of my train data or test data. +In the architecture of my neural network all the layers have relu as activation function except the last layer, which has the sigmoid function. I have chosen the sigmoid function for the last layer because the values of the predictions are between 0 and 1. +tl;dr: In summary, my question is: should I deprocess the output of my neural net? If I don't use the sigmoid function, the values of my output can be < 0 or > 1. In this case, how should I do it? +Thanks","Usually, if you are doing regression you should use a 'linear' activation in the last layer. A sigmoid function will 'favor' values closer to 0 and 1, so it would be harder for your model to output intermediate values. +If the distribution of your targets is gaussian or uniform I would go with a linear output layer. De-processing shouldn't be necessary unless you have very large targets.",0.0,False,1,5485 +2018-05-01 02:43:55.537,How to calculate the HMAC(sha256) of a text using a public certificate (.pem) as key,"I'm working on Json Web Tokens and wanted to reproduce it using python, but I'm struggling with how to calculate the HMAC_SHA256 of the texts using a public certificate (pem file) as a key. +Does anyone know how I can accomplish that!? +Tks","In case anyone finds this question: the answer provided by the host works, but the idea is wrong. You don't use any RSA keys with the HMAC method. The RSA key pair (public and private) is used for asymmetric algorithms while HMAC is a symmetric algorithm. +In HMAC, the two sides of the communication keep the same secret text (bytes) as the key. It can be a public_cert.pem as long as you keep it secret. But a public.pem is usually shared publicly, which makes it unsafe.",0.3869120172231254,False,1,5486 +2018-05-01 05:40:22.107,How to auto scale in JES,"I'm coding watermarking images in JES and I was wondering how to watermark a picture by automatically scaling a watermark image? +If anyone can help me that would be great. +Thanks.","I'll start by giving you a quote from the INFT1004 assignment you are asking for help with.
+""In particular, you should try not to use code or algorithms from external sources, and not to obtain help from people other than your instructors, as this can prevent you from mastering these concepts"" +It specifically says in this assignment that you should not ask people online or use code you find or request online, and is a breach of the University of Newcastle academic integrity code - you know the thing you did a module on before you started the course. A copy of this post will be sent along to the course instructor.",0.0,False,1,5487 +2018-05-01 13:33:54.250,Multi-label classification methods for large dataset,"I realize there's another question with a similar title, but my dataset is very different. +I have nearly 40 million rows and about 3 thousand labels. Running a simply sklearn train_test_split takes nearly 20 minutes. +I initially was using multi-class classification models as that's all I had experience with, and realized that since I needed to come up with all the possible labels a particular record could be tied to, I should be using a multi-label classification method. +I'm looking for recommendations on how to do this efficiently. I tried binary relevance, which took nearly 4 hours to train. Classifier chains errored out with a memory error after 22 hours. I'm afraid to try a label powerset as I've read they don't work well with a ton of data. Lastly, I've got adapted algorithm, MlkNN and then ensemble approaches (which I'm also worried about performance wise). +Does anyone else have experience with this type of problem and volume of data? In addition to suggested models, I'm also hoping for advice on best training methods, like train_test_split ratios or different/better methods.","20 minutes for this size of a job doesn't seem that long, neither does 4 hours for training. +I would really try vowpal wabbit. It excels at this sort of multilabel problem and will probably give unmatched performance if that's what you're after. It requires significant tuning and will still require quality training data, but it's well worth it. This is essentially just a binary classification problem. An ensemble will of course take longer so consider whether or not it's necessary given your accuracy requirements.",1.2,True,1,5488 +2018-05-01 20:23:34.880,PULP: Check variable setting against constraints,"I'm looking to set up a constraint-check in Python using PULP. Suppose I had variables A1,..,Xn and a constraint (AffineExpression) A1X1 + ... + AnXn <= B, where A1,..,An and B are all constants. +Given an assignment for X (e.g. X1=1, X2=4,...Xn=2), how can I check if the constraints are satisfied? I know how to do this with matrices using Numpy, but wondering if it's possible to do using PULP to let the library handle the work. +My hope here is that I can check specific variable assignments. I do not want to run an optimization algorithm on the problem (e.g. prob.solve()). +Can PULP do this? Is there a different Python library that would be better? 
I've thought about Google's OR-Tools but have found the documentation is a little bit harder to parse through than PULP's.","It looks like this is possible by doing the following: + +Define PULP variables and constraints and add them to an LpProblem +Make a dictionary of your assignments in the form {'variable name': value} +Use LpProblem.assignVarsVals(your_assignment_dict) to assign those values +Run LpProblem.valid() to check that your assignment meets all constraints and variable restrictions + +Note that this will almost certainly be slower than using numpy and Ax <= b. Formulating the problem might be easier, but performance will suffer due to how PULP runs these checks.",1.2,True,1,5489 +2018-05-02 15:18:32.357,How to find if there are wrong values in a pandas dataframe?,"I am quite new to Python coding, and I am dealing with a big dataframe for my internship. +I had an issue as sometimes there are wrong values in my dataframe. For example I find string type values (""broken leaf"") instead of integer type values such as (""120 cm"") or (NaN). +I know there is the df.replace() function, but to use it you need to know that there are wrong values. So how do I find out if there are any wrong values inside my dataframe? +Thank you in advance","""120 cm"" is a string, not an integer, so that's a confusing example. Some ways to find ""unexpected"" values include: +Use ""describe"" to examine the range of numerical values, to see if there are any far outside of your expected range. +Use ""unique"" to see the set of all values for cases where you expect a small number of permitted values, like a gender field. +Look at the datatypes of columns to see whether there are strings creeping in to fields that are supposed to be numerical. +Use regexps if valid values for a particular column follow a predictable pattern.",0.0,False,1,5490 +2018-05-03 09:34:04.367,Read raw ethernet packet using python on Raspberry,"I have a device which is sending packets with their own specific construction (header, data, crc) through its ethernet port. +What I would like to do is to communicate with this device using a Raspberry and Python 3.x. +I am already able to send raw ethernet packets using the ""socket"" Library; I've checked with wireshark on my computer and everything seems to be transmitted as expected. +But now I would like to read incoming raw packets sent by the device and store them somewhere on my RPI to use them later. +I don't know how to use the ""socket"" Library to read raw packets (I mean layer 2 packets); I only find tutorials to read higher level packets like TCP/IP. +What I would like to do is something similar to what wireshark does on my computer, that is to say read all raw packets going through the ethernet port. +Thanks, +Alban","Did you try using the ettercap package (ettercap-graphical)? +It should be available with apt. +Alternatively you can try using tcpdump (a command-line packet capture tool) or even check iptables",0.0,False,1,5491 +2018-05-04 02:07:04.457,Host command and ifconfig giving different ips,"I am using a server (server_name.corp.com) inside a corporate company. On the server I am running a flask server to listen on 0.0.0.0:5000. +Servers are not exposed to the outside world but are accessible via VPNs. +Now when I run host server_name.corp.com on the box I get some ip1 (10.*.*.*). +When I run ifconfig on the box it gives me ip2 (10.*.*.*). +Also if I run ping server_name.corp.com on the same box I get ip2. +Also I can ssh into the server with ip1, not ip2. +I am able to access the flask server at ip1:5000 but not on ip2:5000.
I am not into networking, so I'm fully confused about why there are 2 different IPs and why I can access ip1:5000 from the browser but not ip2:5000. +Also, what is the equivalent of the host command in python (how to get ip1 from python? I am using socket.gethostbyname(server_name.corp.com), which gives me ip2)","The network status isn't quite clear from your statements; I can only say that if you want to get ip1 from python, you could use the standard lib subprocess, which is usually used to execute os commands. (See subprocess.Popen)",0.0,False,1,5492 +2018-05-05 02:23:02.700,how to use python to check if subdomain exists?,"Does anyone know how to check if a subdomain exists on a website? +I am doing a sign up form and everyone gets their own subdomain. I have some javascript written on the front end but I need to find a way to check on the backend.","Put the assigned subdomain in a database table within a unique indexed column. It will be easier to check from python (sqlalchemy, pymysql etc...) if the subdomain has already been used + it will automatically prevent duplicates from being assigned/inserted.",0.0,False,2,5493 +2018-05-05 02:23:02.700,how to use python to check if subdomain exists?,"Does anyone know how to check if a subdomain exists on a website? +I am doing a sign up form and everyone gets their own subdomain. I have some javascript written on the front end but I need to find a way to check on the backend.","Do a curl or http request on the subdomain you want to verify; if you get 404 that means it doesn't exist, if you get 200 it definitely exists",0.2012947653214861,False,2,5493 +2018-05-05 14:24:30.920,How to use visual studio code >after< installing anaconda,"If you have never installed anaconda, it seems to be rather simple. In the installation process of Anaconda, you choose to install visual studio code and that is it. +But I would like some help in my situation: +My objective: I want to use visual studio code with anaconda + +I have a mac with anaconda 1.5.1 installed. +I installed visual studio code. +I updated anaconda (from the terminal); now it is 1.6.9 + +From there, I don't know how to proceed. +Any help please","You need to select the correct python interpreter. When you are in a .py file, there's a blue bar at the bottom of the window (if you have the dark theme); there you can select the anaconda python interpreter. +Else you can open the command window with ctrl+p or command+p and type '>' for running vscode commands and search '> Python Interpreter'. +If you don't see anaconda there, google how to add a new python interpreter to vscode",0.3869120172231254,False,1,5494 +2018-05-05 16:40:27.703,Calling Python scripts from Java. Should I use Docker?,"We have a Java application in our project and what we want is to call some Python script and return results from it. What is the best way to do this? +We want to isolate Python execution to avoid affecting the Java application at all. Probably, Dockerizing Python is the best solution. I don't know any other way. +Then, the question is how to call it from Java. +As far as I understand there are several ways:
+ +What is the most effective and correct way to connect Java App with Python, running inside Docker?","You don’t need docker for this. There are a couple of options, you should choose depending on what your Java application is doing. + +If the Java application is a client - based on swing, weblaunch, or providing UI directly - you will want to turn the python functionality to be wrapped in REST/HTTP calls. +If the Java application is a server/webapp - executing within Tomcat, JBoss or other application container - you should simply wrap the python scrip inside a exec call. See the Java Runtime and ProcessBuilder API for this purpose.",1.2,True,1,5495 +2018-05-05 21:56:31.143,Unintuitive solidity contract return values in ethereum python,"I'm playing around with ethereum and python and I'm running into some weird behavior I can't make sense of. I'm having trouble understanding how return values work when calling a contract function with the python w3 client. Here's a minimal example which is confusing me in several different ways: +Contract: + +pragma solidity ^0.4.0; + +contract test { + function test(){ + + } + + function return_true() public returns (bool) { + return true; + } + + function return_address() public returns (address) { + return 0x111111111111111111111111111111111111111; + } +} + +Python unittest code + +from web3 import Web3, EthereumTesterProvider +from solc import compile_source +from web3.contract import ConciseContract +import unittest +import os + + +def get_contract_source(file_name): + with open(file_name) as f: + return f.read() + + +class TestContract(unittest.TestCase): + CONTRACT_FILE_PATH = ""test.sol"" + DEFAULT_PROPOSAL_ADDRESS = ""0x1111111111111111111111111111111111111111"" + + def setUp(self): + # copied from https://github.com/ethereum/web3.py/tree/1802e0f6c7871d921e6c5f6e43db6bf2ef06d8d1 with MIT licence + # has slight modifications to work with this unittest + contract_source_code = get_contract_source(self.CONTRACT_FILE_PATH) + compiled_sol = compile_source(contract_source_code) # Compiled source code + contract_interface = compiled_sol[':test'] + # web3.py instance + self.w3 = Web3(EthereumTesterProvider()) + # Instantiate and deploy contract + self.contract = self.w3.eth.contract(abi=contract_interface['abi'], bytecode=contract_interface['bin']) + # Get transaction hash from deployed contract + tx_hash = self.contract.constructor().transact({'from': self.w3.eth.accounts[0]}) + # Get tx receipt to get contract address + tx_receipt = self.w3.eth.getTransactionReceipt(tx_hash) + self.contract_address = tx_receipt['contractAddress'] + # Contract instance in concise mode + abi = contract_interface['abi'] + self.contract_instance = self.w3.eth.contract(address=self.contract_address, abi=abi, + ContractFactoryClass=ConciseContract) + + def test_return_true_with_gas(self): + # Fails with HexBytes('0xd302f7841b5d7c1b6dcff6fca0cd039666dbd0cba6e8827e72edb4d06bbab38f') != True + self.assertEqual(True, self.contract_instance.return_true(transact={""from"": self.w3.eth.accounts[0]})) + + def test_return_true_no_gas(self): + # passes + self.assertEqual(True, self.contract_instance.return_true()) + + def test_return_address(self): + # fails with AssertionError: '0x1111111111111111111111111111111111111111' != '0x0111111111111111111111111111111111111111' + self.assertEqual(self.DEFAULT_PROPOSAL_ADDRESS, self.contract_instance.return_address()) + +I have three methods performing tests on the functions in the contract. 
In one of them, a non-True value is returned - instead, HexBytes are returned. In another, the contract function returns an address constant but python sees a different value from what's expected. In yet another case I call the return_true contract function without gas and the True constant is seen by python. + +Why does calling return_true with transact={""from"": self.w3.eth.accounts[0]} cause the return value of the function to be HexBytes(...)? +Why does the address returned by return_address differ from what I expect? + +I think I have some sort of fundamental misunderstanding of how gas affects function calls.","The returned value is the transaction hash on the blockchain. When transacting (i.e., when using ""transact"" rather than ""call"") the blockchain gets modified, and the library you are using returns the transaction hash. During that process you must have paid ether in order to be able to modify the blockchain. However, operating in read-only mode costs no ether at all, so there is no need to specify gas. +Discounting the ""0x"" at the beginning, ethereum addresses have a length of 40, but in your test you are using a 39-character-long address, so there is a missing ""1"" there. Meaning, the tests are correct; you have an error in your input. + +Offtopic, both return_true and return_address should be marked as view in Solidity, since they are not actually modifying the state. I'm pretty sure you get a warning in remix. Once you do that, there is no need to access both methods using ""transact"" and paying ether, and you can do it using ""call"" for free. +EDIT +Forgot to mention: in case you need to access the transaction hash after using transact you can do so by calling the .hex() method on the returned HexBytes object. That'll give you the transaction hash as a string, which is usually way more useful than as a HexBytes. +I hope it helps!",0.6730655149877884,False,1,5496 +2018-05-05 22:40:06.727,Colaboratory: How to install and use on local machine?,"Google Colab is awesome to work with, but I wish I could run Colab Notebooks completely locally and offline, just like Jupyter notebooks served locally? +How do I do this? Is there a Colab package which I can install? + +EDIT: Some previous answers to the question seem to give methods to access Colab hosted by Google. But that's not what I'm looking for. +My question is how do I pip install colab so I can run it locally like jupyter after pip install jupyter. The Colab package doesn't seem to exist, so if I want it, what do I do to install it from the source?","Google Colab is a cloud computer; it only runs through the Internet. You can design your Python script and run it through Colab. Running Python will use Google Colab hardware: Google will allocate CPU, RAM, GPU etc. for your Python script. Your local computer just submits Python code to Google Colab and runs it, then Google Colab returns the result to your local computer. Cloud computation is stronger than local computation if your local computer hardware is limited. This question, asked by me, may inspire you: https://stackoverflow.com/questions/48879495/how-to-apply-googlecolab-stronger-cpu-and-more-ram/48922199#48922199",-0.4961739557460144,False,1,5497 +2018-05-06 09:13:56.887,Predicting binary classification,"I have been self-learning machine learning lately, and I am now trying to solve a binary classification problem (i.e: one label which can either be true or false). I was representing this as a single column which can be 1 or 0 (true or false).
Nonetheless, I was researching and read about how categorical variables can reduce the effectiveness of an algorithm, and how one should one-hot encode them or translate them into dummy variables, thus ending up with 2 labels (variable_true, variable_false). +Which is the correct way to go about this? Should one predict a single variable with two possible values or 2 simultaneous variables with a fixed unique value? +As an example, let's say we want to predict whether a person is a male or female: +Should we have a single label Gender and predict 1 or 0 for that variable, or Gender_Male and Gender_Female?","It's basically the same. When talking about binary classification, you can think of a final layer for each model that adapts the output to the other format, e.g. if the model outputs 0 or 1, then the final layer will translate it to a vector like [1,0] or [0,1], and vice-versa by a threshold criterion, usually >= 0.5. +A nice byproduct of 2 nodes in the final layer is the confidence level of the model in its predictions: [0.80, 0.20] and [0.55, 0.45] will both yield a [1,0] classification, but the first prediction has more confidence. +This can also be extrapolated from a 1-node output by the distance of the output from the fringes 1 and 0, so 0.1 will be considered a 0 prediction with more confidence than 0.3.",1.2,True,1,5498 +2018-05-06 21:22:33.530,Does gRPC have the ability to add a maximum retry for call?,"I haven't found any examples of how to add retry logic on some rpc call. Does gRPC have the ability to add a maximum retry for a call? +If so, is it a built-in function?",Retries are not a feature of gRPC Python at this time.,1.2,True,1,5499 +2018-05-07 02:06:48.980,Tensorflow How can I make a classifier from a CSV file using TensorFlow?,"I need to create a classifier to identify some aphids. +My project has two parts, one with computer vision (OpenCV), which I have already concluded. The second part is with Machine Learning using TensorFlow. But I have no idea how to do it. +I have the data below, which was extracted using OpenCV; it consists of HuMoments (I believe that is the path I must follow). Each line is the HuMoments of an aphid (insect); I have 500 more data lines that I saved to a CSV file. +How can I make a classifier from a CSV file using TensorFlow? + +HuMoments (in CSV file): + 0.27356047,0.04652453,0.00084231,7.79486673,-1.4484489,-1.4727380,-1.3752532 + 0.27455502,0.04913969,3.91102408,1.35705980,3.08570234,2.71530819,-5.0277362 + 0.20708829,0.01563241,3.20141907,9.45211423,1.53559373,1.08038279,-5.8776765 + 0.23454372,0.02820523,5.91665789,6.96682467,1.02919203,7.58756583,-9.7028848","You can start with this tutorial, and try it first without changing anything; I strongly suggest this unless you are already familiar with Tensorflow, so that you gain some familiarity with it. +Now you can modify the input layer of this network to match the dimensions of the HuMoments. Next, you can give a numeric label to each type of aphid that you want to recognize, and adjust the size of the output layer to match them. +You can now read the CSV file using python, and remove any text like ""HuMoments"". If your file has names of aphids, remove them and replace them with numerical class labels. Replace the training data of the code in the above link with these data. +Now you can train the network according to the description under the title ""Train the Model"". +One more note. Unless it is essential to use Tensorflow to match your project requirements, I suggest using Keras.
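As an illustration, a minimal sketch of such a classifier in Keras (file names and the number of aphid classes are assumptions):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# rows of 7 HuMoments each; labels are integer class ids (both files hypothetical)
X = np.loadtxt('humoments.csv', delimiter=',')   # shape (n_samples, 7)
y = np.loadtxt('labels.csv', dtype=int)          # one class id per row
num_classes = 3                                  # assumed number of aphid types

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(7,)))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=50, batch_size=16)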
Keras is a higher level library that is much easier to learn than Tensorflow, and you have more sample code online.",0.0,False,1,5500 +2018-05-07 23:22:30.577,How can you fill in an open dialog box in headless chrome in Python and Selenium?,"I'm working with Python and Selenium to do some automation in the office, and I need to fill in an ""upload file"" dialog box (a windows ""open"" dialog box), which was invoked from a site using a headless chrome browser. Does anyone have any idea on how this could be done? +If I wasn't using a headless browser, Pywinauto could be used with a line similar to the following, for example, but this doesn't appear to be an option in headless chrome: +app.pane.open.ComboBox.Edit.type_keys(uploadfilename + ""{ENTER}"") +Thank you in advance!","This turned out to not be possible. I ended up running the code on a VM and setting a registry key to allow automation to be run while the VM was minimized, disconnected, or otherwise not being interacted with by users.",0.0,False,1,5501 +2018-05-08 10:55:31.387,"How to ""compile"" a python script to an ""exe"" file in a way it would be run as background process?","I know how to run a python script as a background process, but is there any way to compile a python script into an exe file using pyinstaller or other tools so it could have no console or window?","If you want to run it in the background without a console and window, you have to run it as a service.",0.0,False,1,5502 +2018-05-08 12:08:02.053,(Django) Running asynchronous server task continuously in the background,"I want to let a class run on my server, which contains a connected bluetooth socket and continuously checks for incoming data, which can then be interpreted. In principle the class structure would look like this: +Interpreter: +-> connect (initializes the class and starts the loop) +-> loop (runs continuously in the background) +-> disconnect (stops the loop) +This class should be initiated at some point and then run continuously in the background; from time to time an http request would perhaps need data from the attributes of the class, but it should run on its own. +I don't know how to accomplish this and don't want to get a description of how to do it, but would like to know where I should start, like what this kind of process is called.","Django on its own doesn't support any background processes - everything is request-response cycle based. +I don't know if what you're trying to do even has a dedicated name. But most certainly - it's possible. But don't tie yourself to Django with this solution. +The way I would accomplish this is I'd run a separate Python process that would be responsible for keeping the connection to the device and, upon request, returning the required data in some way. +The only difficulty you'd have is determining how to communicate with that process from Django. Since, like I said, django is request based, that secondary app could expose some data to your Django app - it could do any of the following: + +Expose a dead-simple HTTP Rest API +Expose a UNIX socket that would just return data immediately after connection +Continuously dump data to some file/database/mmap/queue that Django could read",1.2,True,1,5503 +2018-05-08 18:49:32.583,Replace character with an absolute value,"When searching my db all special characters work aside from the ""+"" - it thinks it's a space.
Looking at the backend, which is Python, there are no issues with it receiving special chars, so I believe it is the frontend, which is JavaScript. +What I need to do is replace ""+"" with ""%2b"". Is there a way for me to create this so it has this value going forward?","You can use decodeURIComponent('%2b'), or encodeURIComponent('+'); +if you decode the response from the server, you get the + sign; +if you want to replace all occurrences, just place the whole string inside the method and it decodes/encodes the whole string.",1.2,True,1,5504 +2018-05-08 21:02:22.097,How to deal with working on one project on different machines (paths)?,"This is my first time coding a ""project"" (something more than solving exercises in single files). A number of my .py files have variables imported from a specific path. I also have a main ""Run"" file where I import things I've written in other files and execute the project as a whole. +Recently I've started working on this project on several different machines (home, work, laptop etc) and have just started learning how to use GitHub. +My question is, how do I deal with the fact that every time I open up my code on a different machine I need to go around changing all the paths to fit the new machine, and then change them back again when I'm home? I started writing a Run file for each location I work at so that my sys.path commands are ok with that machine, but it doesn't solve the problem of my other modules importing variables from specific paths that vary from machine to machine. Is there a way round this or is the problem in how I'm setting up the project itself? +In an ideal world it would all just work without me having to change something before I run, depending on the machine I'm working on, but I don't know if that's possible. +My current thoughts are whether there is some command I'm not aware of that can set variables inside a .py file from my main Run.py file - that way I can just have a run file for each machine. +Any suggestions are gladly taken! Whatever it is, it must be better than commenting back in the correct file path each time I open it on a different machine!","You should always use relative paths, not static ones, which I assume you have got. +Assuming you're in an index file and you need to access the images folder, you probably have something like /users/username/project/images/image.png +Instead you want something like ../images/image.png; this tells your index file to go backwards one folder to, say, the root of the project, then proceed into the images folder etc. +Relative paths mean you create a path from where your file exists, and not an entire path from the ground up.",0.0,False,1,5505 +2018-05-10 01:53:40.577,Document similarity in production environment,"We have n documents. Upon submission of a new document by a user, our goal is to inform them about possible duplication with an existing document (just like Stack Overflow suggests questions that may already have an answer). +In our system, a new document is uploaded every minute, mostly about the same topic (where there is more chance of duplication). +Our current implementation includes a gensim doc2vec model trained on documents (tagged with unique document ids). We infer a vector for each new document and find the most_similar docs (ids) with it. The reason behind choosing the doc2vec model is that we wanted to take advantage of semantics to improve results. As far as we know, it does not support online training, so we might have to schedule a cron or something that periodically updates the model. 
But a scheduled cron will be disadvantageous, as documents come in bursts. A user may upload duplicates while the model is not yet trained on the new data. Also, given the huge amount of data, training time will be higher. +So I would like to know how such cases are handled in big companies. Is there any better alternative, or a better algorithm for such a problem?","You don't have to take the old model down to start training a new model, so despite any training lags, or new-document bursts, you'll always have a live model doing the best it can. +Depending on how much the document space changes over time, you might find retraining to have a negligible benefit. (One good model, built on a large historical record, might remain fine for inferring new vectors indefinitely.) +Note that tuning inference to use more steps (especially for short documents), or a lower starting alpha (more like the training default of 0.025) may give better results. +If word-vectors are available, there is also the ""Word Mover's Distance"" (WMD) calculation of document similarity, which might be even better at identifying close duplicates. Note, though, it can be quite expensive to calculate – you might want to do it only against a subset of likely candidates, or have to add many parallel processors, to do it in bulk. There's another newer distance metric called 'soft cosine similarity' (available in recent gensim) that's somewhere between simple vector-to-vector cosine-similarity and full WMD in its complexity, that may be worth trying. +To the extent the vocabulary hasn't expanded, you can load an old Doc2Vec model, and continue to train() it – and starting from an already working model may help you achieve similar results with fewer passes. But note: it currently doesn't support learning any new words, and the safest practice is to re-train with a mix of all known examples interleaved. (If you only train on incremental new examples, the model may lose a balanced understanding of the older documents that aren't re-presented.) +(If your chief concern is documents that duplicate exact runs-of-words, rather than just similar fuzzy topics, you might look at mixing in other techniques, such as breaking a document into a bag-of-character-ngrams, or 'shingleprinting', as is common in plagiarism-detection applications.)",1.2,True,1,5506 +2018-05-10 02:52:36.463,Apache Airflow: Gunicorn Configuration File Not Being Read?,"I'm trying to run Apache Airflow's webserver from a virtualenv on a Redhat machine, with some configuration options from a Gunicorn config file. Gunicorn and Airflow are both installed in the virtualenv. The command airflow webserver starts Airflow's webserver and the Gunicorn server. The config file has options to make sure Gunicorn uses/accepts TLSv1.2 only, as well as a list of ciphers to use. +The Gunicorn config file is gunicorn.py. This file is referenced through an environment variable GUNICORN_CMD_ARGS=""--config=/path/to/gunicorn.py ..."" in .bashrc. This variable also sets a couple of other variables in addition to --config. However, when I run the airflow webserver command, the options in GUNICORN_CMD_ARGS are never applied. +Seeing as how Gunicorn is not called from the command line, but instead by Airflow, I'm assuming this is why the GUNICORN_CMD_ARGS environment variable is not read, but I'm not sure and I'm new to both technologies... +TL;DR: +Is there another way to set up Gunicorn to automatically reference a config file, without the GUNICORN_CMD_ARGS environment variable? 
+Here's what I'm using: + +gunicorn 19.8.1 +apache-airflow 1.9.0 +python 2.7.5","When Gunicorn is called by Airflow, it uses ~\airflow\www\gunicorn_config.py as its config file.",1.2,True,1,5507 +2018-05-10 10:48:13.883,How to make a Python Visualization as a service | Integrate with website | especially SageMaker,"I am from an R background, where we can use a Plumber-like tool which provides visualizations/graphs as images via endpoints, so we can integrate them into our Java application. +Now I want to integrate my Python/Jupyter visualization graph with my Java application, but I am not sure how to host it and make it an endpoint. Right now I am using AWS SageMaker to host the Jupyter notebook.","Amazon SageMaker is a set of different services for data scientists. You are using the notebook service that is used for developing ML models in an interactive way. The hosting service in SageMaker is creating an endpoint based on a trained model. You can call this endpoint with the invoke-endpoint API call for real time inference. +It seems that you are looking for a different type of hosting that is more suitable for serving HTML media rich pages, and doesn't fit into the hosting model of SageMaker. A combination of EC2 instances, with pre-built AMI or installation scripts, Cognito for authentication, S3 and EBS for object and block storage, and similar building blocks should give you a scalable and cost effective solution.",1.2,True,1,5508 +2018-05-11 04:04:54.463,Python - Enable TLS1.2 on OSX,"I have a virtualenv environment running python 3.5 +Today, when I booted up my MacBook, I found myself unable to install python packages for my Django project. I get the following error: + +Could not fetch URL : There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:646) - skipping + +I gather that TLS 1.0 has been discontinued, but from what I understand, newer versions of Python should be using TLS1.2, correct? Even outside of my environment, running pip3 trips the same error. I've updated to the latest version of Sierra and have updated Xcode as well. Does anyone know how to resolve this?","Here is the fix: +curl https://bootstrap.pypa.io/get-pip.py | python +Execute from within the appropriate virtual environment.",1.2,True,1,5509 +2018-05-11 21:19:50.853,python Ubuntu: too many open files [eventpoll],"Basically, it is a multi-threaded crawler program, which mainly uses requests. After running the program for a few hours, I keep getting the error ""Too many open files"". +By running lsof -p pid, I saw a huge number of entries like the one below: +python 75452 xxx 396u a_inode 0,11 0 8121 [eventpoll] +I cannot figure out what it is and how to trace it back to the problem. +Previously, I tried to have it running in Windows and never saw this error. +Any idea how to continue investigating this issue? Thanks.","I have figured out that it is caused by Gevent. After replacing gevent with multithreading, everything is just OK. +However, I still don't know what's wrong with gevent, which keeps opening new files (eventpoll).",0.0,False,1,5510 +2018-05-11 22:56:26.280,How to prepare a Python Selenium project to be used on a client's machine?,"I've recently started freelance python programming, and was hired to write a script that scraped certain info online (nothing nefarious, just checking how often keywords appear in search results). +I wrote this script with Selenium, and now that it's done, I'm not quite sure how to prepare it to run on the client's machine. 
+Selenium requires a path to your chromedriver file. Am I just going to have to compile the py file as an exe and accept the path to his chromedriver as an argument, then show him how to download chromedriver and how to write the path? +EDIT: Just actually had a thought while typing this out. Would it work if I sent the client a folder including a chromedriver.exe inside of said folder, so the path was always consistent?","Option 1) Deliver a Docker image if the customer does not need to watch the browser while it runs and can set up a Docker environment. The Docker image should include the following items: + +Python +Dependencies for running your script, like selenium +Headless chrome browser and a compatible chrome webdriver binary +Your script; put it on GitHub and +fetch it when the Docker container starts, so that the customer can always get +your latest code + +This approach's benefits: + +You only need to focus on the script, like bug fixes and improvements, after delivery +The customer only needs to execute the same Docker command + +Option 2) Deliver a shell script to do most of the stuff automatically. It should accomplish the following items: + +Install Python (or leave it for the customer to complete) +Install the Selenium library and others needed +Install the latest chrome webdriver binary (which is backward compatible) +Fetch your script from a code repo like GitHub, or simply deliver it as a packaged folder +Run your script. + +Option 3) Deliver your script and a user guide; the customer has to do much of the stuff themselves. You can supply a config file along with your script for the customer to specify the chromedriver binary path after they download it. Your script reads the path from this file, which is better than entering it on the command line every time.",0.0,False,1,5511 +2018-05-12 09:01:11.140,Using Hydrogen with Python 3,"The default version of python installed on my mac is python 2. I also have python 3 installed but can't install python 2. +I'd like to configure Hydrogen on Atom to run my script using python 3 instead. +Does anybody know how to do this?","I used jupyter kernelspec list and I found 2 kernels available, one for python2 and another for python3. +So I pasted the python3 kernel folder into the same directory where the python2 kernel is installed and removed the python2 kernel using 'rm -rf python2'.",0.0,False,1,5512 +2018-05-12 09:48:08.917,Python 3 install location,"I am using Ubuntu 16.04. Where is the python 3 installation directory? +Running ""whereis python3"" in terminal gives me: + +python3: /usr/bin/python3.5m-config /usr/bin/python3 + /usr/bin/python3.5m /usr/bin/python3.5-config /usr/bin/python3.5 + /usr/lib/python3 /usr/lib/python3.5 /etc/python3 /etc/python3.5 + /usr/local/lib/python3.5 /usr/include/python3.5m + /usr/include/python3.5 /usr/share/python3 + /usr/share/man/man1/python3.1.gz + +Also, where is the interpreter, i.e. the python 3 executable? And how would I add this path to Pycharm?","you can try this : +which python3",1.2,True,1,5513 +2018-05-12 10:36:33.093,How to continue to train a model with new classes and data?,"I have trained a model successfully and now I want to continue training it with new data. If given data with the same number of classes, it works fine. But with more classes than initially present, it gives me the error: + +ValueError: Shapes (?, 14) and (?, 21) are not compatible + +How can I dynamically increase the number of classes in my trained model, or how can I make the model accept a lesser number of classes? 
Do I need to save the classes in a pickle file?","The best thing to do is to train your network from scratch with the output layers adjusted to the new output class size. +If retraining is an issue, then keep the trained network as it is and only drop the last layer. Add a new layer with the proper output size, initialized to random weights, and then fine-tune (train) the entire network.",0.0,False,1,5514 +2018-05-13 11:51:03.607,transfer files between local machine and remote server,"I want to access my local machine from a remote Ubuntu server, because I have multiple files on this machine and I want to transfer them periodically (every minute) to the server. How can I do that using Python?","You can easily transfer files between local and remote machines, or between two remote servers. If both servers are Linux-based and you need to transfer multiple files and folders using a single command, follow the steps below: + +The user on one server should have access on the other server to the corresponding directory you want to transfer the files to. + +You might need to create a policy or group, assign the servers +you want to access to that group, and assign the user to that group so the 2 different remote +servers can talk to each other. + +Run the following scp command:- + + + scp [options] username1@source_host:directory1/filename1 + username2@destination_host:directory2/filename2",0.0,False,1,5515 +2018-05-13 19:28:44.063,Need help using Keras' model.predict,"My goal is to make an easy neural network fit by providing 2 vertices of a certain graph and 1 if there's a link or 0 if there's none. +I fit my model; it gets a loss of about 0.40 and accuracy of about 83% during fitting. I then evaluate the model by providing a batch of all positive samples and several batches of negative ones (utilising random.sample). My model gets ~0.35 loss and 1.0 accuracy for positive samples, and ~0.46 loss, 0.68 accuracy for negative ones. +My understanding of neural networks is extremely limited, but to my understanding the above means it theoretically is always right when it outputs 0 when there's no link, but can sometimes output 1 even if there is none. +Now for my actual problem: I try to ""reconstruct"" the original graph with my neural network via model.predict. The problem is I don't understand what the predict output means. At first I assumed values above 0.5 mean 1, else 0. But if that's the case the model doesn't even come close to rebuilding the original. +I get that it won't be perfect, but it simply returns values above 0.5 for random link candidates. +Can someone explain to me how exactly model.predict works and how to properly use it to rebuild my graph?","The model that you trained is not directly optimized w.r.t. the graph reconstruction. Without loss of generality, for an N-node graph, you need to predict N choose 2 links. And it may be reasonable to assume that the true values of most of these links are 0. +When looking into your model accuracy on the 0-class and 1-class, it is clear that your model is prone to predict the 1-class, assuming your training data is balanced. Therefore, your reconstructed graph contains many false alarm links. This is the exact reason why the performance of your reconstruction graph is poor. +If it is possible to retrain the model, I suggest you do it and use more negative samples. +If not, you need to consider applying some post-processing. 
For example, instead of finding a threshold to decide which two nodes have a link, use the raw predicted link probabilities to form a node-to-node linkage matrix, and apply something like a minimum spanning tree to further decide which links are appropriate.",0.0,False,1,5516 +2018-05-14 05:48:54.863,How to use a tensor in different graphs?,"I built two graphs in my code, graph1 and graph2. +There is a tensor, named embedding, in graph1. I tried to use it in graph2 by using get_variable, but I got the error that the tensor must be from the same graph as the other Tensor. I found that this error occurs because they are in different graphs. +So how can I use a tensor from graph1 in graph2?","Expanding on @jdehesa's comment: +embedding could be trained initially, saved from graph1, and restored to graph2 using TensorFlow's saver/restore tools. For this to work you should assign embedding to a name/variable scope in graph1 and reuse the scope in graph2.",0.0,False,1,5517 +2018-05-14 18:25:36.107,Best practice for rolling back a multi-purpose python script,"I'm sorry if the title is a little ambiguous. Let me explain what I mean by that: +I have a python script that does a few things: creates a row in a MySQL table, inserts a JSON document into MongoDB, updates stuff in a local file, and some other things, mostly related to databases. The thing is, I want the whole operation to be atomic. Meaning: if anything during the process I mentioned fails, I want to roll back everything I did. I thought of implementing a rollback function for every 'create' function I have. But I'd love to hear your opinion on how to make some sort of linked list of operations, in which if any of the nodes fails, I want to discard all the changes done in the process. +How would you design such a thing? Is there a library in Python for such things?","You should implement every action to be reversible and the reverse action to be executable even if the original action has failed. Then if you have any failures, you execute every reversal.",0.0,False,1,5518 +2018-05-15 09:13:53.017,Why and how would you not use a python GUI framework and make one yourself like many applications including Blender do?,"I have looked at a few python GUI frameworks like PyQt, wxPython and Kivy, but have noticed there aren't many popular (used widely) python applications, from what I can find, that use them. +Blender, which is pretty popular, doesn't seem to use them. How would one go about doing what they did/what did they do and what are the potential benefits over using the previously mentioned frameworks?","I would say that python isn't a popular choice when it comes to making a GUI application, which is why you don't find many examples of using the GUI frameworks. tkinter, which is part of the Python distribution, is another option for GUIs. +Blender isn't really a good example as it isn't a GUI framework; it is a 3D application that integrates python as a means for users to manipulate its data. It was started over 25 years ago when the choice of cross-platform frameworks was limited, so making their own was an easier choice to make. Python support was added to blender about 13 years ago. One of the factors in blender's choice was to make each platform look identical. That goes against most frameworks, which aim to implement a native look and feel for each target platform. 
+So you make your own framework when the work of starting your own framework seems easier than adjusting an existing framework to your needs, or when the existing frameworks all fail to meet your needs. One of those needs may be licensing: Qt and wxWidgets are both available under the (L)GPL, while Qt also sells non-GPL licensing. +The benefit of using an existing framework is the amount of work that is already done; you will find there is more in a GUI framework than you first think, especially when you start supporting multiple operating systems.",1.2,True,1,5519 +2018-05-15 19:46:31.853,Installing Kivy to an alternate location,"I have Python version 3.5, which is located here: C:\Program Files(x86)\Microsoft Visual Studio\Shared\Python35_64. If I install kivy and its components and add-ons with this command: python -m pip install kivy, then it does not install in the place that I need. I want to install kivy in this location: C:\Program Files(x86)\ Microsoft Visual Studio\Shared\Python35_64\Lib\site-packages. How can I do this? +I did not understand how to do this from the explanations on the official website.","So it turned out that I again solved my problem myself. I have installed Python 3.5 and Python 3.6 on my PC; Kivy was installed in Python 3.6 by default, while my development environment was using Python 3.5. I replaced it with 3.6 and it all worked.",0.3869120172231254,False,1,5520 +2018-05-16 07:28:11.157,Portable application: s3 and Google cloud storage,"I want to write an application which is portable. +By ""portable"" I mean that it can be used to access these storage services: + +amazon s3 +google cloud storage +Eucalyptus Storage + +The software should be developed using Python. +I am unsure how to start, since I could not find a library which supports all three storage services.",You can use boto3 for accessing any services of Amazon.,0.3869120172231254,False,1,5521 +2018-05-16 14:25:25.257,How to access created nodes in a mininet topology?,"I am new to Mininet. I created a custom topology with 2 linear switches and 4 nodes. I need to write a python module accessing each node in that topology to do something, but I don't know how. +Any idea please?","try the following: +s1.cmd('ifconfig s1 192.168.1.0') +h1.cmd('ifconfig h1 192.168.2.0')",1.2,True,1,5522 +2018-05-16 16:07:12.060,Real width of detected face,"I've been researching like forever, but couldn't find an answer. I'm using OpenCV to detect faces; now I want to calculate the distance to the face. When I detect a face, I get a matofrect (which I can visualize with a rectangle). Pretty clear so far. But now: how do I get the width of the rectangle in the real world? There have to be some average values that represent the width of the human face. If I have that value (in inches, mm or whatever), I can calculate the distance using real width, pixel width and focal length. Please, can anyone help me? +Note: I'm comparing the ""simple"" rectangle solution against a Facemark based distance measuring solution, so no landmark based answers. I just need the damn average face / matofrectwidth :D +Thank you so much!","OpenCV's facial detection rectangle is slightly larger than the face, therefore an average face width may not be helpful. Instead, just take a picture of a face at different distances from the camera and record the distance from the camera along with the pixel width of the face for several distances. A rough sketch of how those measurements could then be used in code (the numbers below are made up; it exploits that distance is roughly proportional to 1/pixel-width): 
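import numpy as np

# calibration samples: pixel width of the detected rectangle vs. measured distance (cm)
pixel_w = np.array([220.0, 150.0, 110.0, 75.0])   # made-up example values
dist_cm = np.array([30.0, 45.0, 60.0, 90.0])
k = np.mean(dist_cm * pixel_w)   # distance * pixel_width is roughly constant
print(k / 130.0)                 # estimated distance for a face detected 130 px wide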
After plotting the two variables on a graph, use a trendline to come up with a predictive model.",0.6730655149877884,False,1,5523 +2018-05-16 17:31:21.103,Split a PDF file into two columns along a certain measurement in Python?,"I have a ton of PDF files that are laid out in two columns. When I use PyPDF2 to extract the text, it reads the entire first column (which are like headers) and the entire second column. This makes splitting on the headers impossible. It's laid out in two columns: +____ __________ +|Col1 Col2 | +|Col1 Col2 | +|Col1 Col2 | +|Col1 Col2 | +____ __________ +I think I need to split the PDF in half along the edge of the column, then read each column left to right. The first column is 2.26 inches wide on an 8x11 PDF. I can also get the coordinates using PyPDF2. +Does anyone have any experience doing this or know how I would do it? +Edit: When I extractText using PyPDF2, the output has no spaces: Col1Col1Col1Col1Col2Col2Col2Col2",Using pdfminer.six I successfully read from left to right with spaces in between.,0.3869120172231254,False,1,5524 +2018-05-17 16:34:01.880,how to make a copy of an sqlalchemy object (data only),"I get a db record as an sqlalchemy object and I need to consult the original values during some calculation process, so I need the original record till the end. However, the current code modifies the object as it goes and I don't want to refactor it too much at the moment. +How can I make a copy of the original data? The deepcopy seems to create a problem, as expected. I definitely prefer not to copy all the fields manually, as someone will forget to update this code when modifying the db object.","You have many options here to copy your object. Two of them which I can think of are: + +Using __dict__, which will give the dictionary of the original sqlalchemy object; you can iterate through all the attributes using the .keys() function, which will give all the attributes. +You can also use the inspect module and getmembers() to get all the attributes defined, and set the required attributes using the setattr() method.",0.0,False,1,5525 +2018-05-18 06:14:11.447,basic serial port contention,"I am using a pi3 which talks to an arduino via serial0 (ttyAMA0). +It all works fine. I can talk to it with minicom, bidirectionally. However, a python based server also wants this port. I notice that when minicom is running, the python code can write to serial0 but not read from it. At least minicom reports that the python server has sent a message. +Can someone let me know how this serial port handles contention, if at all? I notice running two minicom sessions to the same serial port wrecks both sessions. Is it possible to have multiple writers and readers if they are coordinated not to act at the same time? Or can there be multiple readers (several terms running cat /dev/serial0)? +I have googled around for answers but most hits are about using multiple serial ports or getting a serial port to work at all. +Cheers","Since two minicoms can attempt to use the port and there are collisions, minicom must not set an advisory lock on local writes to the serial port. I guess that the first app to read a received remote serial message clears it, since serial doesn't buffer. When a local app writes to serial, minicom displays this and it gets sent. I'm going to make this assumed summary: + +when a local process puts a message on the serial port everyone can +see it and it gets sent to remote. +when a remote message arrives on +serial, the first local process to get it, gets it. The others +can't see it. 
+for some reason, minicom has privilege +over arriving +messages. This is why two minicoms break the message.",0.3869120172231254,False,1,5526 +2018-05-18 14:53:02.983,Effective passing of large data to python 3 functions,"I am coming from a C++ programming background and am wondering if there is a pass-by-reference equivalent in python. The reason I am asking is that I am passing very large arrays into different functions and want to know how to do it in a way that does not waste time or memory by having to copy the array to a new temporary variable each time I pass it. It would also be nice if, like in C++, changes I make to the array would persist outside of the function. +Thanks in advance, +Jared","Python handles function arguments in the same manner as most common languages: Java, JavaScript, C (pointers), C++ (pointers, references). +All objects are allocated on the heap. Variables are always a reference/pointer to the object. The value, which is the pointer, is copied. The object remains on the heap and is not copied.",0.999329299739067,False,1,5527 +2018-05-19 10:36:50.560,How to find symbolic derivative using python without sympy?,"I need to make a program which will differentiate a function, but I have no idea how to do this. I've only made a part which transforms the regular expression (x ^ 2 + 2, for example) into reverse polish notation. Can anybody help me with creating a program which will find symbolic derivatives of expressions with + * / - ^","Hint: Use a recursive routine. If an operation is unary plus or minus, leave the plus or minus sign alone and continue with the operand. (That means, recursively call the derivative routine on the operand.) If an operation is addition or subtraction, leave the plus or minus sign alone and recursively find the derivative of each operand. If the operation is multiplication, use the product rule. If the operation is division, use the quotient rule. If the operation is exponentiation, use the generalized power rule. (Do you know that rule, for u ^ v? It is not given in most first-year calculus books but is easy to find using logarithmic differentiation.) (Now that you have clarified in a comment that there will be no variable in the exponent, you can use the regular power rule (u^n)' = n * u^(n-1) * u' where n is a constant.) And at the base of the recursion, the derivative of x is 1 and the derivative of a constant is zero. +The result of such an algorithm would be very un-simplified but it would meet your stated requirements. Since this algorithm looks at an operation then looks at the operands, having the expression in Polish notation may be simpler than reverse Polish or ""regular expression."" But you could still do it for the expression in those forms. +If you need more detail, show us more of your work.",1.2,True,1,5528 +2018-05-19 21:46:47.500,how to get the distance of a sequence of nodes in pgr_dijkstra pgrouting?,"I have an array of integers (nodes or destinations), i.e. array[2,3,4,5,6,8], that need to be visited in the given sequence. +What I want is to get the shortest distance using pgr_dijkstra. But pgr_dijkstra finds the shortest path for two points, therefore I need to find the distance of each pair using pgr_dijkstra and add all the distances to get the total distance. +The pairs will be like +2,3 +3,4 +4,5 +5,6 +6,8. +Is there any way to define a function that takes this array and finds the shortest path using pgr_dijkstra? 
+The query is: +for the 1st pair (2,3): +SELECT * FROM pgr_dijkstra('SELECT gid as id, source, target, rcost_len AS cost FROM finalroads',2,3, false); +for the 2nd pair (3,4): +SELECT * FROM pgr_dijkstra('SELECT gid as id, source, target, rcost_len AS cost FROM finalroads',3,4, false); +for the 3rd pair (4,5): +SELECT * FROM pgr_dijkstra('SELECT gid as id, source, target, rcost_len AS cost FROM finalroads',4,5, false); +NOTE: The array size is not fixed; it can be different. +Is there any way to automate this in Postgres SQL, maybe using a loop etc.? +Please let me know how to do it. +Thank you.","If you want all-pairs distances then use +select * from pgr_apspJohnson('SELECT gid as id, source, target, rcost_len AS cost FROM finalroads')",0.0,False,1,5529 +2018-05-21 10:54:03.443,Using aws lambda to render an html page in aws lex chatbot,"I have built a chatbot using AWS Lex and Lambda. I have a use case wherein a user enters a question (for example: What is the sale of an item in a particular region?). I want that once this question is asked, an HTML form/pop-up appears that asks the user to select the values of region and item from dropdown menus, fills the slots of the question with the values selected by the user, and then returns a response. Can someone guide me on how this can be achieved? Thanks.","Lex has something called response cards, where you can add all the possible values. These are called prompts. The user can simply select his/her choice and the slot gets filled. Lex response cards work in Facebook and Slack. +In the case of a custom channel, you will have to custom develop the UI components.",0.0,False,1,5530 +2018-05-22 07:22:36.627,How to install image library in python 3.6.4 in windows 7?,I am new to Python and I am using Python 3.6.4. I also use the PyCharm editor to write all my code. Please let me know how I can install the Image library in Windows 7 and whether it would work in PyCharm too.,"From PyCharm, + +go to Settings -> Project Interpreter +Click on the + button in the top right corner and you will get a pop-up window of +available packages. Then search for the pillow and PIL image python packages. +Then click on Install package to install those packages.",1.2,True,1,5531 +2018-05-23 00:28:20.643,"I have downloaded eclipse and pydev, but I am unsure how to install django","I am attempting to learn how to create a website using python. I have been going off the advice of various websites including stackoverflow. Currently I can run code in eclipse using pydev, but I need to install django. I have no idea how to do this and I don't know who to ask or where to begin. Please help.","I would recommend the following: + +Install virtualenv + +$ pip install virtualenv + +Create a new virtual environment + +$ virtualenv django-venv + +Activate the virtual environment & use + +$ source django-venv/bin/activate + +And install django as expected + +(django-venv)$ pip install django==1.11.13 +(Replace with the django version as needed)",0.0,False,1,5532 +2018-05-23 14:46:15.693,Proper way of streaming JSON with Django,"I have a webservice which gets user requests and produces (multiple) solution(s) to this request. +I want to return a solution as soon as possible, and send the remaining solutions when they are ready. +In order to do this, I thought about using Django's Http stream response. Unfortunately, I am not sure if this is the most adequate way of doing so, because of the problem I will describe below. +I have a Django view, which receives a query and answers with a stream response. 
This stream returns the data returned by a generator, which is always a Python dictionary. +The problem is that upon the second return action of the stream, the JSON content breaks. +If the Python dictionary, which serves as a response, is something like {key: val}, after the second yield the returned response is {key: val} {key: val}, which is not valid JSON. +Any suggestions on how to return multiple JSON objects at different moments in time?","Try encoding with something like, +for example: + +import json +json.dumps([{'key': 'val'}, {'key': 'val'}]) # wrapping the dicts in a list yields valid JSON; check it",0.0,False,1,5533 +2018-05-23 15:52:41.077,pycharm won't let me run from desktop,"I have been using pycharm for a while now, and I have to say that I am a real fan of its features. I have one issue though: when I try to run a .py file from either the desktop or command prompt, I am instead prompted to use the run feature in pycharm. I consider this an issue because if I try to create a program for someone who doesn't know how to code, they would probably be scared off by opening pycharm. I don't, however, want to uninstall pycharm because it is so useful when writing code. Does anyone have any ideas for me? By the way, I am using a Dell Inspiron 15 7000 Gaming laptop with the current version of Windows 10 installed.","You can try running the direct path of the file; I'm not sure what you have tried. +If you wanted to run it as I just described you would do: +py C:\~AppData\Local\Programs\Python\Python36-32\hello.py +If you move the file into your current working directory when programming, you should just be able to run py hello.py.",1.2,True,1,5534 +2018-05-23 20:49:52.333,Calling database handler class in a python thread,"I'm programming a bit of server code and the MQTT side of it runs in its own thread using the threading module, which works great with no issues, but now I'm wondering how to proceed. +I have two MariaDB databases; one of them is local and the other is remote (there is a good and niche reason for this), and I'm writing a class which handles the databases. This class will start new threads of classes that submit the data to their respective databases. If conditions are true, then it tells the data to start a new thread to push data to one database; if they are false, the data will go to the other database. The MQTT thread has an instance of the ""Database handler"" class and passes data to it through different calling functions within the class. +Will this work to allow a thread to concentrate on MQTT tasks while another does the database work? There are other threads as well; I've just never combined databases and threads before, so I'd like an opinion or any information that would help me out from more seasoned programmers.","Writing code that is ""thread safe"" can be tricky. I doubt if the Python connector to MySQL is thread safe; there is very little need for it. +MySQL is quite happy to have multiple connections to it from clients. But they must be separate connections, not the same connection running in separate threads. +Very few projects need multi-threaded access to the database. Do you have a particular need? If so let's hear about it, and discuss the 'right' way to do it. +For now, each of your threads that needs to talk to the database should create its own connection. Generally, such a connection can be created soon after starting the thread (or process) and kept open until close to the end of the thread. A minimal sketch of that pattern (PyMySQL here stands in for any DB-API driver; the connection details are placeholders): 
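import threading
import pymysql

def worker():
    # each thread opens and closes its own connection; nothing is shared
    conn = pymysql.connect(host='localhost', user='app', password='secret', database='appdb')
    try:
        with conn.cursor() as cur:
            cur.execute('SELECT 1')
    finally:
        conn.close()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()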
That is, normally you should have only one connection per thread.",0.0,False,1,5535 +2018-05-25 18:54:02.363,python logging multiple calls after each instantiation,"I have multiple modules and they each have their own log. They all write to the log correctly; however, when a class is instantiated more than once, the log will write the same line multiple times, depending on the number of times it was created. +If I create the object twice it will log every message twice; create the object three times and it will log every message three times, etc... +I was wondering how I could fix this without having to create each object only once. +Any help would be appreciated.",I was adding the handler multiple times after each instantiation of a log. I checked if the handler had already been added at the instantiation and that fixed the multiple writes.,0.0,False,1,5536 +2018-05-28 15:00:34.117,using c extension library with gevent,"I use celery for doing snmp requests with the easysnmp library, which has a C interface. +The problem is that lots of time is being wasted on I/O. I know that I should use eventlet or gevent in this kind of situation, but I don't know how to handle patching a third-party library when it uses C extensions.","Eventlet and gevent can't monkey-patch C code. +You can offload blocking calls to OS threads with eventlet.tpool.execute(library.io_func)",0.3869120172231254,False,1,5537 +2018-05-29 02:13:44.043,How much data can Python Ray handle?,"Python Ray looks interesting for machine learning applications. However, I wonder how much data Python Ray can handle. Is it limited by memory or can it actually handle data that exceeds memory?","It currently works best when the data fits in memory (if you're on a cluster, then that means the aggregate memory of the cluster). If the data exceeds the available memory, then Ray will evict the least recently used objects. If those objects are needed later on, they will be reconstructed by rerunning the tasks that created them.",1.2,True,1,5538 +2018-05-29 18:31:38.537,Discord bot with user specific counter,"I'm trying to make a Discord bot in Python where a user can request a unit every few minutes, and later ask the bot how many units they have. Would creating a google spreadsheet for the bot to write each user's number of units to be a good idea, or is there a better way to do this?","Using a database is the best option. If you're working with a small number of users and requests you could use something even simpler like a text file for ease of use, but I'd recommend a database. +Easy to use database options include sqlite (use the sqlite3 python library) and MongoDB (I use the mongoengine python library for my Slack bot).",0.0,False,1,5539 +2018-05-29 21:28:22.547,How to execute a python command within virtualenv with Visual Studio Code,"I have created a virtual environment named virualenv. I have a scrapy project and I am using some programs installed in my virtualenv. When I run it from the terminal in VSC I can see errors, even when I set up my virtual environment via Ctrl+Shift+P -> Python: Select Interpreter -> Python 3.5.2(virtualenv). The interpreter works in some way; I can import libs without errors etc, but I am not able to start my scrapy project from the terminal. I have to activate my virtual environment first via /{path_to_virtualenv}/bin/activate. Is there a way to automatically activate it? 
Now I am using PyCharm, where it is possible, but VSC looks much better to me.","One way I know how: +Start cmd +Start your virtual env +(helloworld) \path\etc> code . +It will start Visual Studio Code in this environment. Hope it helps.",0.3869120172231254,False,1,5540 +2018-05-30 15:56:33.700,"TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed","I'm new (obviously) to python, but not so new to TensorFlow. +I've been trying to debug my program using breakpoints, but every time I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show and I get this warning in the console: + +WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed. + +I'm a bit confused about how to fix this issue. Do I have to wait for an update of TensorFlow before it works?","Probably yes, you may have to wait. In debug mode a deprecated function is being called. +You can print out the shape explicitly by calling var.shape() in the code as a workaround. I know, not very convenient.",0.0,False,2,5541 +2018-05-30 15:56:33.700,"TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed","I'm new (obviously) to python, but not so new to TensorFlow. +I've been trying to debug my program using breakpoints, but every time I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show and I get this warning in the console: + +WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed. + +I'm a bit confused about how to fix this issue. Do I have to wait for an update of TensorFlow before it works?","You can simply stop at the break point, switch to the DEBUG CONSOLE panel, and type var.shape. It's not that convenient, but at least you don't need to write any extra debug code in your code.",0.0,False,2,5541 +2018-05-30 16:38:21.447,Django storages S3 - Store existing file,"I have django 1.11 with the latest django-storages, set up with an S3 backend. +I am trying to programmatically instantiate an ImageFile, using the AWS image link as a starting point. I cannot figure out how to do this looking at the source / documentation. +I assume I need to create a file, and give it the path derived from the url without the domain, but I can't find exactly how. +The final aim of this is to programmatically create wagtail Image objects that point to S3 images (so pass the new ImageFile to the ImageField of the image). I own the S3 bucket the images are stored in. +Uploading images works correctly, so the system is set up correctly. +Update +To clarify, I need to do the reverse of the normal process. Normally a physical image is given to the system, which then creates an ImageFile, the file is then uploaded to S3, and a URL is assigned to File.url. I have the File.url and need an ImageFile object.","It turns out, in several models that expect files, when using DjangoStorages, all I had to do is: instead of passing a File on the file field, pass the AWS S3 object key (so not a URL, just the object key). A minimal sketch of that idea (the model and the key below are made up for illustration): 
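# hypothetical model: class Photo(models.Model): image = models.ImageField()
photo = Photo()
photo.image.name = 'original_images/existing-photo.jpg'  # the S3 object key, not a URL
photo.save()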
+When model.save() is called, a boto call is made to S3 to verify that an object with the provided key is there, and the item is saved.",1.2,True,1,5542 +2018-05-31 22:09:08.750,import sklearn in python,"I installed miniconda for Windows 10 successfully and then I could install numpy, scipy and sklearn successfully, but when I run import sklearn in python IDLE I receive No module named 'sklearn' in the anaconda prompt. It recognized my python version, which was 3.6.5, correctly. I don't know what's wrong; can anyone tell me how I can import modules in IDLE?","Why not download the full Anaconda? This will install everything you need to start, which includes the Spyder IDE, RStudio, Jupyter and all the needed modules. +I have been using Anaconda without any error and I will recommend you try it out.",1.2,True,1,5543 +2018-06-01 01:04:30.917,Pycharm Can't install TensorFlow,"I cannot install tensorflow in pycharm on windows 10, though I have tried many different things: + +went to settings > project interpreter and tried clicking the green plus button to install it, gave me the error: non-zero exit code (1) and told me to try installing via pip in the command line, which was successful, but I can't figure out how to make Pycharm use it when it's installed there +tried changing to a Conda environment, which still would not allow me to run tensorflow since when I input into the python command line: pip.main(['install', 'tensorflow']) it gave me another error and told me to update pip +updated pip then tried step 2 again, but now that I have pip 10.0.1, I get the error 'pip has no attribute main'. I tried reverting pip to 9.0.3 in the command line, but this won't change the version used in pycharm, which makes no sense to me. I reinstalled anaconda, as well as pip, and deleted and made a new project and yet it still says that it is using pip 10.0.1 which makes no sense to me + +So in summary, I still can't install tensorflow, and I now have the wrong version of pip being used in Pycharm. I realize that there are many other posts about this issue but I'm pretty sure I've been to all of them and either didn't get an applicable answer or an answer that I understand.","What worked for me is this: + +I installed TensorFlow on the command prompt as an administrator using this command: pip install tensorflow +then I jumped back to my pycharm and clicked the red light bulb pop-up icon; it will have a few options when you click it, just select the one that says install tensorflow. This does not install it from scratch but basically rebuilds and updates your pycharm workspace to note the newly installed tensorflow",0.0,False,1,5544 +2018-06-02 08:27:36.887,How should I move my completed Django Project in a Virtual Environment?,"I started learning django a few days back and started a project; by luck the project turned out well and I'm thinking of deploying it. However, I didn't initiate it in a virtual environment. I have made a virtual environment now and want to move the project to it. I want to know how I can do that. I have created requirements.txt, however it has included all the irrelevant library names. How can I get rid of them and keep only those that are required for the project?","Django is completely unrelated to the environment you run it on. +The environment represents which python version you are using (2, 3...) and the libraries installed. +To answer your question, the only thing you need to do is run your manage.py commands from the python executable in the new virtual environment. For example (the paths are illustrative): 
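$ /path/to/new-venv/bin/python manage.py migrate
$ /path/to/new-venv/bin/python manage.py runserver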
Of course, install all of the necessary libraries in the new environment if you haven't already done so. +It might be a problem if you created a python3 environment while the original one was in python2, but at that point it's a code portability issue.",1.2,True,1,5545 +2018-06-03 08:14:39.850,Train CNN model with multiple folders and sub-folders,"I am developing a convolutional neural network (CNN) model to predict whether a patient is in category 1, 2, 3 or 4. I use Keras on top of TensorFlow. +I have data for 64 breast cancer patients, classified into four categories (1=no disease, 2= …., 3=….., 4=progressive disease). In each patient's data, I have 3 sets of MRI scan images taken on different dates, and inside each MRI folder, I have 7 to 8 sub-folders containing MRI images in different planes (such as the coronal plane/sagittal plane etc). +I learned how to deal with a basic “Cat-Dog-CNN-Classifier”; it was easy, as I put all the cat & dog images into a single folder to train the network. But how do I tackle the problem in my breast cancer patient data? It has multiple folders and sub-folders. +Please suggest.",Use os.walk to access all the files in sub-directories recursively and append them to the dataset.,-0.1352210990936997,False,1,5546 +2018-06-03 14:02:27.027,How can I change the default version of Python Used by Atom?,"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the ""script"" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.","I am using script 3.18.1 in Atom 1.32.2 +Navigate to Atom (at top left) > Open Preferences > Open Config folder. +Now, expand the tree as script > lib > grammars +Open python.coffee and change 'python' to 'python3' in both places in the command argument",0.9866142981514304,False,4,5547 +2018-06-03 14:02:27.027,How can I change the default version of Python Used by Atom?,"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the ""script"" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.","I came up with an inelegant solution that may not be universal. Using platformio-ide-terminal, I simply had to call python3.9 instead of python or python3. Not sure if that is exactly what you're looking for.",0.0,False,4,5547 +2018-06-03 14:02:27.027,How can I change the default version of Python Used by Atom?,"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the ""script"" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.","I would look in the Atom installed plugins in settings.. you can get here by pressing command + shift + p, then searching for settings. 
+The only reason I suggest this is because plugins are where I installed Swift language support, through a plugin that manages that in Atom. +Another term for plugins in Atom would be ""community packages"". +Hope this helps.",0.0,False,4,5547 +2018-06-03 14:02:27.027,How can I change the default version of Python Used by Atom?,"I have started using Atom recently and for the past few days, I have been searching on how to change the default version used in Atom (The default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the ""script"" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.","Yes, there is. After starting Atom, open the script you wish to run. Then open the command palette and select 'Python: Select interpreter'. A list appears with the available python versions listed. Select the one you want and hit return. Now you can run the script by placing the cursor in the edit window and right-clicking the mouse. A long menu appears and you should choose 'Run python in the terminal window'. This is towards the bottom of the long menu list. The script will run using the interpreter you selected.",0.0,False,4,5547 +2018-06-04 05:13:38.857,Line by line data from Google cloud vision API OCR,"I have scanned PDFs (image based) of bank statements. +The Google Vision API is able to detect the text pretty accurately, but it returns blocks of text and I need line-by-line text (bank transactions). +Any idea how to go about it?","In the Google Vision API there is a method fullTextAnnotation which returns a full text string with \n specifying the end of each line. You can try that.",0.0,False,1,5548 +2018-06-04 20:20:23.930,"XgBoost accuracy results differ on each run, with the same parameters. How can I make them constant?","The 'merror' and 'logloss' results from XGB multiclass classification differ by about 0.01 or 0.02 on each run, with the same parameters. Is this normal? +I want 'merror' and 'logloss' to be constant when I run XGB with the same parameters so I can evaluate the model precisely (e.g. when I add a new feature). +Now, if I add a new feature I can't really tell whether it had a positive impact on my model's accuracy or not, because my 'merror' and 'logloss' differ on each run regardless of whether I made any changes to the model or the data fed into it since the last run. +Should I try to fix this and if I should, how can I do it?","Managed to solve this. First I set the 'seed' parameter of XgBoost to a fixed value, as Hadus suggested. Then I found out that I used sklearn's train_test_split function earlier in the notebook, without setting the random_state parameter to a fixed value. So I set the random_state parameter to 22 (you can use whichever integer you want) and now I'm getting constant results.",0.0,False,1,5549 +2018-06-04 23:38:16.783,How to keep a python program running constantly,"I made a program that grabs the top three new posts on the r/wallpaper subreddit. It downloads the pictures every 24 hours and adds them to my wallpapers folder. What I'm running into is how to have the program running in the background. The program resumes every time I turn the computer on, but it pauses whenever I close the computer. Is there a way to close the computer without pausing the program? I'm on a Mac.","Programs can't run when the computer is powered off. 
However, you can run a computer headlessly (without mouse, keyboard, and monitor) to save resources. Just ensure your program runs over the command line interface.",0.0,False,1,5550 +2018-06-05 04:53:45.747,Pandas - Read/Write to the same csv quickly.. getting permissions error,"I have a script that I am trying to execute every 2 seconds. To begin, it reads a .csv with pd.read_csv. Then it executes modifications on the df and finally overwrites the original .csv with to_csv. +I'm running into a PermissionError: [Errno 13] Permission denied: and from my searches I believe it's due to trying to open/write too often to the same file, though I could be wrong. + +Any suggestions on how to avoid this? +Not sure if relevant, but the file is stored in a OneDrive folder. +It does save on occasion, seemingly randomly. +Increasing the timeout so the script executes slower helps, but I want it running fast! + +Thanks","Close the file that you are trying to read and write and then try running your script. +Hope it helps",-0.2012947653214861,False,1,5551 +2018-06-06 11:33:58.087,Optimizing RAM usage when training a learning model,"I have been working on creating and training a Deep Learning model for the first time. I did not have any knowledge about the subject prior to the project and therefore my knowledge is limited even now. +I used to run the model on my own laptop, but after implementing a well-working OHE and SMOTE I simply couldn't run it on my own device anymore due to a MemoryError (8GB of RAM). Therefore I am currently running the model on a 30GB RAM RDP, which allows me to do so much more, I thought. +My code seems to have some horrible inefficiencies, and I wonder if they can be solved. One example is that by using pandas.concat my model's RAM usage skyrockets from 3GB to 11GB, which seems very extreme; afterwards I drop a few columns, making the RAM spike to 19GB, though it actually returns back to 11GB after the computation is completed (unlike the concat). I also forced myself to stop using SMOTE for now, just because the RAM usage would go up way too much. +At the end of the code, where the training happens, the model breathes its final breath while trying to fit. What can I do to optimize this? +I have thought about splitting the code into multiple parts (for example preprocessing and training), but to do so I would need to store massive datasets in a pickle, which can only reach 4GB (correct me if I'm wrong). I have also given thought to using pre-trained models, but I truly did not understand how this process works and how to use one in Python. +P.S.: I would also like my SMOTE back if possible +Thank you all in advance!","Slightly orthogonal to your actual question: if your high RAM usage is caused by having the entire dataset in memory for the training, you could eliminate such a memory footprint by reading and storing only one batch at a time: read a batch, train on this batch, read the next batch and so on.",0.0,False,1,5552 +2018-06-07 17:31:59.093,ARIMA Forecasting,"I have time series data which looks something like this: +Loan_id Loan_amount Loan_drawn_date + id_001 2000000 2015-7-15 + id_003 100 2014-7-8 + id_009 78650 2012-12-23 + id_990 100 2018-11-12 +I am trying to build an ARIMA forecasting model on this data, which has around 550 observations. These are the steps I have followed: + +Converted the time series data into daily data and replaced NA values with 0. 
the data look something like this +Loan_id Loan_amount Loan_drawn_date +id_001 2000000 2015-7-15 +id_001 0 2015-7-16 +id_001 0 2015-7-17 +id_001 0 2015-7-18 +id_001 0 2015-7-19 +id_001 0 2015-7-20 +.... +id_003 100 2014-7-8 +id_003 0 2014-7-9 +id_003 0 2014-7-10 +id_003 0 2014-7-11 +id_003 0 2014-7-12 +id_003 0 2014-7-13 +.... +id_009 78650 2012-12-23 +id_009 0 2012-12-24 +id_009 0 2012-12-25 +id_009 0 2012-12-26 +id_009 0 2012-12-27 +id_009 0 2012-12-28 +... +id_990 100 2018-11-12 +id_990 0 2018-11-13 +id_990 0 2018-11-14 +id_990 0 2018-11-15 +id_990 0 2018-11-16 +id_990 0 2018-11-17 +id_990 0 2018-11-18 +id_990 0 2018-11-19 +Can Anyone please suggest me how do i proceed ahead with these 0 values now? +Seeing the variance in the loan amount numbers i would take log of the of the loan amount. i am trying to build the ARIMA model for the first time and I have read about all the methods of imputation but there is nothing i can find. Can anyone please tell me how do i proceed ahead in this data","I don't know exactly about your specific domain problem, but these things apply usually in general: + +If the NA values represent 0 values for your domain specific problem, then replace them with 0 and then fit the ARIMA model (this would for example be the case if you are looking at daily sales and on some days you have 0 sales) +If the NA values represent unknown values for your domain specific problem then do not replace them and fit your ARIMA model. (this would be the case, if on a specific day the employee forgot to write down the amount of sales and it could be any number). + +I probably would not use imputation at all. There are methods to fit an ARIMA model on time series that have missing values. Usually these algorithms should probably also implemented somewhere in python. (but I don't know since I am mostly using R)",1.2,True,1,5553 +2018-06-08 11:15:45.900,Randomizing lists with variables in Python 3,"I'm looking for a way to randomize lists in python (which I already know how to do) but to then make sure that two things aren't next to each other. For example, if I were to be seating people and numbering the listing going down by 0, 1, 2, 3, 4, 5 based on tables but 2 people couldn't sit next to each other how would I make the list organized in a way to prohibit the 2 people from sitting next to each other.","As you say that you know how to shuffle a list, the only requirement is that two elements are not next to each other. +A simple way is to: + +shuffle the full list +if the two elements are close, choose a random possible position for the second one +exchange the two elements + +Maximum cost: one shuffle, one random choice, one exchange",1.2,True,1,5554 +2018-06-09 00:49:48.297,how to check the SD card size before mounted and do not require root,"I want to check the SD card size in bash or python. Right now I know df can check it when the SD card is mounted or fdisk -l if root is available. +But I want to know how to check the SD card size without requiring mounting the card to the file system or requiring the root permission? For example, if the SD card is not mounted and I issue df -h /dev/sdc, this will return a wrong size. In python, os.statvfs this function returns the same content as well. I search on stack overflow but did not find a solution yet.","Well, I found the lsblk -l can do the job. 
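+(Stepping back to the seating answer above: a minimal sketch of its shuffle-then-separate recipe, assuming at least four seats; the function and the two names are illustrative only.)
+import random
+
+def seat(people, a, b):
+    # Shuffle the full list first.
+    random.shuffle(people)
+    i, j = people.index(a), people.index(b)
+    # If the two ended up next to each other, swap one into a random non-adjacent seat.
+    if abs(i - j) == 1:
+        k = random.choice([n for n in range(len(people)) if abs(n - i) > 1])
+        people[j], people[k] = people[k], people[j]
+    return people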
It tells the total size of the partitions.",0.0,False,1,5555 +2018-06-09 15:59:07.447,How to write a python program that 'scrapes' the results from a website for all possible combinations chosen from the given drop down menus?,"There is a website that claims to predict the approximate salary of an individual on the basis of the following criteria presented in the form of individual drop-down + +Age : 5 options +Education : 3 Options +Sex : 3 Options +Work Experience : 4 Options +Nationality: 12 Options + +On clicking the Submit button, the website gives a bunch of text as output on a new page with an estimate of the salary in numerals. +So, there are technically 5*3*3*4*12 = 2160 data points. I want to get that and arrange it in an excel sheet. Then I would run a regression algorithm to guess the function this website has used. This is what I am looking forward to achieve through this exercise. This is entirely for learning purposes since I'm keen on learning these tools. +But I don't know how to go about it? Any relevant tutorial, documentation, guide would help! I am programming in python and I'd love to use it to achieve this task! +Thanks!","If you are uncomfortable asking them for database as roganjosh suggested :) use Selenium. Write in Python a script that controls Web Driver and repeatedly sends requests to all possible combinations. The script is pretty simple, just a nested loop for each type of parameter/drop down. +If you are sure that value of each type do not depend on each other, check what request is sent to the server. If it is simple URL encoded, like age=...&sex=...&..., then Selenium is not needed. Just generate such URLa for all possible combinations and call the server.",1.2,True,1,5556 +2018-06-09 16:51:02.213,"Rasa-core, dealing with dates","I have a problem with rasa core, let's suppose that I have a rasa-nlu able to detect time +eg ""let's start tomorrow"" would get the entity time: 2018-06-10:T18:39:155Z +Ok, now I want next branches, or decisions to be conditioned by: + +time is in the past +time before one month from now +time is beyond 1 +month + +I do not know how to do that. I do not know how to convert it to a slot able to influence the dialog. My only idea would be to have an action that converts the date to a categorical slot right after detecting time, but I see two problems with that approach: + +one it would already be too late, meaning that if I do it with a +posterior action it means the rasa-core has already decided what +decision to take without using the date +and secondly, I do know how to save it, because if I have a +stories.md that compares a detecting date like in the example with +the current time, maybe in the time of the example it was beyond one +month but now it is in the past, so the reset of that story would be +wrong. + +I am pretty lost and I do not know how to deal with this, thanks a lot!!!","I think you could have a validation in the custom form. +Where it perform validation on the time and perform next action base on the decision on the time. +Your story will have to train to handle different action paths.",0.0,False,1,5557 +2018-06-10 13:57:31.837,Multi crtieria alterative ranking based on mixed data types,"I am building a recommender system which does Multi Criteria based ranking of car alternatives. I just need to do ranking of the alternatives in a meaningful way. I have ways of asking user questions via a form. +Each car will be judged on the following criteria: price, size, electric/non electric, distance etc. 
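+(A minimal sketch of the URL-generation route from the scraping answer above, assuming the form submits as a plain GET request; the URL and parameter names are made up for illustration.)
+import csv
+import itertools
+import requests
+
+options = {'age': range(5), 'education': range(3), 'sex': range(3),
+           'experience': range(4), 'nationality': range(12)}
+keys = list(options)
+with open('salaries.csv', 'w', newline='') as f:
+    writer = csv.writer(f)
+    # One request per combination: 5*3*3*4*12 = 2160 rows.
+    for combo in itertools.product(*(options[k] for k in keys)):
+        r = requests.get('https://example.com/predict', params=dict(zip(keys, combo)))
+        writer.writerow(list(combo) + [r.text])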
As you can see its a mix of various data types, including ordinal, cardinal(count) and quantitative dat. +My question is as follows: + +Which technique should I use for incorporating all the models into a single score Which I can rank. I looked at normalized Weighted sum model, but I have a hard time assigning weights to ordinal(ranked) data. I tried using the SMARTER approach for assigning numerical weights to ordinal data but Im not sure if it is appropriate. Please help! +After someone can help me figure out answer to finding the best ranking method, what if the best ranked alternative isnt good enough on an absolute scale? how do i check that so that enlarge the alternative set further? + +3.Since the criterion mention above( price, etc) are all on different units, is there a good method to normalized mixed data types belonging to different scales? does it even make sense to do so, given that the data belongs to many different types? +any help on these problems will be greatly appreciated! Thank you!","I am happy to see that you are willing to use multiple criteria decision making tool. You can use Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), TOPSIS, VIKOR etc. Please refer relevant papers. You can also refer my papers. +Krishnendu Mukherjee",-0.3869120172231254,False,1,5558 +2018-06-11 22:00:14.173,Security of SFTP packages in Python,"There is plenty of info on how to use what seems to be third-party packages that allow you to access your sFTP by inputting your credentials into these packages. +My dilemma is this: How do I know that these third-party packages are not sharing my credentials with developers/etc? +Thank you in advance for your input.","Thanks everyone for comments. +To distill it: Unless you do a code review yourself or you get a the sftp package from a verified vendor (ie - packages made by Amazon for AWS), you can not assume that these packages are ""safe"" and won't post your info to a third-party site.",1.2,True,1,5559 +2018-06-11 22:56:02.750,How to sync 2 streams from separate sources,"Can someone point me the right direction to where I can sync up a live video and audio stream? +I know it sound simple but here is my issue: + +We have 2 computers streaming to a single computer across multiple networks (which can be up to hundreds of miles away). +All three computers have their system clocks synchronized using NTP +Video computer gathers video and streams UDP to the Display computer +Audio computer gathers audio and also streams to the Display computer + +There is an application which accepts the audio stream. This application does two things (plays the audio over the speakers and sends network delay information to my application). I am not privileged to the method which they stream the audio. +My application displays the video and two other tasks (which I haven't been able to figure out how to do yet). +- I need to be able to determine the network delay on the video stream (ideally, it would be great to have a timestamp on the video stream from the Video computer which is related to that system clock so I can compare that timestamp to my own system clock). +- I also need to delay the video display to allow it to be synced up with the audio. +Everything I have found assumes that either the audio and video are being streamed from the same computer, or that the audio stream is being done by gstreamer so I could use some sync function. I am not privileged to the actual audio stream. 
I am only given the amount of time the audio was delayed getting there (network delay). +So intermittently, I am given a number as the network delay for the audio (example: 250 ms). I need to be able to determine my own network delay for the video (which I don't know how to do yet). Then I need to compare to see if the audio delay is more than the video network delay. Say the video is 100ms ... then I would need to delay the video display by 150ms (which I also don't know how to do). +ANY HELP is appreciated. I am trying to pick up where someone else has left off in this design so it hasn't been easy for me to figure this out and move forward. Also being done in Python ... which further limits the information I have been able to find. Thanks. +Scott","A typical way to synch audio and video tracks or streams is have a timestamp for each frame or packet, which is relative to the start of the streams. +This way you know that no mater how long it took to get to you, the correct audio to match with the video frame which is 20001999 (for example) milliseconds from the start is the audio which is also timestamped as 20001999 milliseconds from the start. +Trying to synch audio and video based on an estimate of the network delay will be extremely hard as the delay is very unlikely to be constant, especially on any kind of IP network. +If you really have no timestamp information available, then you may have to investigate more complex approaches such as 'markers' in the stream metadata or even some intelligent analysis of the audio and video streams to synch on an event in the streams themselves.",0.0,False,1,5560 +2018-06-12 08:22:14.127,Python script as service has not access to asoundrc configuration file,"I have a python script that records audio from an I2S MEMS microphone, connected to a Raspberry PI 3. +This script runs as supposed to, when accessed from the terminal. The problem appears when i run it as a service in the background. +From what i have seen, the problem is that the script as service, has no access to a software_volume i have configured in asoundrc. The strange thing is that i can see this ""device"" in the list of devices using the get_device_info_by_index() function. +For audio capturing i use the pyaudio library and for making the script a service i have utilized the supervisor utility. +Any ideas what the problem might be and how i can make my script to have access to asoundrc when it runs as a service?","The ~/.asoundrc file is looked for the home directory of the current user (this is what ~ means). +Put it into the home directory of the user as which the service runs, or put the definitions into the global ALSA configuration file /etc/asound.conf.",1.2,True,1,5561 +2018-06-12 14:34:32.823,Odoo 10 mass mailing configure bounces,"I'm using Odoo 10 mass mailing module to send newsletters. I have configured it but I don't know how to configure bounced emails. It is registering correctly sent emails, received (except that it is registering bounced as received), opened and clicks. +Can anyone please help me? +Regards","I managed to solve this problem. Just configured the 'bounce' system parameter to an email with the same name. +Example: +I created an email bounce-register@example.com. Also remember to configure the alias domain in your general settings to 'example.com' +After configuring your email to register bounces you need to configure an incomming mail server for this email (I configured it as an IMAP so I think that should do altough you can also configure it as a POP). 
That would be it. +Hope this info server for you",1.2,True,1,5562 +2018-06-14 15:07:58.413,How to predict word using trained skipgram model?,"I'm using Google's Word2vec and I'm wondering how to get the top words that are predicted by a skipgram model that is trained using hierarchical softmax, given an input word? +For instance, when using negative sampling, one can simply multiply an input word's embedding (from the input matrix) with each of the vectors in the output matrix and take the one with the top value. However, in hierarchical softmax, there are multiple output vectors that correspond to each input word, due to the use of the Huffman tree. +How do we compute the likelihood value/probability of an output word given an input word in this case?","I haven't seen any way to do this, and given the way hierarchical-softmax (HS) outputs work, there's no obviously correct way to turn the output nodes' activation levels into a precise per-word likelihood estimation. Note that: + +the predict_output_word() method that (sort-of) simulates a negative-sampling prediction doesn't even try to handle HS mode +during training, neither HS nor negative-sampling modes make exact predictions – they just nudge the outputs to be more like the current training example would require + +To the extent you could calculate all output node activations for a given context, then check each word's unique HS code-point node values for how close they are to ""being predicted"", you could potentially synthesize relative scores for each word – some measure of how far the values are from a ""certain"" output of that word. But whether and how each node's deviation should contribute to that score, and how that score might be indicative of a interpretable liklihood, is unclear. +There could also be issues because of the way HS codes are assigned strictly by word-frequency – so 'neighbor' word sharing mostly-the-same-encoding may be very different semantically. (There were some hints in the original word2vec.c code that it could potentially be beneficial to assign HS-encodings by clustering related words to have similar codings, rather than by strict frequency, but I've seen little practice of that since.) +I would suggest sticking to negative-sampling if interpretable predictions are important. (But also remember, word2vec isn't mainly used for predictions, it just uses the training-attempts-at-prediction to bootstrap a vector-arrangment that turn out to be useful for other tasks.)",0.0,False,1,5563 +2018-06-15 06:29:25.043,ImportError: cannot import name _remove_dead_weakref python 2.7,"I use windows 7 and python 2.7 +When I used py2exe to make an .exe file I get the error; +Traceback (most recent call last): +File ""mainpy"", line 17, in +File ""main.py"", line 17, in +File ""zipextimporter.pyc"", line 82, in load_module +File ""zipextimporter.pyc"", line 82, in load_module +File ""logging_init_.pyc"", line 26, in +File ""zipextimporter.pyc"", line 82, in load_module +File ""weakref.pyc"", line 14, in +ImportError: cannot import name _remove_dead_weakref +The same code could be used to make an .exe file in another computer so there is nothing wrong with the code in main.py. The minor environmental difference may cause this problem. I used pycharm, python 2.7.10 and py2exe 0.6.9. On another computer all other config are the same except using sublimetext instead of pycharm. +Could anyone please tell me how to fix that? 
+Another tricky thing is that",It is possible that the library does not exist on the other computer. Please check whether the library exists or not.",0.0,False,1,5564 +2018-06-15 08:06:21.200,finding length of linked list in constant time python,"I'm trying to write a function which finds the length of a linked list in O(1). +I know how to implement it in O(n) but I can't figure out how to do it in constant time... is that even possible?","It's not possible by traversal, because you have to pass through the entire linked list at least once and that takes O(n). +Otherwise you have to keep a variable which counts the elements as they are inserted into the linked list",0.0,False,1,5565 +2018-06-15 21:13:27.137,"Accessing Hidden Tabs, Web Scraping With Python 3.6","I'm using bs4 and urllib.request in python 3.6 to webscrape. I have to open tabs / be able to toggle an ""aria-expanded"" in button tabs in order to access the div tabs I need. +The button tab when the tab is closed is as follows with <> instead of --: +button id=""0-accordion-tab-0"" type=""button"" class=""accordion-panel-title u-padding-ver-s u-text-left text-l js-accordion-panel-title"" aria-controls=""0-accordion-panel-0"" aria-expanded=""false"" +When opened, the aria-expanded=""true"" and the div tab appears underneath. +Any idea on how to do this? +Help would be super appreciated.","BeautifulSoup is used to parse HTML/XML content. You can't click around on a webpage with it. +I recommend you look through the document to make sure it isn't just moving the content from one place to the other. If the content is loaded through AJAX when the button is clicked then you will have to use something like selenium to trigger the click. +An easier option could be to check what url the content is fetched from when you click the button and make a similar call in your script if possible.",0.0,False,1,5566 +2018-06-16 19:30:32.583,How to I close down a python server built using flask,"When I run this simple code: +from flask import Flask,render_template +app = Flask(__name__) +@app.route('/') +def index(): + return 'this is the homepage' +if __name__ == ""__main__"": + app.run(debug=True, host=""0.0.0.0"",port=8080) +It works fine but when I close it using ctrl+z in the terminal and try to run it again I get OSError: [Errno 98] Address already in use +So I tried changing the port address and re-running it, which works for some of the port numbers I enter. But I want to know a graceful way to clear the address being used by the previous program so that it is free for the current one. +Also, what is the proper way to shut down a server and free the port address? +Kindly tell a simple way to do so OR explain the method used fully, because I read solutions to similar problems but didn't understand any of it. +When I run +netstat -tulpn +The output is : +(Not all processes could be identified, non-owned process info + will not be shown, you would have to be root to see it all.)
+Active Internet connections (only servers) +Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name +tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN - +tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN - +tcp 0 0 0.0.0.0:3689 0.0.0.0:* LISTEN 4361/rhythmbox +tcp6 0 0 ::1:631 :::* LISTEN - +tcp6 0 0 :::3689 :::* LISTEN 4361/rhythmbox +udp 0 0 0.0.0.0:5353 0.0.0.0:* 3891/chrome +udp 0 0 0.0.0.0:5353 0.0.0.0:* - +udp 0 0 0.0.0.0:39223 0.0.0.0:* - +udp 0 0 127.0.1.1:53 0.0.0.0:* - +udp 0 0 0.0.0.0:68 0.0.0.0:* - +udp 0 0 0.0.0.0:631 0.0.0.0:* - +udp 0 0 0.0.0.0:58140 0.0.0.0:* - +udp6 0 0 :::5353 :::* 3891/chrome +udp6 0 0 :::5353 :::* - +udp6 0 0 :::41938 :::* - +I'm not sure how to interpret it. +the output of ps aux | grep 8080 +is : +shreyash 22402 0.0 0.0 14224 928 pts/2 S+ 01:20 0:00 grep --color=auto 8080 +I don't know how to interpret it. +Which one is the the process name and what is it's id?","It stays alive because you're not closing it. With Ctrl+Z you're removing the execution from current terminal without killing a process. +To stop the execution use Ctrl+C",0.2012947653214861,False,2,5567 +2018-06-16 19:30:32.583,How to I close down a python server built using flask,"When I run this simple code: +from flask import Flask,render_template +app = Flask(__name__) +@app.route('/') +def index(): + return 'this is the homepage' +if __name__ == ""__main__"": + app.run(debug=True, host=""0.0.0.0"",port=8080) +It works fine but when I close it using ctrl+z in the terminal and try to run it again I get OSError: [Errno 98] Address already in use +So I tried changing the port address and re-running it which works for some of the port numbers I enter. But I want to know a graceful way to clear the address being used by previous program so that it is free for the current one. +Also is what is the apt way to shutdown a server and free the port address. +Kindly tell a simple way to do so OR explain the method used fully because I read solutions to similar problems but didn't understand any of it. +When I run +netstat -tulpn +The output is : +(Not all processes could be identified, non-owned process info + will not be shown, you would have to be root to see it all.) +Active Internet connections (only servers) +Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name +tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN - +tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN - +tcp 0 0 0.0.0.0:3689 0.0.0.0:* LISTEN 4361/rhythmbox +tcp6 0 0 ::1:631 :::* LISTEN - +tcp6 0 0 :::3689 :::* LISTEN 4361/rhythmbox +udp 0 0 0.0.0.0:5353 0.0.0.0:* 3891/chrome +udp 0 0 0.0.0.0:5353 0.0.0.0:* - +udp 0 0 0.0.0.0:39223 0.0.0.0:* - +udp 0 0 127.0.1.1:53 0.0.0.0:* - +udp 0 0 0.0.0.0:68 0.0.0.0:* - +udp 0 0 0.0.0.0:631 0.0.0.0:* - +udp 0 0 0.0.0.0:58140 0.0.0.0:* - +udp6 0 0 :::5353 :::* 3891/chrome +udp6 0 0 :::5353 :::* - +udp6 0 0 :::41938 :::* - +I'm not sure how to interpret it. +the output of ps aux | grep 8080 +is : +shreyash 22402 0.0 0.0 14224 928 pts/2 S+ 01:20 0:00 grep --color=auto 8080 +I don't know how to interpret it. +Which one is the the process name and what is it's id?","You will have another process listening on port 8080. You can check to see what that is and kill it. You can find processes listening on ports with netstat -tulpn. 
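+(If you'd rather do that find-and-kill step from Python, a minimal sketch, assuming the psutil package is installed; port 8080 matches the question, everything else is illustrative. On Linux you may need root to see other users' sockets.)
+import os, signal
+import psutil
+
+for conn in psutil.net_connections(kind='inet'):
+    # Find whatever is still holding port 8080 and ask it to terminate.
+    if conn.laddr and conn.laddr.port == 8080 and conn.pid:
+        os.kill(conn.pid, signal.SIGTERM)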
Before you do that, check to make sure you don't have another terminal window open with the running instance.",-0.1016881243684853,False,2,5567 +2018-06-18 05:46:38.073,How to print all recieved post request include headers in python,"I am a python newbie and i have a controler that get Post requests. +I try to print to log file the request that it receive, i am able to print the body but how can i extract all the request include the headers? +I am using request.POST.get() to get the body/data from the request. +Thanks","request.POST should give you the POST body if it is get use request.GET +if the request body is json use request.data",-0.2012947653214861,False,1,5568 +2018-06-18 09:08:56.737,Add conda to my environment variables or path?,"I am having trouble adding conda to my environment variables on windows. I installed anaconda 3 though I didn't installed python, so neither pip or pip3 is working in my prompt. I viewed a few post online but I didn't find anything regarding how to add conda to my environment variables. +I tried to create a PYTHONPATH variable which contained every single folder in Anaconda 3 though it didn't worked. +My anaconda prompt isn't working too. :( +so...How do I add conda and pip to my environment variables or path ?","Thanks guys for helping me out. I solved the problem reinstalling anaconda (several times :[ ), cleaning every log and resetting the path variables via set path= in the windows power shell (since I got some problems reinstalling anaconda adding the folder to PATH[specifically ""unable to load menus"" or something like that])",0.0,False,1,5569 +2018-06-18 16:13:18.567,"getting ""invalid environment marker"" when trying to install my python project","I'm trying to set up a beta environment on Heroku for my Django-based project, but when I install I am getting: + +error in cryptography setup command: Invalid environment marker: + python_version < '3' + +I've done some googling, and it is suggested that I upgrade setuptools, but I can't figure out how to do that. (Putting setuptools in requirements.txt gives me a different error message.) +Sadly, I'm still on Python 2.7, if that matters.","The problem ended up being the Heroku ""buildpack"" that I was using. I had been using the one from ""thenovices"" for a long time so that I could use numpy, scipy, etc. +Sadly, that buildpack specifies an old version of setuptools and python, and those versions were not understanding some of the new instructions (python_version) in the newer setup files for cryptography. +If you're facing this problem, Heroku's advice is to move to Docker-based Heroku, rather than ""traditional"" Heroku.",1.2,True,1,5570 +2018-06-19 10:47:23.783,how to use the Werkzeug debugger in postman?,"i am building a flask RESTapi and i am using postman to make http post requests to my api , i want to use the werkzeug debugger , but postman wont allow me to put in the debugging pin and debug the code from postman , what can i do ?","Never needed any debugger for postman. This is not the tool you need the long blanket of code for one endpoint to test. +It gives a good option - console. I have never experienced any trouble this simple element didn't help me so far.",0.0,False,1,5571 +2018-06-19 13:14:35.270,Importing Numpy into Sublime Text 3,"I'm new to coding and I have been learning it on Jupyter. I have anaconda, Sublime Text 3, and the numpy package installed on my Mac. 
+On Jupyter, we would import numpy by simply typing + import numpy as np +However, this doesn't seem to work on Sublime, as I get the error ModuleNotFoundError: No module named 'numpy' +I would appreciate it if someone could guide me on how to get this working. Thanks!","If you have Anaconda, install Spyder. +If you continue to have this problem, you could check all the libraries installed through anaconda. +I suggest you install numpy from anaconda.",0.3869120172231254,False,1,5572 +2018-06-19 18:36:14.277,dataframe from underlying script not updating,"I have a script called ""RiskTemplate.py"" which generates a pandas dataframe consisting of 156 columns. I created two additional columns, which gives me a total count of 158 columns. However, when I run this ""RiskTemplate.py"" script in another script using the below code, the dataframe only pulls the original 156 columns I had before the two additional columns were added. +exec(open(""RiskTemplate.py"").read()) +how can I get the reference script to pull in the revised dataframe from the underlying script ""RiskTemplate.py""? +here are the lines creating the two additional dataframe columns; they work as intended when I run them directly in the ""RiskTemplate.py"" script. The original dataframe is pulling from SQL via df = pd.read_sql(query,connection) +df['LMV % of NAV'] = df['longmv']/df['End of Month NAV']*100 +df['SMV % of NAV'] = df['shortmv']/df['End of Month NAV']*100","I figured it out, sorry for the confusion. I had not saved the RiskTemplate.py that I updated into the same folder that the other reference script was looking at! Newbie!",0.3869120172231254,False,1,5573 +2018-06-20 01:59:58.440,Python regex to match words not having dot,"I want to accept only those strings having the pattern 'wild.flower', 'pink.flower',...i.e any word preceding '.flower', but the word should not contain a dot. For example, ""pink.blue.flower"" is unacceptable. Can anyone help how to do this in python using regex?","You are looking for ""^\w+\.flower$"".",0.1618299653758019,False,2,5574 +2018-06-20 01:59:58.440,Python regex to match words not having dot,"I want to accept only those strings having the pattern 'wild.flower', 'pink.flower',...i.e any word preceding '.flower', but the word should not contain a dot. For example, ""pink.blue.flower"" is unacceptable. Can anyone help how to do this in python using regex?","Your case of pink.blue.flower is unclear. There are 2 possibilities: +Match only blue (cut off the preceding dot and what was before it). +Reject this case altogether (you want to match a word preceding .flower only if it is not preceded with a dot). +In the first case accept other answers. +But if you want the second solution, use: \b(? Settings > Project > Project Interpreter.",1.2,True,1,5580 +2018-06-22 18:41:42.363,Getting IDs from t-SNE plot?,"Quite simple, +If I perform t-SNE in Python for high-dimensional data then I get 2 or 3 coordinates that reflect each new point. +But how do I map these to the original IDs? +One way that I can think of is if the indices are kept fixed the entire time, then I can do: +Pick a point in t-SNE +See what row it was in t-SNE (e.g. index 7) +Go to original data and pick out row/index 7. +However, I don't know how to check if this actually works. My data is super high-dimensional and it is very hard to make sense of it with a normal ""sanity check"". +Thanks a lot! +Best,","If you are using sklearn's t-SNE, then your assumption is correct. The ordering of the inputs matches the ordering of the outputs.
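+(A minimal sketch of that index alignment, assuming scikit-learn; the data and ids below are stand-ins for your own.)
+import numpy as np
+from sklearn.manifold import TSNE
+
+x = np.random.rand(100, 50)                  # high-dimensional rows
+ids = ['row_%d' % i for i in range(100)]     # whatever identifies each row
+y = TSNE(n_components=2).fit_transform(x)
+print(ids[7], y[7])                          # y[7] embeds the row with ids[7]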
So if you do y=TSNE(n_components=n).fit_transform(x) then y and x will be in the same order so y[7] will be the embedding of x[7]. You can trust scikit-learn that this will be the case.",0.3869120172231254,False,1,5581 +2018-06-22 19:56:07.460,how to print the first lines of a large XML?,"I have this large XML file on my drive. The file is too large to be opened with sublimetext or other text editors. +It is also too large to be loaded in memory by the regular XML parsers. +Therefore, I dont even know what's inside of it! +Is it just possible to ""print"" a few rows of the XML files (as if it was some sort of text document) so that I have an idea of the nodes/content? +I am suprised not to find an easy solution to that issue. +Thanks!","This is one of the few things I ever do on the command line: the ""more"" command is your friend. Just type + +more big.xml",0.1352210990936997,False,1,5582 +2018-06-25 05:26:40.297,Two python3 interpreters on win10 cause misunderstanding,"I used win10. When I installed Visual Studio2017, I configure the Python3 environment. And then after half year I installed Anaconda(Python3) in another directory. Now I have two interpreters in different directories. + +Now, no matter in what IDE I code the codes, after I save it and double click it in the directory, the Python File is run by the interpreter configured by VS2017. + +Why do I know that? I use sys.path to get to know it. But when I use VS2017 to run the code, it shows no mistake. The realistic example is that I pip install requests in cmd, then I import it in a Python File. Only when I double click it, the Traceback says I don't have this module. In other cases it works well. + +So, how to change the default python interpreter of the cmd.exe?","Just change the interpreter order of the python in the PATH is enough. +If you want to use python further more, I suggest you to use virtual environment tools like pipenv to control your python interpreters and modules.",0.0,False,1,5583 +2018-06-25 07:13:06.080,How can I update Python version when working on JGRASP on mac os?,"When I installed the new version of python 3.6.5, JGRASP was using the previous version, how can I use the new version on JGRASP?","By default, jGRASP will use the first ""python"" on the system path. +The new version probably only exists as ""python3"". If that is the case, install jGRASP 2.0.5 Beta if you are using 2.0.4 or a 2.0.5 Alpha. Then, go to ""Settings"" > ""Compiler Settings"" > ""Workspace"", select language ""Python"" if not already selected, select environment ""Python 3 (python 3) - generic"", hit ""Use"" button, and ""OK"" the dialog.",0.0,False,1,5584 +2018-06-25 13:30:47.293,Passing command line parameters to python script from html page,"I have a html page with text box and submit button. When somebody enters data in text box and click submit, i have to pass that value to a python script which does some operation and print output. Can someone let me now how to achieve this. I did some research on stackoverflow/google but nothing conclusive. I have python 2.7, Windows 10 and Apache tomcat. Any help would be greatly appreciated. +Thanks, +Jagadeesh.K","Short answer: You can't just run a python script in the clients browser. It doesn't work that way. 
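+(For the HTML-form question above, a minimal sketch of the usual server-side route, assuming Flask; the endpoint and the field name 'text' are illustrative only.)
+from flask import Flask, request
+
+app = Flask(__name__)
+
+@app.route('/submit', methods=['POST'])
+def submit():
+    value = request.form['text']   # the textbox value posted by the HTML form
+    return 'You sent: ' + value    # replace with your own Python logic
+
+if __name__ == '__main__':
+    app.run()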
+If you want to execute some python when the user does something, you will have to run a web app like the other answer suggested.",0.0,False,1,5585 +2018-06-26 09:53:17.980,How to uninstall (mini)conda entirely on Windows,"I was surprised to be unable to find any information anywhere on the web on how to do this properly, but I suppose my surprise ought to be mitigated by the fact that normally this can be done via Microsoft's 'Add or Remove Programs' via the Control Panel. +This option is not available to me at this time, since I had installed Python again elsewhere (without having uninstalled it), then uninstalled that installation the standard way. Now, despite no option for uninstalling conda via the Control Panel, conda persists in my command line. +Now, the goal is to remove every trace of it, to end up in a state as though conda never existed on my machine in the first place before I reinstall it to the necessary location. +I have a bad feeling that if I simply delete the files and then reinstall, this will cause problems. Does anyone have any guidance in how to achieve the above?","Open the folder where you installed miniconda, and then search for uninstall.exe. Open that it will erase miniconda for you.",0.9950547536867304,False,1,5586 +2018-06-27 02:35:38.367,"protobuf, and tensorflow installation, which version to choose","I already installed python3.5.2, tensorflow(with python3.5.2). +I want to install protobuf now. However, protobuf supports python3.5.0; 3.5.1; and 3.6.0 +I wonder which version should I install. +My question is should I upgrade python3.5.2 to python3.6, or downgrade it to python3.5.1. +I see some people are trying downgrade python3.6 to python3.5 +I googled how to change python3.5.2 to python3.5.1, but no valuable information. I guess this is not usual option.","So it is version problem +one google post says change python version to a more general version. +I am not sure how to change python3.5.2 to python3.5.1 +I just installed procobuf3.6 +I hope it works",0.0,False,1,5587 +2018-06-27 06:09:44.330,How to Resume Python Script After System Reboot?,"I'm still new to writing scripts with Python and would really appreciate some guidance. +I'm wondering how to continue executing my Python script from where it left off after a system restart. +The script essentially alternates between restarting and executing a task for example: restart the system, open an application and execute a task, restart the system, open another application and execute another task, etc... +But the issue is that once the system restarts and logs back in, all applications shut down including the terminal so the script stops running and never executes the following task. The program shuts down early without an error so the logs are not really of much use. Is there any way to reopen the script and continue from where it left off or prevent applications from being closed during a reboot ? Any guidance on the issue would be appreciated. +Thanks! +Also, I'm using a Mac running High Sierra for reference.","You could write your current progress to a file just before you reboot and read said file on Programm start. +About the automatic restart of the script after reboot: you could have the script to put itself in the Autostart of your system and after everything is done remove itself from it.",0.0,False,1,5588 +2018-06-29 09:49:04.483,Incorrect UTC date in MongoDB Compass,"I package my python (flask) application with docker. 
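+(Back to the resume-after-reboot answer: a minimal sketch of writing progress to a state file just before a reboot and reading it on the next start; the file name and 'step' counter are illustrative.)
+import json, os
+
+STATE = 'progress.json'
+
+def load_step():
+    if os.path.exists(STATE):
+        with open(STATE) as f:
+            return json.load(f)['step']
+    return 0
+
+def save_step(step):
+    with open(STATE, 'w') as f:
+        json.dump({'step': step}, f)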
Within my app I'm generating the UTC date with the datetime library, using datetime.utcnow(). +Unfortunately, when I inspect the saved data with MongoDB Compass the UTC date is offset two hours (to my local time zone). All my docker containers have the time zone set to Etc/UTC. Moreover, the mongoengine connection to MongoDB uses tz_aware=False and tzinfo=None, which prevents on-the-fly date conversions. +Where does the offset come from and how to fix it?","Finally, after trying to prove myself wrong and nearly tearing my hair out, I found the cause and solution for my problem. We are living in a world of illusion and what you see is not what you get!!! I decided to inspect my data over the mongo shell client rather than the MongoDB Compass GUI. I figured out that the data that arrived in the database contained the correct UTC date. This ruled out all my previous assumptions that there had to be something wrong with my python application and the environment that the application is living in. What was left was MongoDB Compass itself. +After changing the time zone on my machine to a random time zone, and refreshing the collection within MongoDB Compass, the displayed UTC date changed to a date that fits the random time zone. +Be aware that MongoDB Compass displays whatever is saved in the database Date field, shifted by your machine's time zone. For example, if you saved a UTC time equivalent to 8:00 am, and your machine's time zone is Europe/Warsaw, then MongoDB Compass will display 10:00am.",1.2,True,1,5589 +2018-07-01 07:10:49.220,How to replace all string in all columns using pandas?,"In pandas, how do I replace '&amp;' with '&' in all columns, where '&amp;' could be in any position in a string? +For example, in column Title if there is a value 'Good &amp; bad', how do I replace it with 'Good & bad'?","Try this +df['Title'] = df['Title'].str.replace(""&amp;"", ""&"")",0.0,False,1,5590 +2018-07-01 23:33:29.923,Binance API: how to get the USD as the quote asset,"I'm wondering what the symbol is or if I am even able to get historical price data on BTC, ETH, etc. denominated in United States Dollars. +right now if I'm making a call to the client such as: +Client.get_symbol_info('BTCUSD') +it returns nothing +Does anyone have any idea how to get this info? Thanks!","You cannot make trades in Binance with dollars, but instead with Tether (USDT), which is a cryptocurrency that is backed 1-to-1 with the dollar. +To solve that use BTCUSDT +Change BTCUSD to BTCUSDT",0.9950547536867304,False,1,5591 +2018-07-02 10:22:40.247,How can i scale a thickness of a character in image using python OpenCV?,"I created one task, where I have a white background and black digits. +I need to take the largest-by-thickness digit. I have made my picture bw and recognized all symbols, but I don't understand how to scale thickness. I have tried arcLength(contours), but it gave me the largest by size. I have tried morphological operations, but as I understood, they help to remove noise and other mistakes in the picture, right? And I had a thought to check the distance between neighbour points of contours, but then I thought that it would be hard because of the not exact and clear form of the symbols (I drew them in Paint). So, that's all the ideas I had. Can you help me in this question by telling me names of themes in Comp. vision and OpenCV that could help me to solve this task? I don't need the exact algorithm of the solution, only themes. And if that's not an OpenCV task, so which is? What library?
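+(One caveat on the pandas answer above: replace() without regex=True only matches whole cell values, so a sketch that also covers substrings in every column might look like this.)
+import pandas as pd
+
+df = pd.DataFrame({'Title': ['Good &amp; bad'], 'Note': ['up &amp; down']})
+df = df.replace('&amp;', '&', regex=True)   # substring replacement across all columns
+print(df)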
Should I learn some pack of themes and basics before the solution of my task?","One possible solution that I can think of is to alternate erosion and find contours till you have only one contour left (that should be the thicker). This could work if the difference in thickness is enough, but I can also foresee many particular cases that can prevent a correct identification, so it depends very much on how is your original image.",0.2012947653214861,False,1,5592 +2018-07-02 13:55:01.080,"django inspectdb, how to write multiple table name during inspection","When I first execute this command it create model in my model.py but when I call it second time for another table in same model.py file then that second table replace model of first can anyone told the reason behind that because I am not able to find perfect solution for that? +$ python manage.py inspectdb tablename > v1/projectname/models.py +When executing this command second time for another table then it replace first table name. +$ python manage.py inspectdb tablename2 > v1/projectname/models.py","python manage.py inspectdb table1 table2 table3... > app_name/models.py +Apply this command for inspection of multiple tables of one database in django.",0.0,False,1,5593 +2018-07-02 17:04:29.297,Count Specific Values in Dataframe,"If I had a column in a dataframe, and that column contained two possible categorical variables, how do I count how many times each variable appeared? +So e.g, how do I count how many of the participants in the study were male or female? +I've tried value_counts, groupby, len etc, but seem to be getting it wrong. +Thanks","You could use len([x for x in df[""Sex""] if x == ""Male""). This iterates through the Sex column of your dataframe and determines whether an element is ""Male"" or not. If it is, it is appended to a list via list comprehension. The length of that list is the number of Males in your dataframe.",0.0,False,1,5594 +2018-07-03 17:27:42.043,Which newline character is in my CSV?,"We receive a .tar.gz file from a client every day and I am rewriting our import process using SSIS. One of the first steps in my process is to unzip the .tar.gz file which I achieve via a Python script. +After unzipping we are left with a number of CSV files which I then import into SQL Server. As an aside, I am loading using the CozyRoc DataFlow Task Plus. +Most of my CSV files load without issue but I have five files which fail. By reading the log I can see that the process is reading the Header and First line as though there is no HeaderRow Delimiter (i.e. it is trying to import the column header as ColumnHeader1ColumnValue1 +I took one of these CSVs, copied the top 5 rows into Excel, used Text-To-Columns to delimit the data then saved that as a new CSV file. +This version imported successfully. +That makes me think that somehow the original CSV isn't using {CR}{LF} as the row delimiter but I don't know how to check. Any suggestions?","Seeing that you have EmEditor, you can use EmEditor to find the eol character in two ways: + +Use View > Character Code Value... at the end of a line to display a dialog box showing information about the character at the current position. +Go to View > Marks and turn on Newline Characters and CR and LF with Different Marks to show the eol while editing. LF is displayed with a down arrow while CRLF is a right angle. 
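+(If you'd rather check the line endings programmatically, a short Python sketch that counts them; the file name is illustrative.)
+with open('suspect.csv', 'rb') as f:
+    data = f.read()
+crlf = data.count(b'\r\n')
+lf = data.count(b'\n') - crlf     # bare LFs, not part of a CRLF pair
+cr = data.count(b'\r') - crlf     # bare CRs, not part of a CRLF pair
+print('CRLF:', crlf, 'LF:', lf, 'CR:', cr)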
+ +Some other things you could try checking for are: file encoding, wrong type of data for a field and an inconsistent number of columns.",0.0,False,1,5595 +2018-07-03 18:21:44.653,Calling custom C subroutines in a Python application,"I have two custom-written C routines that I would like to use as a part of a large Python application. I would prefer not to rewrite the C code in pure Python (or Cython, etc.), especially to maintain speed. +What is the cleanest, easiest way that I can use my C code from my Python code? Or, what is the cleanest, easiest way for me to wrap my C code for use in my Python source? +I know ""cleanest"" and ""easiest"" will attract opinions, but I really just need some good options for using custom pre-written code, versus many of the other answers/tutorials which describe how to use full-on C libraries as CPython extensions. +EDIT: +Cython and ctypes have both been suggested. Which is a better choice in my case? Each of the two routines I mentioned originally are very computationally intensive. They are used for image calculations and reconstructions, so my plan is to build a Python application around their use (with other functionality in mind that I already have in Python) with the C code run as needed for processing.","Use cython to wrap your C code. In other words, create a CPython extension using Cython, that calls your C code.",1.2,True,1,5596 +2018-07-04 00:03:40.780,kubernetes architecture for microservices application - suggestions,"I have been asked to create a system which has different functionalities. Assume service 1, service 2 and service 3. I need to run these services per hour to do something. +To make the system of those services I need: database, web interface for seeing the result of the process, caching and etc. +This is what I have thought about so far: + +I need kubernetes to orchestrate my services which are packaged as docker containers. I will deploy mySql to save my data and I can use Redis cache for caching. +My service are written by python scripts and Java and need to interact with each other through APIs. +I think I can use AWS EKS for my kubernetes cluster + + +this is what I need to know: + +how to deploy python or Java applications and connect them to each other and also connect them to a database service +I also need to know how to schedule the application to run per hour so I can see the results in the web interface. + +Please shoot any ideas or questions you have. +Any help would be appreciated.","For python/java applications, create docker images for both applications. If these application run forever to serve traffic then deploy them as deployments.If you need to have only cron like functionality, deploy as Job in kubernetes. +To make services accessible, create services as selector for applications, so these services can route traffic to specific applications. 
+Database or cache should be exposed as service endpoints so your applications are environment independent.",0.3869120172231254,False,1,5597 +2018-07-04 12:45:42.993,search_s search_ext_s search_s methods of python-ldap library doesn't return any Success response code,"I am using search_ext_s() method of python-ldap to search results on the basis of filter_query, upon completion of search I get msg_id which I passed in result function like this ldap_object.result(msg_id) this returns tuple like this (100, attributes values) which is correct(I also tried result2, result3, result4 method of LDAP object), But how can I get response code for ldap search request, also if there are no result for given filter_criteria I get empty list whereas in case of exception I get proper message like this +ldap.SERVER_DOWN: {u'info': 'Transport endpoint is not connected', 'errno': 107, 'desc': u""Can't contact LDAP server""} +Can somebody please help me if there exists any attribute which can give result code for successful LDAP search operation. +Thanks, +Radhika","An LDAP server simply may not return any results, even if there was nothing wrong with the search operation sent by the client. With python-ldap you get an empty result list. Most times this is due to access control hiding directory content. In general the LDAP server won't tell you why it did not return results. +(There are some special cases where ldap.INSUFFICIENT_ACCESS is raised but you should expect the behaviour to be different when using different LDAP servers.) +In python-ldap if the search operation did not raise an exception the LDAP result code was ok(0). So your application has to deal with an empty search result in some application-specific way, e.g. by also raising a custom exception handled by upper layers.",1.2,True,1,5598 +2018-07-06 07:29:16.617,How to find dot product of two very large matrices to avoid memory error?,"I am trying to learn ML using Kaggle datasets. In one of the problems (using Logistic regression) inputs and parameters matrices are of size (1110001, 8) & (2122640, 8) respectively. +I am getting memory error while doing it in python. This would be same for any language I guess since it's too big. My question is how do they multiply matrices in real life ML implementations (since it would usually be this big)? +Things bugging me : + +Some ppl in SO have suggested to calculate dot product in parts and then combine. But even then matrix would be still too big for RAM (9.42TB? in this case) + +And If I write it to a file wouldn't it be too slow for optimization algorithms to read from file and minimize function? + +Even if I do write it to file how would fmin_bfgs(or any opt. function) read from file? + +Also Kaggle notebook shows only 1GB of storage available. I don't think anyone would allow TBs of storage space. + +In my input matrix many rows have similar values for some columns. Can I use it my advantage to save space? (like sparse matrix for zeros in matrix) +Can anyone point me to any real life sample implementation of such cases. Thanks!","I have tried many things. I will be mentioning these here, if anyone needs them in future: + +I had already cleaned up data like removing duplicates and +irrelevant records depending on given problem etc. +I have stored large matrices which hold mostly 0s as sparse matrix. +I implemented the gradient descent using mini-batch method instead of plain old Batch method (theta.T dot X). 
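+(A rough sketch of that mini-batch update on a scipy sparse matrix, assuming a plain linear model; the shapes, density and learning rate are illustrative only.)
+import numpy as np
+from scipy import sparse
+
+X = sparse.random(100000, 8, density=0.3, format='csr')   # stand-in for the inputs
+y = np.random.rand(100000)
+theta = np.zeros(8)
+lr, batch = 0.01, 1000
+
+for i in range(0, X.shape[0], batch):
+    Xb, yb = X[i:i + batch], y[i:i + batch]
+    # Gradient computed on this batch only, so the full matrix never densifies.
+    grad = Xb.T.dot(Xb.dot(theta) - yb) / Xb.shape[0]
+    theta -= lr * grad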
+ +Now everything is working fine.",1.2,True,1,5599 +2018-07-06 17:58:05.770,Python Unit test debugging in VS code,"I use VS code for my Python projects and we have unit tests written using Python's unittest module. I am facing a weird issue with debugging unit tests. +VSCode Version: May 2018 (1.24) +OS Version: Windows 10 +Let's say I have 20 unit tests in a particular project. +I run the tests by right clicking on a unit test file and click 'Run all unit tests' +After the run is complete, the results bar displays how many tests are passed and how many are failed. (e.g. 15 passed, 5 failed). +And I can run/debug individual test because there is a small link on every unit test function for that. +If I re-run the tests from same file, then the results bar displays the twice number of tests. (e.g. 30 passed, 10 failed) +Also the links against individual test functions disappear. So I cannot run individual tests. +The only way to be able to run/debug individual tests after this is by re-launching the VS code. +Any suggestions on how to fix this?",This was a bug in Python extension for VS code and it is fixed now.,1.2,True,1,5600 +2018-07-08 23:33:21.993,Wondering how I can delete all of my python related files on Mac,"So I was trying to install kivy, which lead me to install pip, and I went down a rabbit hole of altering directories. I am using PyCharm for the record. +I would like to remove everything python related (including all libraries like pip) from my computer, and start fresh with empty directories, so when I download pycharm again, there will be no issues. +I am using a Mac, so if any of you could let me know how to do that on a Mac, it would be greatly appreciated. +Could I just open finder, search python, and delete all of the files (there are tons) or would that be too destructive? +I hope I am making my situation clear enough, please comment any questions to clarify things. +Thanks!","If you are familiar with the Terminal app, you can use command lines to uninstall Python from your Mac. For this, follow these steps: + + +Move Python to Trash. +Open the Terminal app and type the following command line in the window: ~ alexa$ sudo rm -rf /Applications/Python\ 3.6/ +It will require you to enter your administrator password to confirm the deletion. + + +And for the PyCharm: + +Just remove the ~/Library/Caches/PyCharm20 and + ~/Library/Preferences/PyCharm20 directories. + +Or if that won't be enough: + + +Go to Applications > right click PyCharm > move to trash +open a terminal and run the following: find ~/Library/ -iname ""pycharm"" +verify that all of the results are in fact related to PyCharm and not something else important you need to keep. Then, remove them all + using the command: find ~/Library -iname ""pycharm"" -exec rm -r ""{}"" + \;",0.3869120172231254,False,1,5601 +2018-07-10 09:58:49.683,Lost artwork while converting .m4a to .mp3 (Python),"I'm trying to convert m4a audio file with artwork (cover) to mp3. I'm using ffmpeg to convert the audio. +Once it copies, the artwork is lost. I'm quite not sure, how to retain the cover. I found some reference about mutagen library but not sure again how to use to copy the artwork. +Any help would be great. + +ffmpeg -i source/file -acodec libmp3lame -ab 128k destination.mp3 + +Update: +I'm reading the artwork and m4a to be able to attache it back. +I can get the artwork by using + +artwork = audio.tags['covr'] + +Now my problem is how do I save the artwork as image in a new file? 
+I tried the Following: + +with open(path/to/write, 'wb') as img: + img.write(artwork) + +This gives me an error + +'list' does not support the buffer interface line + +Any suggestion, how I can save the artwork extracted covr data?","If anyone is having the same issue; +I ended up reading the artwork from original file and attaching it back to mp3 + +if audioFileNameWithM4AExtension.startswith(""covr""): #checks if it has cover + cover = audioFileNameWithM4AExtension.tags['covr'][0] #gets the cover",0.0,False,1,5602 +2018-07-10 15:26:20.883,Use proxy sentences from cleaned data,"Gensim's Word2Vec model takes as an input a list of lists with the inner list containing individual tokens/words of a sentence. As I understand Word2Vec is used to ""quantify"" the context of words within a text using vectors. +I am currently dealing with a corpus of text that has already been split into individual tokens and no longer contains an obvious sentence format (punctuation has been removed). I was wondering how should I input this into the Word2Vec model? +Say if I simply split the corpus into ""sentences"" of uniform length (10 tokens per sentence for example), would this be a good way of inputting the data into the model? +Essentially, I am wondering how the format of the input sentences (list of lists) affects the output of Word2Vec?","That sounds like a reasonable solution. If you have access to data that is similar to your cleaned data you could get average sentence length from that data set. Otherwise, you could find other data in the language you are working with (from wikipedia or another source) and get average sentence length from there. +Of course your output vectors will not be as reliable as if you had the correct sentence boundaries, but it sounds like word order was preserved so there shouldn't be too much noise from incorrect sentence boundaries.",0.2012947653214861,False,1,5603 +2018-07-10 19:19:58.840,"Python: ContextualVersionConflict: pandas 0.22.0; Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'})","I have this issue: + +ContextualVersionConflict: (pandas 0.22.0 (...), + Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'}) + +I have even tried to uninstall pandas and install scikit-survival + dependencies via anaconda. But it still does not work.... +Anyone with a suggestion on how to fix? +Thanks!",Restarting jupyter notebook fixed it. But I am unsure why this would fix it?,0.9999092042625952,False,1,5604 +2018-07-11 15:01:09.260,How do I calculate the percentage of difference between two images using Python and OpenCV?,"I am trying to write a program in Python (with OpenCV) that compares 2 images, shows the difference between them, and then informs the user of the percentage of difference between the images. I have already made it so it generates a .jpg showing the difference, but I can't figure out how to make it calculate a percentage. Does anyone know how to do this? +Thanks in advance.",You will need to calculate this on your own. You will need the count of diferent pixels and the size of your original image then a simple math: (diferentPixelsCount / (mainImage.width * mainImage.height))*100,0.0,False,1,5605 +2018-07-11 21:22:40.900,How to import 'cluster' and 'pylab' into Pycharm,I would like to use Pycharm to write some data science code and I am using Visual Studio Code and run it from terminal. But I would like to know if I could do it on Pycharm? I could not find some modules such as cluster and pylab on Pycharm? 
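+(Back to the image-difference answer above: a minimal sketch of that percentage with OpenCV and NumPy, assuming both images have the same dimensions; the file names are illustrative.)
+import cv2
+import numpy as np
+
+a = cv2.imread('first.jpg')
+b = cv2.imread('second.jpg')
+diff = cv2.absdiff(a, b)
+# A pixel counts as different if any of its channels differ.
+changed = np.count_nonzero(diff.max(axis=2))
+print(changed / (a.shape[0] * a.shape[1]) * 100, '% of pixels differ')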
Anyone knows how I could import these modules into Pycharm?,"Go to the Preferences Tab -> Project Interpreter, there's a + symbol that allows you to view and download packages. From there you should be able to find cluster and pylab and install them to PyCharm's interpreter. After that you can import them and run them in your scripts. +Alternatively, you may switch the project's interpreter to an interpreter that has the packages installed already. This can be done from that same menu.",0.1352210990936997,False,1,5606 +2018-07-14 17:06:41.383,"Multiple Inputs for CNN: images and parameters, how to merge","I use Keras for a CNN and have two types of Inputs: Images of objects, and one or two more parameters describing the object (e.g. weight). How can I train my network with both data sources? Concatenation doesn't seem to work because the inputs have different dimensions. My idea was to concatenate the output of the image analysis and the parameters somehow, before sending it into the dense layers, but I'm not sure how. Or is it possible to merge two classifications in Keras, i.e. classifying the image and the parameter and then merging the classification somehow?","You can use Concatenation layer to merge two inputs. Make sure you're converting multiple inputs into same shape; you can do this by adding additional Dense layer to either of your inputs, so that you can get equal length end layers. Use those same shape outputs in Concatenation layer.",1.2,True,1,5607 +2018-07-14 20:27:44.470,How to analyse the integrity of clustering with no ground truth labels?,"I'm clustering data (trying out multiple algorithms) and trying to evaluate the coherence/integrity of the resulting clusters from each algorithm. I do not have any ground truth labels, which rules out quite a few metrics for analysing the performance. +So far, I've been using Silhouette score as well as calinski harabaz score (from sklearn). With these scores, however, I can only compare the integrity of the clustering if my labels produced from an algorithm propose there to be at minimum, 2 clusters - but some of my algorithms propose that one cluster is the most reliable. +Thus, if you don't have any ground truth labels, how do you assess whether the proposed clustering by an algorithm is better than if all of the data was assigned in just one cluster?","Don't just rely on some heuristic, that someone proposed for a very different problem. +Key to clustering is to carefully consider the problem that you are working on. What is the proper way of proposing the data? How to scale (or not scale)? How to measure the similarity of two records in a way that it quantifies something meaningful for your domain. +It is not about choosing the right algorithm; your task is to do the math that relates your domain problem to what the algorithm does. Don't treat it as a black box. Choosing the approach based on the evaluation step does not work: it is already too late; you probably did some bad decisions already in the preprocessing, used the wrong distance, scaling, and other parameters.",0.0,False,1,5608 +2018-07-15 06:08:43.183,how to run python code in atom in a terminal?,"I'm a beginner in programming and atom so when try to run my python code written in the atom in terminal I don't know how...i tried installing packages like run-in-terminal,platformio-ide-terminal but I don't know how to use them.","Save your Script as a .py file in a directory. +Open the terminal and navigate to the directory containing your script using cd command. 
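+For example, if the script was saved as filename.py inside ~/projects (just an illustrative path), this step would be: cd ~/projects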
+Run python filename.py if you are using python2 +Run python3 filename.py if you are using python3",0.1352210990936997,False,3,5609 +2018-07-15 06:08:43.183,how to run python code in atom in a terminal?,"I'm a beginner in programming and atom so when try to run my python code written in the atom in terminal I don't know how...i tried installing packages like run-in-terminal,platformio-ide-terminal but I don't know how to use them.","""python filename.py"" should run your python code. If you wish to specifically run the program using python 3.6 then it would be ""python3.6 filename.py"".",0.0,False,3,5609 +2018-07-15 06:08:43.183,how to run python code in atom in a terminal?,"I'm a beginner in programming and atom so when try to run my python code written in the atom in terminal I don't know how...i tried installing packages like run-in-terminal,platformio-ide-terminal but I don't know how to use them.","I would not try to do it using extensions. I would use the platformio-ide-terminal and just do it from the command line. Just type: python script_name.py and it should run fine. Be sure you are in the same directory as your python script.",0.1352210990936997,False,3,5609 +2018-07-16 08:18:12.017,How to measure latency in paho-mqtt network,"I'm trying to measure the latency from my publisher to my subscriber in an MQTT network. I was hoping to use the on_message() function to measure how long this trip takes but it's not clear to me whether this callback comes after the broker receives the message or after the subscriber receives it? Also does anyone else have any other suggestion on how to measure latency across the network?","on_message() is called on the subscriber when the message reaches the subscriber. One way to measure latency is to do a loop-back publish in the same client, e.g. set up a client, subscribe to a given topic, publish a message to the topic and record the current (high resolution) timestamp, then when on_message() is called record the time again. It is worth pointing out that this sort of test assumes that both publisher/subscriber will be on similar networks (e.g. not cellular vs gigabit fibre). Also latency will be influenced by the load on the broker and the number of subscribers to a given topic. The other option is to measure latency passively by monitoring the network, assuming you can see all the traffic from one location, as synchronising clocks across monitoring points is very difficult.",0.3869120172231254,False,2,5610 +2018-07-16 08:18:12.017,How to measure latency in paho-mqtt network,"I'm trying to measure the latency from my publisher to my subscriber in an MQTT network. I was hoping to use the on_message() function to measure how long this trip takes but it's not clear to me whether this callback comes after the broker receives the message or after the subscriber receives it? Also does anyone else have any other suggestion on how to measure latency across the network?","I was involved in a similar kind of work where I was supposed to measure the latency in wireless sensor networks. There are different ways to measure the latencies. If the subscriber and client are synchronized: Fill the payload with the time stamp value at the client and transmit this packet to the subscriber. At the subscriber again take the time stamp and take the difference between the time stamp at the subscriber and the timestamp value in the packet. This gives the time taken for the packet to reach the subscriber from the client. If the subscriber and client are not synchronized:
+In this case measurement of latency is little tricky. Assuming the network is symmetrical. + +Start the timer at client before sending the packet to subscriber. +Configure subscriber to echo back the message to client. Stop the +timer at the client take the difference in clock ticks. This time +represents the round trip time you divide it by two to get one +direction latency.",0.5457054096481145,False,2,5610 +2018-07-16 13:07:02.643,Brief explanation on tensorflow object detection working mechanism,"I've searched for working mechanism of tensorflow object detection in google. I've searched how tensorflow train models with dataset. It give me suggestion about how to implement rather than how it works. +Can anyone explain how dataset are trained in fit into models?","You can't ""simply"" understand how Tensorflow works without a good background on Artificial Intelligence and Machine Learning. +I suggest you start working on those topics. Tensorflow will get much easier to understand and to handle after that.",0.0,False,1,5611 +2018-07-16 16:38:23.357,fetch data from 3rd party API - Single Responsibility Principle in Django,"What's the most elegant way to fetch data from an external API if I want to be faithful to the Single Responsibility Principle? Where/when exactly should it be made? +Assuming I've got a POST /foo endpoint which after being called should somehow trigger a call to the external API and fetch/save some data from it in my local DB. +Should I add the call in the view? Or the Model?","I usually add any external API calls into dedicated services.py module (same level as your models.py that you're planning to save results into or common app if any of the existing are not logically related) +Inside that module you can use class called smth like MyExtarnalService and add all needed methods for fetching, posting, removing etc. just like you would do with drf api view. +Also remember to handle exceptions properly (timeouts, connection errors, error response codes) by defining custom error exception classes.",0.0,False,1,5612 +2018-07-16 18:35:21.250,What is the window length of moving average trend in seasonal.seasonal_decompose package?,"I am using seasonal.seasonal_decompose in python. +What is the window length of moving average trend in seasonal.seasonal_decompose package? +Based on my results, I think it is 25. But how can I be sure? how can I change this window length?","I found the answer. The ""freq"" part defines the window of moving average. Still not sure how the program choose the window when we do not declare it.",0.0,False,1,5613 +2018-07-17 10:48:39.477,How to retrain model in graph (.pb)?,"I have model saved in graph (.pb file). But now the model is inaccurate and I would like to develop it. I have pictures of additional data to learn, but I don't if it's possible or if it's how to do it? The result must be the modified of new data pb graph.","It's a good question. Actually it would be nice, if someone could explain how to do this. But in addition i can say you, that it would come to ""catastrophic forgetting"", so it wouldn't work out. You had to train all your data again. +But anyway, i also would like to know that espacially for ssd, just for test reasons.",0.5457054096481145,False,1,5614 +2018-07-17 10:52:00.203,Django - how to send mail 5 days before event?,"I'm Junior Django Dev. Got my first project. Doing quite well but senior dev that teaches me went on vacations.... 
+I have a task in my company to create a function that will remind all people in a specific group, 5 days before an event, by sending mail. There is a TournamentModel that contains a tournament_start_date, for instance '10.08.2018'. A player can join a tournament; when he does, he joins the django group ""Registered"". I have to create a function (job?) that will check tournament_start_date and, if the tournament begins in 5 days, will send emails to all people in the ""Registered"" group... automatically. How can I do this? What should I use? How do I run it so that it checks automatically? I'm learning python/django for a few months... but I meet jobs for the first time ;/ I will appreciate any help.",You can set this mail send function up as a cron job. You can schedule it with crontab or with Celery if your team already uses it.,0.2012947653214861,False,1,5615 +2018-07-19 12:11:04.380,how to change vs code python extension's language?,"My computer's system language is zh_cn, so the vs code python extension set the default language to Chinese. But I want to change the language to English. I can't find the reference in the docs or on the internet. Anyone knows how to do it? Thanks for the help. PS: vs code's locale is already set to english.",When VS Code is open go to the View menu and select Command Palette. Once the command palette is open type display in the box. This should display the message configure display language. Open that and you should be in a locale.json file. The variable locale should be set to en for English.,0.0,False,2,5616 +2018-07-19 12:11:04.380,how to change vs code python extension's language?,"My computer's system language is zh_cn, so the vs code python extension set the default language to Chinese. But I want to change the language to English. I can't find the reference in the docs or on the internet. Anyone knows how to do it? Thanks for the help. PS: vs code's locale is already set to english.","You probably installed other python extensions for VSCode. The Microsoft official python extension will follow the locale setting in user/workspace settings. Try uninstalling other python extensions; you may see it change to English.",0.0,False,2,5616 +2018-07-19 19:12:26.090,Python3 remove multiple hyphenations from a German string,"I'm currently working on a neural network that evaluates students' answers to exam questions. Therefore, preprocessing the corpora for a Word2Vec network is needed. Hyphenation in German texts is quite common. There are mainly two different types of hyphenation: 1) End of line: The text reaches the end of the line so the last word is sepa- rated. 2) Short form of enumeration: in case of two ""elements"": Geistes- und Sozialwissenschaften more ""elements"": Wirtschafts-, Geistes- und Sozialwissenschaften The de-hyphenated form of these enumerations should be: Geisteswissenschaften und Sozialwissenschaften Wirtschaftswissenschaften, Geisteswissenschaften und Sozialwissenschaften I need to remove all hyphenations and put the words back together. I already found several solutions for the first problem. But I have absolutely no clue how to get the second part (in the example above ""wissenschaften"") of the words in the enumeration problem. I don't even know if it is possible at all. I hope that I have pointed out my problem properly. So does anyone have an idea how to solve this problem? Thank you very much in advance!","It's surely possible, as the pattern seems fairly regular. (Something vaguely analogous is sometimes seen in English.
For example: The new requirements applied to under-, over-, and average-performing employees.) +The rule seems to be roughly, ""when you see word-fragments with a trailing hyphen, and then an und, look for known words that begin with the word-fragments, and end the same as the terminal-word-after-und – and replace the word-fragments with the longer words"". +Not being a German speaker and without language-specific knowledge, it wouldn't be possible to know exactly where breaks are appropriate. That is, in your Geistes- und Sozialwissenschaften example, without language-specific knowledge, it's unclear whether the first fragment should become Geisteszialwissenschaften or Geisteswissenschaften or Geistesenschaften or Geiestesaften or any other shared-suffix with Sozialwissenschaften. But if you've got a dictionary of word-fragments, or word-frequency info from other text that uses the same full-length word(s) without this particular enumeration-hyphenation, that could help choose. +(If there's more than one plausible suffix based on known words, this might even be a possible application of word2vec: the best suffix to choose might well be the one that creates a known-word that is closest to the terminal-word in word-vector-space.) +Since this seems a very German-specific issue, I'd try asking in forums specific to German natural-language-processing, or to libraries with specific German support. (Maybe, NLTK or Spacy?) +But also, knowing word2vec, this sort of patch-up may not actually be that important to your end-goals. Training without this logical-reassembly of the intended full words may still let the fragments achieve useful vectors, and the corresponding full words may achieve useful vectors from other usages. The fragments may wind up close enough to the full compound words that they're ""good enough"" for whatever your next regression/classifier step does. So if this seems a blocker, don't be afraid to just try ignoring it as a non-problem. (Then if you later find an adequate de-hyphenation approach, you can test whether it really helped or not.)",0.3869120172231254,False,1,5617 +2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? +I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) +No matching distribution found for tensorflow-gpu"" error +I tried installing using pip and anaconda, both don't work for me. + +Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.","Not Enabling the Long Paths can be the potential problem.To solve that, +Steps include: + +Go to Registry Editor on the Windows Laptop + +Find the key ""HKEY_LOCAL_MACHINE""->""SYSTEM""->""CurrentControlSet""-> +""File System""->""LongPathsEnabled"" then double click on that option and change the value from 0 to 1. + + +3.Now try to install the tensorflow it will work.",0.0,False,5,5618 +2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? +I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) +No matching distribution found for tensorflow-gpu"" error +I tried installing using pip and anaconda, both don't work for me. 
+ +Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.","Actually the easiest way to install tensorflow is: +install python 3.5 (not 3.6 or 3.7) you can check wich version you have by typing ""python"" in the cmd. +When you install it check in the options that you install pip with it and you add it to variables environnement. +When its done just go into the cmd and tipe ""pip install tensorflow"" +It will download tensorflow automatically. +If you want to check that it's been installed type ""python"" in the cmd then some that "">>>"" will appear, then you write ""import tensorflow"" and if there's no error, you've done it!",0.0,False,5,5618 +2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? +I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) +No matching distribution found for tensorflow-gpu"" error +I tried installing using pip and anaconda, both don't work for me. + +Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.","As of July 2019, I have installed it on python 3.7.3 using py -3 -m pip install tensorflow-gpu +py -3 in my installation selects the version 3.7.3. +The installation can also fail if the python installation is not 64 bit. Install a 64 bit version first.",0.0,False,5,5618 +2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? +I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) +No matching distribution found for tensorflow-gpu"" error +I tried installing using pip and anaconda, both don't work for me. + +Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.","You mentioned Anaconda. Do you run your python through there? +If so check in Anaconda Navigator --> Environments, if your current environment have got tensorflow installed. +If not, install tensorflow and run from that environment. +Should work.",0.0,False,5,5618 +2018-07-20 10:26:17.870,Can't install tensorflow with pip or anaconda,"Does anyone know how to properly install tensorflow on Windows? +I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same ""Could not find a version that satisfies the requirement tensorflow-gpu (from versions: ) +No matching distribution found for tensorflow-gpu"" error +I tried installing using pip and anaconda, both don't work for me. + +Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.",Tensorflow or Tensorflow-gpu is supported only for 3.5.X versions of Python. Try installing with any Python 3.5.X version. This should fix your problem.,1.2,True,5,5618 +2018-07-21 10:12:24.710,Chatterbot dynamic training,"I'm using chatter bot for implementing chat bot. I want Chatterbot to training the data set dynamically. +Whenever I run my code it should train itself from the beginning, because I require new data for every person who'll chat with my bot. 
+So how can I achieve this in python3 and on windows platform ? +what I want to achieve and problem I'm facing: +I've a python program which will create a text file student_record.txt, this will be generate from a data base and almost new when different student signup or login. In the chatter bot, I trained the bot using with giving this file name but it still replay from the previous trained data","I got the solution for that, I just deleted the data base on the beginning of the program thus new data base will create during the execution of the program. + I used the following command to delete the data base +import os + os.remove(""database_name"") +in my case +import os + os.remove(""db.sqlite3"") +thank you",0.0,False,1,5619 +2018-07-21 11:51:55.627,How do I use Google Cloud API's via Anaconda Spyder?,"I am pretty new to Python in general and recently started messing with the Google Cloud environment, specifically with the Natural Language API. +One thing that I just cant grasp is how do I make use of this environment, running scripts that use this API or any API from my local PC in this case my Anaconda Spyder environment? +I have my project setup, but from there I am not exactly sure, which steps are necessary. Do I have to include the authentication somehow in the Script inside Spyder? +Some insights would be really helpful.",First install the API by pip install or conda install in the scripts directory of anaconda and then simply import it into your code and start coding.,-0.2012947653214861,False,1,5620 +2018-07-21 16:20:50.893,How to open/create images in Python without using external modules,"I have a python script which opens an image file (.png or .ppm) using OpenCV, then loads all the RGB values into a multidimensional Python array (or list), performs some pixel by pixel calculations solely on the Python array (OpenCV is not used at all for this stage), then uses the newly created array (containing new RGB values) to write a new image file (.png here) using OpenCV again. Numpy is not used at all in this script. The program works fine. +The question is how to do this without using any external libraries, regardless whether they are for image processing or not (e.g. OpenCV, Numpy, Scipy, Pillow etc.). To summarize, I need to use bare bones Python's internal modules to: 1. open image and read the RGB values and 2. write a new image from pre-calculated RGB values. I will use Pypy instead of CPython for this purpose, to speed things up. +Note: I use Windows 10, if that matters.","Working with bare-bones .ppm files is trivial: you have three lines of text (P6, ""width height"", 255), and then you have the 3*width*height bytes of RGB. As long as you don't need more complicated variants of the .ppm format, you can write a loader and a saver in 5 lines of code each.",0.1016881243684853,False,1,5621 +2018-07-22 01:51:12.200,How run my code in spyder as i used to run it in linux terminal,"Apologies if my question is stupid. +I am a newbie is all aspects. +I used to run my python code straight from the terminal in Linux Ubuntu, +e.g. I just open the terminal go to my folder and run my command in my Linux terminal +CUDA_VISIBLE_DEVICES=0 python trainval_net.py --dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda +now im trying to use Spyder. +So for the same project i have a folder with bunch of functions/folders/stuff inside it. +So i just open that main folder as a new project, then i have noo idea how i can run my code... 
+There is a console in the right side of spyder which looks like Ipython and i can do stuff in there, but i cannot run the code that i run in terminal there. +In iphython or jupyther i used to usee ! at the begining of the command but here when i do it (e.g. !CUDA_VISIBLE_DEVICES=0 python trainval_net.py --dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda) it does not even know the modules and throw errors (e.g. ImportError: No module named numpy`) +Can anyone tell me how should i run my code here in Spyder +Thank you in advance! :)","Okay I figured it out. +I need to go to run->configure per file and in the command line options put the configuration (--dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda)",0.0,False,1,5622 +2018-07-22 04:44:09.413,How to use Midiutil to add multiple notes in one timespot (or how to add chords),I am using Midiutil to recreate a modified Bach contrapuntist melody and I am having difficulty finding a method for creating chords using Midiutil in python. Does anyone know a way to create chords using Midiuitl or if there is a way to create chords.,"A chord consists of multiple notes. +Just add multiple notes with the same timestamp.",1.2,True,1,5623 +2018-07-22 16:11:22.640,"PyCharm, stop the console from clearing every time you run the program","So I have just switched over from Spyder to PyCharm. In Spyder, each time you run the program, the console just gets added to, not cleared. This was very useful because I could look through the console to see how my changes to the code were changing the outputs of the program (obviously the console had a maximum length so stuff would get cleared eventually) +However in PyCharm each time I run the program the console is cleared. Surely there must be a way to change this, but I can't find the setting. Thanks.","In Spyder the output is there because you are running iPython. +In PyCharm you can get the same by pressing on View -> Scientific Mode. +Then every time you run you see a the new output and the history there.",0.3869120172231254,False,1,5624 +2018-07-23 00:44:09.343,dateutil 2.5.0 is the minimum required version,"I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions: + +Canopy version 2.1.3.3542 (64 bit) +jupyter version 1.0.0-25 +pandas version 0.23.1-1 +python_dateutil version 2.6.0-1 + +I'm not getting this complaint when I run with the Canopy Editor so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","Installed Canopy version 2.1.9. The downloaded version worked without updating any of the packages called out by the Canopy Package Manager. Updated all the packages, but then the ""import pandas as pd"" failed when using the jupyter notebook. Downgraded the notebook package from 4.4.1-5 to 4.4.1-4 which cascaded to 35 additional package downgrades. Retested the import of pandas and the issue seems to have disappeared.",0.0,False,3,5625 +2018-07-23 00:44:09.343,dateutil 2.5.0 is the minimum required version,"I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). 
When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions: + +Canopy version 2.1.3.3542 (64 bit) +jupyter version 1.0.0-25 +pandas version 0.23.1-1 +python_dateutil version 2.6.0-1 + +I'm not getting this complaint when I run with the Canopy Editor so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","I had this same issue using the newest pandas version - downgrading to pandas 0.22.0 fixes the problem. +pip install pandas==0.22.0",0.2401167094949473,False,3,5625 +2018-07-23 00:44:09.343,dateutil 2.5.0 is the minimum required version,"I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions: + +Canopy version 2.1.3.3542 (64 bit) +jupyter version 1.0.0-25 +pandas version 0.23.1-1 +python_dateutil version 2.6.0-1 + +I'm not getting this complaint when I run with the Canopy Editor so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.","The issue is with the pandas lib +downgrade using the command below +pip install pandas==0.22.0",0.0,False,3,5625 +2018-07-23 17:57:30.150,CNN image extraction to predict a continuous value,"I have images of vehicles . I need to predict the price of the vehicle based on image extraction. +What I have learnt is , I can use CNN to extract the image features but what I am not able to get is, How to predict the prices of vehicles. +I know that the I need to train my CNN model before it predicts the price. +I don't know how to train the model with images along with prices . +In the end what I expect is , I will input an vehicle image and I need to get price of the vehicle. +Can any one provide the approach for this ?","I would use the CNN to predict the model of the car and then using a list of all the car prices it's easy enough to get the price, or if you dont care about the car model just use the prices as lables",0.0,False,1,5626 +2018-07-24 11:59:30.057,How can I handle Pepper robot shutdown event?,I need to handle the event when the shutdown process is started(for example with long press the robot's chest button or when the battery is critically low). The problem is that I didn't find a way to handle the shutdown/poweroff event. Do you have any idea how this can be done in some convenient way?,"Unfortunately this won't be possible as when you trigger a shutdown naoqi will exit as well and destroy your service. +If you are coding in c++ you could use a destructor, but there is no proper equivalent for python... +An alternative would be to execute some code when your script exits whatever the reason. For this you can start your script as a service and wait for ""the end"" using qiApplication.run(). This method will simply block until naoqi asks your service to exit. 
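+For illustration, a minimal sketch of that pattern (send_goodbye() is a placeholder for whatever cleanup you need, and this assumes the Python qi SDK is available):
import qi
import sys
app = qi.Application(sys.argv)
app.start()
app.run()  # blocks here until naoqi asks this service to exit
send_goodbye()  # placeholder: runs once run() returns, i.e. while the robot shuts down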
+Note: in case of shutdown, all services are being killed, so you cannot run any command from the robot API (as they are probably not available anymore!)",1.2,True,1,5627 +2018-07-24 16:25:19.637,Python - pandas / openpyxl: Tips on Automating Reports (Moving Away from VBA).,"I currently have macros set up to automate all my reports. However, some of my macros can take up to 5-10 minutes due to the size of my data. +I have been moving away from Excel/VBA to Python/pandas for data analysis and manipulation. I still use excel for data visualization (i.e., pivot tables). +I would like to know how other people use python to automate their reports? What do you guys do? Any tips on how I can start the process? +Majority of my macros do the following actions - + +Import text file(s) +Paste the raw data into a table that's linked to pivot tables / charts. +Refresh workbook +Save as new","When using python to automate reports I fully converted the report from Excel to Pandas. I use pd.read_csv or pd.read_excel to read in the data, and export the fully formatted pivot tables into excel for viewing. doing the 'paste into a table and refresh' is not handled well by python in my experience, and will likely still need macros to handle properly ie, export a csv with the formatted data from python then run a short macro to copy and paste. +if you have any more specific questions please ask, i have done a decent bit of this",0.0,False,1,5628 +2018-07-24 19:41:53.300,How to make RNN time-forecast multiple days using Keras?,"I am currently working on a program that would take the previous 4000 days of stock data about a particular stock and predict the next 90 days of performance. +The way I've elected to do this is with an RNN that makes use of LSTM layers to use the previous 90 days to predict the next day's performance (when training, the previous 90 days are the x-values and the next day is used as the y-value). What I would like to do however, is use the previous 90-180 days to predict all the values for the next 90 days. However, I am unsure of how to implement this in Keras as all the examples I have seen only predict the next day and then they may loop that prediction into the next day's 90 day x-values. +Is there any ways to just use the previous 180 days to predict the next 90? Or is the LSTM restricted to only predicting the next day?","I don't have the rep to comment, but I'll say here that I've toyed with a similar task. One could use a sliding window approach for 90 days (I used 30, since 90 is pushing LSTM limits), then predict the price appreciation for next month (so your prediction is for a single value). @Digital-Thinking is generally right though, you shouldn't expect great performance.",0.0,False,1,5629 +2018-07-24 21:28:16.190,How do you setup script RELOAD/RESTART upon file changes using bash?,"I have a Python Kafka worker run by a bash script in a Docker image inside a docker-compose setup that I need to reload and restart whenever a file in its directory changes, as I edit the code. Does anyone know how to accomplish this for a bash script? +Please don't merge this with the several answers about running a script whenever a file in a directory changes. I've seen other answers regarding this, but I can't find a way to run a script once, and then stop, reload and re-run it if any files change. +Thanks!","My suggestion is to let docker start a wrapper script that simply starts the real script in the background. 
+Then in an infinite loop: + +using inotifywait the wrapper waits for the appropriate change +then kills/stop/reload/... the child process +starts a new one in the background again.",1.2,True,1,5630 +2018-07-25 09:28:59.487,Creating an exe file for windows using mac for my Kivy app,"I've created a kivy app that works perfectly as I desire. It's got a few files in a particular folder that it uses. For the life of me, I don't understand how to create an exe on mac. I know I can use pyinstaller but how do I create an exe from mac. +Please help!","For pyinstaller, they have stated that packaging Windows binaries while running under OS X is NOT supported, and recommended to use Wine for this. + + +Can I package Windows binaries while running under Linux? + +No, this is not supported. Please use Wine for this, PyInstaller runs + fine in Wine. You may also want to have a look at this thread in the + mailinglist. In version 1.4 we had build in some support for this, but + it showed to work only half. It would require some Windows system on + another partition and would only work for pure Python programs. As + soon as you want a decent GUI (gtk, qt, wx), you would need to install + Windows libraries anyhow. So it's much easier to just use Wine. + +Can I package Windows binaries while running under OS X? + +No, this is not supported. Please try Wine for this. + +Can I package OS X binaries while running under Linux? + +This is currently not possible at all. Sorry! If you want to help out, + you are very welcome.",0.2012947653214861,False,2,5631 +2018-07-25 09:28:59.487,Creating an exe file for windows using mac for my Kivy app,"I've created a kivy app that works perfectly as I desire. It's got a few files in a particular folder that it uses. For the life of me, I don't understand how to create an exe on mac. I know I can use pyinstaller but how do I create an exe from mac. +Please help!","This is easy with Pyinstaller. I've used it recently. +Install pyinstaller + +pip install pyinstaller + +Hit following command on terminal where file.py is path to your main file + +pyinstaller -w -F file.py + +Your exe will be created inside a folder dist +NOTE : verified on windowns, not on mac",-0.3869120172231254,False,2,5631 +2018-07-25 12:50:07.533,Python Redis on Heroku reached max clients,"I am writing a server with multiple gunicorn workers and want to let them all have access to a specific variable. I'm using Redis to do this(it's in RAM, so it's fast, right?) but every GET or SET request adds another client. I'm performing maybe ~150 requests per second, so it quickly reaches the 25 connection limit that Heroku has. To access the database, I'm using db = redis.from_url(os.environ.get(""REDIS_URL"")) and then db.set() and db.get(). Is there a way to lower that number? For instance, by using the same connection over and over again for each worker? But how would I do that? The 3 gunicorn workers I have are performing around 50 queries each per second. +If using redis is a bad idea(which it probably is), it would be great if you could suggest alternatives, but also please include a way to fix my current problem as most of my code is based off of it and I don't have enough time to rewrite the whole thing yet. +Note: The three pieces of code are the only times redis and db are called. I didn't do any configuration or anything. Maybe that info will help.","Most likely, your script creates a new connection for each request. +But each worker should create it once and use forever. +Which framework are you using? 
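+Whatever the framework, the key point is to create the client once per worker at import time instead of once per request. A rough sketch, with REDIS_URL as in your code and max_connections just an illustrative cap:
import os
import redis
# module level: runs once per gunicorn worker and is then reused by every request
db = redis.from_url(os.environ.get('REDIS_URL'), max_connections=2)
# request handlers call db.get(...) / db.set(...) on this one shared client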
+It should have some documentation about how to configure Redis for your webapp. +P.S. Redis is a good choice to handle that :)",0.0,False,1,5632 +2018-07-25 18:37:23.550,Async HTTP server with scrapy and mongodb in python,"I am basically trying to start an HTTP server which will respond with content from a website which I can crawl using Scrapy. In order to start crawling the website I need to login to it and to do so I need to access a DB with credentials and such. The main issue here is that I need everything to be fully asynchronous and so far I am struggling to find a combination that will make everything work properly without many sloppy implementations. +I already got Klein + Scrapy working but when I get to implementing DB accesses I get all messed up in my head. Is there any way to make PyMongo asynchronous with twisted or something (yes, I have seen TxMongo but the documentation is quite bad and I would like to avoid it. I have also found an implementation with adbapi but I would like something more similar to PyMongo). +Trying to think things through the other way around I'm sure aiohttp has many more options to implement async db accesses and stuff but then I find myself at an impasse with Scrapy integration. +I have seen things like scrapa, scrapyd and ScrapyRT but those don't really work for me. Are there any other options? +Finally, if nothing works, I'll just use aiohttp and instead of Scrapy I'll do the requests to the websito to scrap manually and use beautifulsoup or something like that to get the info I need from the response. Any advice on how to proceed down that road? +Thanks for your attention, I'm quite a noob in this area so I don't know if I'm making complete sense. Regardless, any help will be appreciated :)","Is there any way to make pymongo asynchronous with twisted + +No. pymongo is designed as a synchronous library, and there is no way you can make it asynchronous without basically rewriting it (you could use threads or processes, but that is not what you asked, also you can run into issues with thread-safeness of the code). + +Trying to think things through the other way around I'm sure aiohttp has many more options to implement async db accesses and stuff + +It doesn't. aiohttp is a http library - it can do http asynchronously and that is all, it has nothing to help you access databases. You'd have to basically rewrite pymongo on top of it. + +Finally, if nothing works, I'll just use aiohttp and instead of scrapy I'll do the requests to the websito to scrap manually and use beautifulsoup or something like that to get the info I need from the response. + +That means lots of work for not using scrapy, and it won't help you with the pymongo issue - you still have to rewrite pymongo! +My suggestion is - learn txmongo! If you can't and want to rewrite it, use twisted.web to write it instead of aiohttp since then you can continue using scrapy!",1.2,True,1,5633 +2018-07-25 21:15:26.713,Python: How to plot an array of y values for one x value in python,"I am trying to plot an array of temperatures for different location during one day in python and want it to be graphed in the format (time, temperature_array). I am using matplotlib and currently only know how to graph 1 y value for an x value. 
+The temperature code looks like this: Temperatures = [[Temp_array0] [Temp_array1] [Temp_array2]...], where each numbered array corresponds to that time and the temperature values in the array are at different latitudes and longitudes.","You can simply repeat the X value that is common to the y values. Suppose [x,x,x,x],[y1,y2,y3,y4]",0.0,False,1,5634 +2018-07-26 21:21:24.690,Triggering email out of Spotfire based on conditions,"Does anyone have experience with triggering an email from Spotfire based on a condition? Say, a sales figure falls below a certain threshold and an email gets sent to the appropriate distribution list. I want to know how involved this would be to do. I know that it can be done using an IronPython script, but I'm curious if it can be done based on conditions rather than me hitting ""run""?","We actually have a product that does exactly this called the Spotfire Alerting Tool. It functions off of Automation Services and allows you to configure various thresholds for any metrics in the analysis, and then can notify users via email or even SMS. Of course there is the possibility of coding this yourself (the tool is simply an extension developed using the Spotfire SDK) but I can't comment on how to code it. The best way to get this tool is probably to check with your TIBCO sales rep. If you'd like I can try to reach him on your behalf, but I'll need a bit more info from you. Please contact me at nmaresco@tibco.com. I hope this kind of answer is okay on SO. I don't have a way to reach you privately and this is the best answer I know how to give :)",0.3869120172231254,False,1,5635 +2018-07-27 00:49:39.630,"Scipy interp2d function produces z = f(x,y), I would like to solve for x","I am using the 2d interpolation function in scipy to smooth a 2d image. As I understand it, interpolate will return z = f(x,y). What I want to do is find x with known values of y and z. I tried something like this: f = interp2d(x,y,z) index = (np.abs(f(:,y) - z)).argmin() However the interp2d object does not work that way. Any ideas on how to do this?","I was able to figure this out. yvalue, zvalue, xmin, and xmax are known values. By creating a linspace out of the possible values x can take on, a list can be created with all of the corresponding function values. Then using argmin() we can find the closest value in the list to the known z value. f = interp2d(x,y,z) xnew = numpy.linspace(xmin, xmax) fnew = f(xnew, yvalue) xindex = (numpy.abs(fnew - zvalue)).argmin() xvalue = xnew[xindex]",0.0,False,1,5636 +2018-07-27 04:42:13.823,"How to set a start solution in Gurobi, when only the objective function is known?","I have a minimization problem that is modeled to be solved in Gurobi, via python. Besides, I can calculate a ""good"" initial solution for the problem separately, which can be used as an upper bound for the problem. What I want to do is make Gurobi use this upper bound, to enhance its efficiency, in case this upper bound can help Gurobi in its search. The point is that I just have the objective value, but not a complete solution. Can anybody tell me how to set this upper bound in Gurobi? Thanks.","I think that if you can calculate a good solution, you can also derive some bounds for your variables, even if you don't have the exact solution?",0.0,False,1,5637 +2018-07-28 15:56:50.503,Many to many relationship SQLite (studio or sql),"Hello. It seems to me that I just don't understand something quite obvious in databases.
+So, we have an author that write books and have books themselves. One author can write many books as well as one book could be written by many authors. +Thus, we have two tables 'Books' and 'Authors'. +In 'Authors' I have an 'ID'(Primary key) and 'Name', for example: +1 - L.Carrol +2 - D.Brown +In 'Books' - 'ID' (pr.key), 'Name' and 'Authors' (and this column is foreign key to the 'Authors' table ID) +1 - Some_name - 2 (L.Carol) +2 - Another_name - 2,1 (D.Brown, L.Carol) +And here is my stumbling block, cause i don't understand how to provide the possibility to choose several values from 'Authors' table to one column in 'Books' table.But this must be so simple, isn't it? +I've red about many-to-many relationship, saw many examples with added extra table to implement that, but still don't understand how to store multiple values from one table in the other's table column. Please, explain the logic, how should I do something like that ? I use SQLiteStudio but clear sql is appropriate too. Help ^(","You should have third intermediate table which will have following columns: + +id (primary) +author id (from Authors table) +book id (from Books table) + +This way you will be able to create a record which will map 1 author to 1 book. So you can have following records: + +1 ... Author1ID ... Book1ID +2 ... Author1ID ... Book2ID +3 ... Author2ID ... Book2ID + +AuthorXID and BookXID - foreign keys from corresponding tables. +So Book2 has 2 authors, Author1 has 2 books. +Also separate tables for Books and Authors don't need to contain any info about anything except itself. +Authors .. 1---Many .. BOOKSFORAUTHORS .. Many---1 .. Books",1.2,True,1,5638 +2018-07-28 23:43:19.713,Screen up time in desktop,"I might be sounding like a noob while asking this question but I really want to know how can I get the time from when my screen is on. Not the system up time but the screen up time. I want to use this time in a python app. So please tell me if there is any way to get that. Thanks in advance. +Edit- I want to get the time from when the display is black due to no activity and we move mouse or press a key and screen comes up, the display is up, the user is able to read and/or able to edit a document or play games. +OS is windows .","In Mac OS ioreg might have the information you're looking for. +ioreg -n IODisplayWrangler -r IODisplayWrangler -w 0 | grep IOPowerManagement",0.0,False,1,5639 +2018-07-29 11:14:44.810,Django Queryset find data between date,"I don't know what title should be, I just got stuck and need to ask. +I have a model called shift +and imagine the db_table like this: + +#table shift ++---------------+---------------+---------------+---------------+------------+------------+ +| start | end | off_start | off_end | time | user_id | ++---------------+---------------+---------------+---------------+------------+------------+ +| 2018-01-01 | 2018-01-05 | 2018-01-06 | 2018-01-07 | 07:00 | 1 | +| 2018-01-08 | 2018-01-14 | 2018-01-15 | Null | 12:00 | 1 | +| 2018-01-16 | 2018-01-20 | 2018-01-21 | 2018-01-22 | 18:00 | 1 | +| 2018-01-23 | 2018-01-27 | 2018-01-28 | 2018-01-31 | 24:00 | 1 | +| .... | .... | .... | .... | .... | .... | ++---------------+---------------+---------------+---------------+------------+------------+ + +if I use queryset with filter like start=2018-01-01 result will 07:00 +but how to get result 12:00 if I Input 2018-01-10 ?... 
+thank you!","Question isnt too clear, but maybe you're after something like +start__lte=2018-01-10, end__gte=2018-01-10?",1.2,True,1,5640 +2018-07-31 16:24:41.370,cannot run jupyter notebook from anaconda but able to run it from python,"After installing Anaconda to C:\ I cannot open jupyter notebook. Both in the Anaconda Prompt with jupyter notebook and inside the navigator. I just can't make it to work. It doesn't appear any line when I type jupyter notebook iniside the prompt. Neither does the navigator work. Then after that I reinstall Anaconda, didn't work either. +But then I try to reinstall jupyter notebook dependently using python -m install jupyter and then run python -m jupyter. It works and connect to the localhost:8888. So my question is that how can I make Jupyter works from Anaconda +Also note that my anaconda is not in the environment variable( or %PATH% ) and I have tried reinstalling pyzmq and it didn't solve the problem. I'm using Python 3.7 and 3.6.5 in Anaconda +Moreover, the spyder works perfectly","You need to activate the anaconda environment first. +In terminal: source activate environment_name, (or activate environment_name on windows?) +then jupyter notebook +If you don't know the env name, do conda list +to restore the default python environment: source deactivate",1.2,True,1,5641 +2018-07-31 16:30:46.247,Handling Error for Continuous Features in a Content-Based Filtering Recommender System,"I've got a content-based recommender that works... fine. I was fairly certain it was the right approach to take for this problem (matching established ""users"" with ""items"" that are virtually always new, but contain known features similar to existing items). +As I was researching, I found that virtually all examples of content-based filtering use articles/movies as an example and look exclusively at using encoded tf-idf features from blocks of text. That wasn't exactly what I was dealing with, but most of my features were boolean features, so making a similar vector and looking at cosine distance was not particularly difficult. I also had one continuous feature, which I scaled and included in the vector. As I said, it seemed to work, but was pretty iffy, and I think I know part of the reason why... +The continuous feature that I'm using is a rating (let's call this ""deliciousness""), where, in virtually all cases, a better score would indicate an item more favorable for the user. It's continuous, but it also has a clear ""direction"" (not sure if this is the correct terminology). Error in one direction is not the same as error in another. +I have cases where some users have given high ratings to items with mediocre ""deliciousness"" scores, but logically they would still prefer something that was more delicious. That user's vector might have an average deliciousness of 2.3. My understanding of cosine distance is that in my model, if that user encountered two new items that were exactly the same except that one had a deliciousness of 1.0 and the other had a deliciousness of 4.5, it would actually favor the former because it's a shorter distance between vectors. +How do I modify or incorporate some other kind of distance measure here that takes into account that deliciousness error/distance in one direction is not the same as error/distance in the other direction? +(As a secondary question, how do I decide how to best scale this continuous feature next to my boolean features?)","There are two basic approaches to solve this: +(1) Write your own distance function. 
The obvious approach is to remove the deliciousness element from each vector, evaluating that difference independently. Use cosine similarity on the rest of the vector. Combine that figure with the taste differential as desired. (2) Transform your deliciousness data such that the resulting metric is linear. This will allow a ""normal"" distance metric to do its job as expected.",1.2,True,1,5642 +2018-07-31 22:16:11.853,How do I get Mac 10.13 to install modules into a 3.x install instead of 2.7,"I'm trying to learn python practically. I installed PIP via easy_install and then I wanted to play with some mp3 files so I installed eyed3 via pip while in the project directory. The issue is that it installed the module into python 2.7 which comes standard with Mac. I found this out as it keeps telling me so when a script does not run due to missing libraries like libmagic, and no matter what I do, it keeps putting any libraries I install into 2.7, thus not being found when running python3. My question is how do I get my system to pretty much ignore the 2.7 install and use the 3.7 install which I have. I keep thinking I am doing something wrong as heaps of tutorials breeze over it and only one has so far mentioned that you get clashes between the versions. I really want to learn python and would appreciate some help getting past this blockage.","Have you tried pip3 install [module-name]? Then you should be able to check which modules you've installed using pip3 freeze.",0.0,False,1,5643 +2018-08-01 06:16:42.720,Any way to save format when importing an excel file in Python?,"I'm doing some work on the data in an excel sheet using python pandas. When I write and save the data it seems that pandas only saves and cares about the raw data on the import. Meaning a lot of stuff I really want to keep, such as cell colouring, font size, borders, etc, gets lost. Does anyone know of a way to make pandas save such things? From what I've read so far it doesn't appear to be possible. The best solution I've found so far is to use xlsxwriter to format the file in my code before exporting. This seems like a very tedious task that will involve a lot of testing to figure out how to achieve the various formats and aesthetic changes I need. I haven't found anything, but would said writer happen to be able, in any way, to save the sheet format upon import? Alternatively, what would you suggest I do to solve the problem that I have described?",Separate data from formatting. Have a sheet that contains only the data – that's the one you will be reading/writing to – and another that has formatting and reads the data from the first sheet.,0.0,False,1,5644 +2018-08-01 10:39:07.337,How backing file works in qcow2?,"qcow2 is an image format for qemu and it's good for emulation. I know how to write data in the qcow2 format, but I don't know how backing files in qcow2 work. I found no tutorial that explains this. Can anyone give me tips?","A backing file is an external snapshot for a qcow2 image, and qemu will write COW data into the new image. For example: You have images A and B, and A is the backing file of B. When you mount B to /dev/nbd and check its data, you'll find you can see the data of A. That's because if there's no data in the range of B, qemu will read the same range of A.
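+For example, such a pair can be created with qemu-img (A.qcow2 and B.qcow2 are just placeholder names): qemu-img create -f qcow2 -b A.qcow2 B.qcow2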
+An important note: If qemu doesn't find A, you won't be able to mount B on /dev/nbd.",0.3869120172231254,False,1,5645 +2018-08-02 13:30:37.763,how to download many pdf files from google at once using python?,I want to download approximately 50 pdf files from the Internet using a python script. Can Google APIs help me anyhow?,"I am going to assume that you are downloading from Google Drive. You can only download one file at a time; you can't batch-download the actual files themselves. You could look into some kind of multithreading system and download the files at the same time that way, but you may run into quota issues.",0.0,False,1,5646 +2018-08-03 12:50:35.807,how to use coverage run --source = {dir_name},"I have certain files in a directory named benchmarks and I want to get code coverage by running these source files. I have tried using the source flag in the following ways but it doesn't work: coverage3 run --source=benchmarks coverage3 run --source=benchmarks/ On running, I always get Nothing to do. Thanks","coverage run is like python. If you would run a file with python myprog.py, then you can use coverage run myprog.py.",1.2,True,1,5647 +2018-08-04 18:15:06.493,Discord.py get message embed,"How can I get the embed of a message into a variable with the ID of the message in discord.py? I get the message with uzenet = await client.get_message(channel, id), but I don't know how to get its embed.","To get the first Embed of your message, as you said that would be a dict(): embedFromMessage = uzenet.embeds[0] To transfer the dict() into a discord.Embed object: embed = discord.Embed.from_data(embedFromMessage)",1.2,True,1,5648 +2018-08-04 22:59:50.310,How to use Windows credentials to connect remote desktop,"In my Python script I want to connect to a remote server every time. So how can I use my Windows credentials to connect to the server without typing a user ID and password? By default it should read the userid/password from the local system and connect to the remote server. I tried with getuser() and getpass() but I have to enter the password every time. I don't want to enter the password; it should be taken automatically from the local system. Any suggestions..",I am sorry this is not exactly an answer but I have looked on the web and I do not think you can write code to automatically open Remote Desktop without you having to enter the credentials; but can you please edit the question so that I can see the code?,0.0,False,1,5649 +2018-08-07 05:02:59.393,On project task created do not send email,"By default subscribers get email messages once a new task in a project is created. How can it be tailored so that unless the project has the checkbox ""Send e-mail on new task"" checked, it will not send e-mails on a new task? I know how to add a custom field to the project.project model. But I don't know the next step. What action do I override to not send the email when a new task is created and ""Send e-mail on new task"" is not checked for the project?","I found that if the project has the notifications option ""Visible by following customers"" enabled, then one can configure the subscription for each follower. To not receive e-mails when a new task is added to the project: unmark the checkbox ""Task opened"" in the ""Edit subscription of User"" form.",1.2,True,1,5650 +2018-08-08 05:01:22.287,How can I pack python into my project?,"I am making a program that will call python.
I would like to add python to my project so users don't have to download python in order to use it; it will also be better to use the python that my program ships with, so users don't have to download any dependency. +My program is going to be written in C++ (but it could be any language) and I guess I have to call the python that is in the same path as my project? +Let's say that the system where the user is running already has python and he/she calls 'pip'; I want the program to call the pip provided by the python shipped with my program and install into the program directory instead of the system's python. +Is that possible? If it is, how can I do it? +Real examples: +There are programs that offer a terminal where you can execute python to do things in the program, like: + +Maya by Autodesk +Nuke by The Foundry +Houdini by Side Effects + +Note: It has to be a cross-platform solution","In order to run python code, the runtime is sufficient. Under Windows, you can use py2exe to pack your program code together with the python runtime and all necessary dependencies. But pip cannot be used, and it makes no sense, as you don't want to develop, but only use the python part. +To distribute the complete python installation, like Panda3D does, you'll have to include it in the chosen installer software.",0.1352210990936997,False,1,5651 +2018-08-08 06:15:54.700,Python app to organise ideas by tags,"Please give me a hint about how best to code a Python application which helps to organise ideas by tags. +Add a new idea: +Input 1: the idea +Input 2: corresponding tags +Search for the idea: +Input 1: one or multiple tags +As far as I understood, it's necessary to create an array with ideas and an array with tags. But how to connect them? For example, idea number 3 corresponds to tags number 1 and 2. So the question is: how to link these two arrays in the most simple and elegant way?","Have two dictionaries: + +Idea -> Set of Tags +Tag -> Set of Ideas + +When you add a new idea, add it to the first dictionary, and then update all the sets of the tags it uses in the second dictionary. This way you get easy lookup by both tag and idea.",0.0,False,1,5652 +2018-08-08 13:54:35.003,Does ImageDataGenerator add more images to my dataset?,"I'm trying to do image classification with the Inception V3 model. Does ImageDataGenerator from Keras create new images which are added onto my dataset? If I have 1000 images, will using this function double it to 2000 images which are used for training? Is there a way to know how many images were created and are now fed into the model?","Let me try and tell you in the easiest way possible with the help of an example. +For example: + +you have a set of 500 images +you applied the ImageDataGenerator to the dataset with batch_size = 25 +now you run your model for, let's say, 5 epochs with +steps_per_epoch=total_samples/batch_size +so steps_per_epoch will be equal to 20 +now your model will run on all 500 images (randomly transformed according to instructions provided to ImageDataGenerator) in each epoch",0.0,False,2,5653 +2018-08-08 13:54:35.003,Does ImageDataGenerator add more images to my dataset?,"I'm trying to do image classification with the Inception V3 model. Does ImageDataGenerator from Keras create new images which are added onto my dataset? If I have 1000 images, will using this function double it to 2000 images which are used for training? 
Is there a way to know how many images were created and are now fed into the model?","Also note that these augmented images are not stored in memory; they are generated on the fly while training and lost after training. You can't read those augmented images again. +Not storing those images is a good idea because we'd run out of memory very soon storing huge numbers of images",0.1160922760327606,False,2,5653 +2018-08-09 09:03:04.903,Can I use JetBrains MPS in a web application?,"I am developing a small web application with Flask. This application needs a DSL, which can express the content of .pdf files. +I have developed a DSL with JetBrains MPS but now I'm not sure how to use it in my web application. Is it possible? Or should I consider switching to another DSL or making my DSL directly in Python.","If you want to use MPS in the web frontend the simple answer is: no. +Since MPS is a projectional editor it needs a projection engine so that the user can interact with the program/model. The projection engine of MPS is built in Java for desktop applications. There have been some efforts to put MPS on the web and build a JavaScript/HTML projection engine, but none of the work is complete. So unless you would build something like that, there is no way to use MPS in the frontend. +If your DSL is textual anyway and doesn't leverage the projectional nature of MPS, I would go down the text DSL road with specialised tooling for that, e.g. python as you suggested, or Xtext.",1.2,True,1,5654 +2018-08-09 10:03:54.250,"How to solve error Expected singleton: purchase.order.line (57, 58, 59, 60, 61, 62, 63, 64)","I'm using odoo version 9 and I've created a module to customize the purchase order reports. Among the fields that I want displayed in the reports is the supplier reference for the article, but when I add the code that displays this field +it displays an error when I want to start printing the report +QWebException: ""Expected singleton: purchase.order.line(57, 58, 59, 60, 61, 62, 63, 64)"" while evaluating +""', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])"" +PS: I didn't change anything in the purchase module. +I don't know how to fix this problem; any help please?","It is because your purchase order has several order lines and the expression expects the order to have only one order line. +o.order_line.product_id.product_tmpl_id.seller_ids +will work only if there is one order line; otherwise you have to loop through each order line. Here o.order_line will have multiple order lines and you can get product_id from each of them. If you try o.order_line[0].product_id.product_tmpl_id.seller_ids it will work but will get only the first order line's details. In order to get all the order line details you need to loop through them.",1.2,True,1,5655 +2018-08-10 09:07:10.620,how to convert tensorflow .meta .data .index to .ckpt file?,"As we know, when using tensorflow to save a checkpoint, we have 3 files, e.g.: +model.ckpt.data-00000-of-00001 +model.ckpt.index +model.ckpt.meta +I checked the Faster R-CNN repo and found that they have an evaluation.py script which helps evaluate the pre-trained model, but the script only accepts a .ckpt file (as they provided some pre-trained models above). +I have run some fine-tuning from their pre-trained model, +and now I wonder if there's a way to convert all the .data-00000-of-00001, .index and .meta into one single .ckpt file to run the evaluate.py script on the checkpoint? 
(I also notice that the pre-trained models they provided in the repo have only 1 .ckpt file; how can they do that when the save-checkpoint function generates 3 files?)","These +{ +model.ckpt.data-00000-of-00001 +model.ckpt.index +model.ckpt.meta +} +are the more recent checkpoint format, +while +{model.ckpt} +is a previous checkpoint format. +Converting between them would be the same concept as converting a Nintendo Switch to an NES ... or a 3-piece CD bundle to a single ROM cartridge...",0.0,False,1,5656 +2018-08-10 17:54:31.013,How do I write a script that configures an application's settings for me?,"I need help on how to write a script that configures an application's (VLC) settings to my needs without having to do it manually myself. The reason for this is that I will eventually need to start this application on boot with the correct settings already configured. +Steps I need done in the script. +1) I need to open the application. +2) Open the “Open Network Stream…” tab (can be done with Ctrl+N). +3) Type a string of characters “String of characters” +4) Push “Enter” twice on the keyboard. +I’ve checked various websites across the internet and could not find any information regarding this. I am sure it’s possible, but I am new to writing scripts and not too experienced. Can steps like the ones above be performed from a script? +Note: Using a Linux based OS (Raspbian). +Thank you.","Make whichever changes you want manually once on an arbitrary system, then make a copy of the application's configuration files (in this case ~/.config/vlc). +When you want to replicate the settings on a different machine, simply copy the settings to the same location.",1.2,True,1,5657 +2018-08-10 22:27:20.097,Python/Tkinter - Making The Background of a Textbox an Image?,"Since Text(Tk(), image=""somepicture.png"") is not an option on text boxes, I was wondering how I could make bg= a .png image. Or any other method of allowing a text box to stay a text box, with an image in the background so it can blend into its surroundings.","You cannot use an image as a background in a text widget. +The best you can do is to create a canvas, place an image on the canvas, and then create a text item on top of that. Text items are editable, but you would have to write a lot of bindings, and you wouldn't have nearly as many features as the text widget. In short, it would be a lot of work.",1.2,True,1,5658 +2018-08-11 06:44:26.587,how to uninstall pyenv(installed by homebrew) on Mac,"I used to install pyenv via homebrew to manage versions of python, but now I want to use anaconda. But I don't know how to uninstall pyenv. Please tell me.","Try removing it using the following command: +brew remove pyenv",0.3869120172231254,False,2,5659 +2018-08-11 06:44:26.587,how to uninstall pyenv(installed by homebrew) on Mac,"I used to install pyenv via homebrew to manage versions of python, but now I want to use anaconda. But I don't know how to uninstall pyenv. Please tell me.","None of these worked for me (installed via brew) under macOS Catalina; +they gave a warning about a missing file under .pyenv. +After I removed the bash_profile lines and also ran rm -rf ~/.pyenv, +I just installed the macOS version of python from python.org and it seems OK. +It got my IDLE working and ...",0.3869120172231254,False,2,5659 +2018-08-11 08:48:32.293,How to install pandas for sublimetext?,"I cannot find a way to install pandas for sublimetext. Do you happen to know how? 
There is something called the Panda theme in Package Control, but that was not the one I needed; I need pandas for python for sublimetext.","You can install this awesome theme through the Package Control. + +Press cmd/ctrl + shift + p to open the command palette. +Type “install package” and press enter. Then search for “Panda Syntax Sublime” + +Manual installation + +Download the latest release, extract and rename the directory to “Panda Syntax”. +Move the directory inside your sublime Packages directory. (Preferences > Browse packages…) + +Activate the theme +Open your preferences (Preferences > Settings - User) and add these lines: +""color_scheme"": ""Packages/Panda Syntax Sublime/Panda/panda-syntax.tmTheme"" +NOTE: Restart Sublime Text after activating the theme.",-0.2012947653214861,False,2,5660 +2018-08-11 08:48:32.293,How to install pandas for sublimetext?,"I cannot find a way to install pandas for sublimetext. Do you happen to know how? +There is something called the Panda theme in Package Control, but that was not the one I needed; I need pandas for python for sublimetext.","For me, ""pip install pandas"" was not working, so I used pip3 install pandas, which worked nicely. +I would advise using either pip install pandas or pip3 install pandas for sublime text.",0.0,False,2,5660 +2018-08-11 14:25:40.620,Can I get a list of all urls on my site from the Google Analytics API?,"I have a site www.domain.com and wanted to get all of the urls from my entire website and how many times they have been clicked on, from the Google Analytics API. +I am especially interested in some of my external links (the ones that don't have www.mydomain.com). I will then match this against all of the links on my site (I somehow need to get these from somewhere, so I may scrape my own site). +I am using Python and wanted to do this programmatically. Does anyone know how to do this?","I have a site www.domain.com and wanted to get all of the urls from my + entire website and how many times they have been clicked on + +I guess you need the parameter Page and the metric Pageviews. + +I am especially interested in some of my external links + +You can get a list of external links if you track them as events. +Try to use some crawler, for example Screaming Frog. It allows you to get internal and external links. Free use is limited to 500 pages.",1.2,True,1,5661 +2018-08-12 10:05:41.443,Data extraction from wrf output file,"I have a wrf output netcdf file. The file has the variables temp and prec. The dimension keys are time, south-north and west-east. So how do I select different lat/long values in a region? The problem is that south-north and west-east are not variables. I have to find the index values of four lat/long values.","1) Change your Registry files (I think it is Registry.EM_COMMON) so that you print latitude and longitude in your wrfout_d01_time.nc files. +2) Go to your WRFV3 directory. +3) Clean, configure and recompile. +4) Run your model again the way you are used to.",0.0,False,1,5662 +2018-08-12 19:39:13.970,Cosmic ray removal in spectra,"Python developers, +I am working on spectroscopy at a university. My experimental 1-D data sometimes shows ""cosmic rays"": 3-pixel ultra-high-intensity spikes, which are not what I want to analyze. So I want to remove this kind of weird peak. +Does anybody know how to fix this issue in Python 3? 
Thanks in advance!!","The answer depends on what your data looks like: If you have access to the two-dimensional CCD readouts that the one-dimensional spectra were created from, then you can use the lacosmic module to get rid of the cosmic rays there. If you have only one-dimensional spectra, but multiple spectra from the same source, then a quick ad-hoc fix is to make a rough normalisation of the spectra and remove those pixels that are several times brighter than the corresponding pixels in the other spectra. If you have only one one-dimensional spectrum from each source, then a less reliable option is to remove all pixels that are much brighter than their neighbours. (Depending on the shape of your cosmics, you may even want to remove the nearest 5 pixels or something, to catch the wings of the cosmic ray peak as well).",0.0,False,1,5663 +2018-08-13 21:59:31.640,PyCharm running Python file always opens a new console,"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. +I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. +I would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?","One console is one instance of Python being run on your system. If you want to run different variations of code within the same Python kernel, you can highlight the code you want to run and then choose the run option (Alt+Shift+F10 default).",0.0,False,3,5664 +2018-08-13 21:59:31.640,PyCharm running Python file always opens a new console,"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. +I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. +I would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?","You have an option to Rerun the program. +Simply open and navigate to the currently running app with: + +Alt+4 (Windows) +⌘+4 (Mac) + +And then rerun it with: + +Ctrl+R (Windows) +⌘+R (Mac) + +Another option: +Show the actions popup: + +Ctrl+Shift+A (Windows) +⇧+⌘+A (Mac) + +And type Rerun ...; the IDE will then hint you with the desired action, and you can call it.",0.0,False,3,5664 +2018-08-13 21:59:31.640,PyCharm running Python file always opens a new console,"I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. +I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. +I would prefer to just have one single Python console and run an entire file within that single console. 
Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?","To allow only one instance to run, go to ""Run"" in the top bar, then ""Edit Configurations..."". Finally, check ""Single instance only"" on the right side. This will run only one instance and restart it every time you run.",0.0679224682270276,False,3,5664 +2018-08-14 03:28:58.627,What is Killed:9 and how to fix in macOS Terminal?,"I have a simple Python code for a machine learning project. I have a relatively big database of spontaneous speech. I started to train my speech model. Since it's a huge database I let it work overnight. In the morning I woke up and saw a mysterious +Killed: 9 +line in my Terminal. Nothing else. There is no other error message or anything to work with. The code ran well for about 6 hours, which is 75% of the whole process, so I really don't understand what went wrong. +What is Killed:9 and how do I fix it? It's very frustrating to lose hours of computing time... +I'm on the macOS Mojave beta, if it matters. Thank you in advance!","Try to change the node version. +In my case, that helped.",-0.2012947653214861,False,1,5665 +2018-08-15 17:19:50.610,Identifying parameters in HTTP request,"I am fairly proficient in Python and have started exploring the requests library to formulate simple HTTP requests. I have also taken a look at Sessions objects that allow me to login to a website and -using the session key- continue to interact with the website through my account. +Here comes my problem: I am trying to build a simple API in Python to perform certain actions that I would be able to do via the website. However, I do not know what certain HTTP requests need to look like in order to implement them via the requests library. +In general, when I know how to perform a task via the website, how can I identify: + +the type of HTTP request (GET or POST will suffice in my case) +the URL, i.e. where the resource is located on the server +the body parameters that I need to specify for the request to be successful","This has nothing to do with python, but you can use a network proxy to examine your requests. + +Download a network proxy like Burpsuite +Set up your browser to route all traffic through Burpsuite (default is localhost:8080) +Deactivate packet interception (in the Proxy tab) +Browse to your target website normally +Examine the request history in Burpsuite. You will find all the information you need",1.2,True,1,5666 +2018-08-16 03:16:46.443,Why there is binary type after writing to hive table,"I read the data from an Oracle database into a pandas dataframe; there are then some columns with type 'object'. When I write the dataframe to a Hive table, these 'object' types are converted to 'binary' type. Does anyone know how to solve the problem?","When you read data from Oracle into a dataframe, it creates columns with object datatypes. +You can ask the pandas dataframe to try to infer better datatypes (before saving to Hive) if it can: +dataframe.infer_objects()",0.0,False,1,5667 +2018-08-16 04:22:51.340,What is the use of Jupyter Notebook cluster,"Can you tell me what the use of a Jupyter cluster is? I created a Jupyter cluster and established its connection, but I'm still confused about how to use this cluster effectively. +Thank you","With a Jupyter Notebook cluster, you can run a notebook on the local machine and connect to the notebook on the cluster by setting the appropriate port number. Example code: + +Go to the server using ssh username@ip_address. +Set up the port number for running the notebook. 
On the remote terminal run jupyter notebook --no-browser --port=7800 +On your local terminal run ssh -N -f -L localhost:8001:localhost:7800 username@ip_address of the server. +Open a web browser on the local machine and go to http://localhost:8001/",1.2,True,1,5668 +2018-08-16 12:03:34.353,How to decompose affine matrix?,"I have a series of points in two 3D systems. With them, I use np.linalg.lstsq to calculate the affine transformation matrix (4x4) between both. However, due to my project, I have to ""disable"" the shear in the transform. Is there a way to decompose the matrix into the base transformations? I have found out how to do so for translation and scaling, but I don't know how to separate rotation and shear. +If not, is there a way to calculate a transformation matrix from the points that doesn't include shear? +I can only use numpy or tensorflow to solve this problem, btw.","I'm not sure I understand what you're asking. +Anyway, if you have two sets of 3D points P and Q, you can use the Kabsch algorithm to find a rotation matrix R and a translation vector T such that the sum of square distances between (RP+T) and Q is minimized. +You can of course combine R and T into a 4x4 matrix (of rotation and translation only, without shear or scale).",1.2,True,1,5669 +2018-08-16 13:00:32.667,Jupyter notebook kernel does not want to interrupt,"I was running a cell in a Jupyter Notebook for a while and decided to interrupt. However, it still continues to run and I don't know how to proceed to have the thing interrupted... +Thanks for the help","Sometimes this happens when you are on a GPU-accelerated machine, where the kernel is waiting for some GPU operation to be finished. I noticed this even on AWS instances. +The best thing you can do is just wait. In most cases it will recover and finish at some point. If it does not, at least it will tell you the kernel died after some minutes and you don't have to copy-paste your notebook to back up your work. In rare cases, you have to kill your python process manually.",1.2,True,1,5670 +2018-08-17 02:02:19.417,find token between two delimiters - discord emotes,"I am trying to recognise discord emotes. +They are always between two : and don't contain spaces, e.g. +:smile: +I know how to split strings at delimiters, but how do I only split tokens that are within exactly two : and contain no space? +Thanks in advance!","Thanks to @G_M I found the following solution: + + regex = re.compile(r':[A-Za-z0-9]+:') + result = regex.findall(message.content) + +This will give me a list with all the emotes within a message, independent of where they are within the message.",1.2,True,1,5671 +2018-08-17 14:49:24.567,Post file from one server to another,"I have an Apache server A set up that currently hosts a webpage of a bar chart (using Chart.js). This data is currently pulled from a local SQLite database every couple of seconds, and the web chart is updated. +I now want to use a separate server B on a Raspberry Pi to send data to server A to be used for the chart, rather than using the database on server A. +So one server sends a file to another server, which somehow realises this and accepts it and processes it. +The data can either be sent and placed into the current SQLite database, or bypass the database and have the chart update directly from the Pi's sent information. +I have come across HTTP POST requests, but I'm not sure if that's what I need or quite how to implement it. 
+I have managed to get the Pi to simply host a json file (viewable from the external ip address) and pull the data from that with a simple requests.get('ip_address/json_file') in Python, but this doesn't seem like the most robust or secure solution. +Any help with what I should be using is much appreciated, thanks!","Maybe I didn't quite understand your request, but this is the solution I imagined: + +You create a Frontend with WebSocket support that connects to Server A +Server B (the one running on the Raspberry Pi) sends a POST request +with the JSON to Server A +Server A accepts the JSON and sends it to all clients connected with the WebSocket protocol + +Server B ----> Server A <----> Frontend +This way you do not expose your Raspberry directly and every request made by the Frontend goes only to Server A. +To provide a better user experience you could also create a GET endpoint on Server A to retrieve the latest received JSON, so that when the user loads the Frontend for the first time it calls that endpoint, and even if the Raspberry has yet to update the data at least the user can have an insight into the latest available data.",0.0,False,1,5672 +2018-08-17 15:42:47.703,How to display a pandas Series in Python?,"I have a variable target_test (for machine learning) and I'd like to display just one element of target_test. +type(target_test) prints the following statement on the terminal: +class 'pandas.core.series.Series' +If I do print(target_test) then the entire 2 vectors are displayed. +But I'd like to print just the second element of the first column, for example. +So do you have an idea how I could do that? +I converted target_test to a frame and to xarray but it didn't change the error I get. +When I write something like print(targets_test[0][0]) +I get the following output: +TypeError: 'instancemethod' object has no attribute '__getitem__'","For the first column, you can use targets_test.keys()[i], for the second one targets_test.values[i], where i is the row starting from 0.",1.2,True,1,5673 +2018-08-18 22:38:40.803,django-storages boto3 accessing file url of a private file,"I'm trying to get the generated URL of a file in a test model I've created, +and I'm trying to get the correct url of the file by: modelobject.file.url, which does give me the correct url if the file is public; however if the file is private it does not automatically generate a signed url for me. How is this normally done with django-storages? +Is the API supposed to automatically generate a signed url for private files? I am getting the expected Access Denied page for non-signed urls currently, and need to get the signed 'volatile' link to the file. +Thanks in advance","I've figured out what I needed to do: +in the Private Storage class, I had forgotten to put custom_domain = False. I originally left this line off because I did not think I needed it; however, you absolutely do in order to generate signed urls automatically.",0.9999877116507956,False,1,5674 +2018-08-19 22:55:22.463,Django - DRF (django-rest-framework-social-oauth2) and React creating a user,"I'm using the DRF and ReactJS and I am trying to login with Patreon using +django-rest-framework-social-oauth2. +In React, I send a request to the back-end auth/login/patreon/ and I reach the Patreon OAuth screen where I say I want to login with Patreon. Patreon then returns with a request to the back-end at accounts/profile. At this point a python-social-oauth user has also been created. +At this point I'm confused. 
How do I make a request to Patreon to login, create a user in the back-end, and return the session information to the React front-end so that I can include the session information in all following requests from the front-end? I don't want the returned request to be at the backend/accounts/profile, do I? +Update +I now realize I can set the redirect url with LOGIN_REDIRECT_URL, but still, how do I now retrieve the session id, pass it to the front-end, and include it with all requests?","Once you receive the user profile email, unique id, and other details from Patreon, create a user at the database level. +Now, after creating a user at the database level, you have to log in the user using the Django login function or any other login mechanism before redirecting the user to the frontend with a session. The redirect URL for the home/landing page is provided by the frontend side, where they want to land the user after being successfully logged in, with the session-id set in cookies. From then on, the frontend side can use the session id in cookies for other requests. +Here is the flow: + +React JS -> auth/login/patreon/ -> redirected to Patreon -> Redirected back to the Backend with user information -> Create User (DB level) -> Login user -> Redirect back to Frontend (React JS on a specific URL provided by Front end)",0.0,False,1,5675 +2018-08-20 15:34:25.450,"In Visual Studio Code, how do I load my python code to a read-print-eval loop?","I am teaching a class that uses VScode. +I am used to teaching using IDLE, and it is very nice for the students to be able to call their defined functions and run snippets of code in a python terminal, for debugging purposes. +In VScode, I have been unable to do the same in a satisfactory way. +Option 1: I can select all code, right click and run selection/line on terminal. This works for small snippets, but I cannot do it for the whole file (even after selecting the whole file with ctrl-A). On linux, this works, but on windows, it does not, unfortunately (and my students use windows) +Option 2: I can use the debug console. This requires adding a breakpoint in one of the last lines of the file, and does not offer tab completion. It works, but is less convenient than IDLE. +Option 3: I can also add the commands to run to the bottom of the file (which is the least preferred alternative, given that it forgoes the interactivity of the read-print-eval loop). +Is there any better solution? Installing a VScode extension would not be a problem.","Visual Code is just a text editor, like your traditional notepad. To run and debug any kind of program you need to install the particular extension for the programming language. +In your case you are using python, so you need to install the extension for it. The best one is ""Python"", which is developed by Microsoft itself. Go to your extensions manager and install this extension. Right click and click ""run python file in terminal"" and you are all set. +This will run exactly as it runs from IDLE (which is the default IDE provided by python itself); you can enter the arguments from the console itself. In my opinion this is the best way to run and debug python programs in VScode. +Another way is that VScode shows which python version is installed on your computer at the bottom left side; click on it and the programs will use this interpreter. +Out of all the ways listed here and many others, the best method is to run the program in the terminal, which is recommended by python itself and many other programmers. +This method is very simple. 
What you have to do is open up your command prompt, type the path where python.exe is installed, then type the path of your program as the argument, and press enter. You are done! +ex : C:\Python27\python.exe C:\Users\Username\Desktop\my_python_script.py +You can also pass your program's arguments in the command prompt itself. +If you do not want to type all this, then just use the solution mentioned above. +Hope that your query is solved. +Regards",0.9950547536867304,False,1,5676 +2018-08-20 22:16:15.047,Maximum files size for Pyspark RDD,"I’m practicing Pyspark (standalone) in the Pyspark shell at work and it’s pretty new to me. Is there a rule of thumb regarding max file size and the RAM (or any other spec) on my machine? What about when using a cluster? +The file I’m practicing with is about 1200 lines. But I’m curious to know how large a file can be read into an RDD in regards to machine specifications or cluster specifications.","There is no hard limit on the data size you can process; however, when your RDD (Resilient Distributed Dataset) size exceeds the size of your RAM, the data will be moved to disk. Even after the data is moved to disk, spark will be equally capable of processing it. For example, if your data is 12GB and the available memory is 8GB, spark will spill the leftover data to disk and take care of all transformations/actions seamlessly. Having said that, you can process data roughly up to the size of your disk. +There is of course a size limitation on a single RDD block, which is 2GB. In other words, the maximum size of a block will not exceed 2GB.",1.2,True,1,5677 +2018-08-22 12:17:01.487,Abaqus: parametric geometry/assembly in Inputfile or Python script?,"I want to do something like a parametric study in Abaqus, where the parameter I am changing is a part of the assembly/geometry. +Imagine the following: +A cube is hanging on 8 ropes. Each two of the 8 ropes line up in one corner of a room. The other ends of the ropes merge with the room diagonal of the cube. It's something like a cable-driven parallel robot/rope robot. +Now, I want to calculate the forces in the ropes in different positions of the cube, while only 7 of the 8 ropes are actually used. That means I have 8 simulations for each position of my cube. +I wrote a Matlab script to generate the nodes and wires of the cube in different positions and angles of rotation so I can copy them into an input file for Abaqus. +Since I'm new to Abaqus scripting etc., I wonder what the best way to make this work is. +Would you guys generate 8 input files for one position of the cube and calculate +them manually, or is there a way to let Abaqus somehow iterate over different assemblies? +I guess I should write a python script, but I don't know how to make the ropes the parameter that is changing. +Any help is appreciated! +Thanks, Tobi","In case someone is interested, I was able to do it the following way: +I created a model in Abaqus up to the point where I could have started the job. Then I took the .jnl file (which is created automatically by Abaqus) and saved it as a .py file. Then I modified this script by defining every single point as a variable and every wire for the parts as tuples consisting of the variables. Then I made for loops and, for each of the 9 cases, unique wire definitions, which I called during the loop. During the loop the constraints were also changed and the jobs were started. 
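The job-submission part of such a loop can look roughly like this (a simplified sketch; build_case_geometry is a hypothetical placeholder for the per-case point, wire and constraint definitions, not a real Abaqus function):
from abaqus import mdb
for case in range(9):
    build_case_geometry(case)  # redefine the points, wires and constraints for this case
    job = mdb.Job(name='Job-%d' % case, model='Model-1')
    job.submit()
    job.waitForCompletion()  # let the field output be written before starting the next case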
I also made a field output request for the end nodes of the ropes (representing motors) for their coordinates and reaction forces (the same nodes have the pinned BC). +Then I saved the field output in a simple txt file which I was able to analyse via Matlab. +Then I wrote a Matlab script which created the points, attached them to the Python script, copied it to a unique directory and even started the job. +This way, I was able to do geometric parametric studies in Abaqus using Matlab and Python. +Code will be uploaded soon",1.2,True,1,5678 +2018-08-22 12:57:46.077,Pandas DataFrame Display in Jupyter Notebook,"I want to make my display tables bigger so users can see the tables better when they are used in conjunction with Jupyter RISE (slide shows). +How do I do that? +I don't need to show more columns, but rather I want the table to fill up the whole width of the Jupyter RISE slide. +Any idea on how to do that? +Thanks","If df is a pandas.DataFrame object, +you can do: +df.style.set_properties(**{'max-width': '200px', 'font-size': '15pt'})",0.0,False,1,5679 +2018-08-22 13:38:01.097,Will making a Django website public on github let others get the data in its database ? If so how to prevent it?,"I have a locally made Django website and I hosted it on Heroku; at the same time I push changes to another github repo. I am using the built-in database to store data. Will other users be able to get the data that has been entered in the database from my repo (like user details)? If so, how do I prevent that from happening? Solutions like adding files to .gitignore will also prevent pushing to Heroku.","The code itself wouldn't be enough to get access to the database. For that you need the db name and password, which shouldn't be in your git repo at all. +On Heroku you use environment variables - which are set automatically by the postgres add-on - along with the dj_database_url library, which turns that into the relevant values in the Django DATABASES setting.",0.0,False,1,5680 +2018-08-22 15:24:11.663,Uploading an image to S3 and manipulating with Python in Lambda - best practice,"I'm building my first web application and I've got a question around process and best practice; I'm hoping the expertise on this website might give me a bit of direction. +Essentially, all the MVP is doing is going to be writing an overlay onto an image and presenting this back to the user, as follows: + +User uploads picture via web form (into AWS S3) - to do +Python script executes (in lambda) and creates image overlay, saves new image back into S3 - complete +User is presented back with new image to download - to do + +I've been running this locally as sort of a proof of concept and was planning on linking up with S3 today, but then suddenly realised: what happens when there are two concurrent users and two images being uploaded with different filenames with two separate lambda functions working? +The only solution I could think of is having the image renamed upon upload with a record inserted into an RDS, then the lambda function to run upon record insertion against the new image, which would resolve half of it, but then how would I get the correct image relayed back to the user? +I'll be clear, I have next to no experience in web development, I want the front end to be as dumb as possible and run everything in Python (I'm a data scientist, I can write Python for data analysis but have no experience as a software dev!)","You don't really need an RDS, just invoke your lambda synchronously from the browser. 
+So + +Upload the file to S3, using a randomized file name +Invoke your lambda synchronously, passing it the file name +Have your lambda read the file, convert it, and respond with either the file itself (binary responses aren't trivial), or a path to the converted file in S3.",0.0,False,1,5681 +2018-08-23 12:03:16.460,How to install twilio via pip,"how to install twilio via pip? +I tried to install the twilio python module +but I can't install it; +I get the following error: +no Module named twilio +When trying to install twilio +pip install twilio +I get the following error. +pyopenssl 18.0.0 has requirement six>=1.5.2, but you'll have six 1.4.1 which is incompatible. +Cannot uninstall 'pyOpenSSL'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. +I got the answer and installed with +pip install --ignore-installed twilio +but I get the following error + +Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pytz-2018.5.dist-info' +Consider using the `--user` option or check the permissions. + +I have anaconda installed; +is this a problem?","Step 1: download python-2.7.15.msi. +Step 2: install it, and if your system does not have Python added to your PATH while installing, choose +""add python exe to path"". +Step 3: go to C:\Python27\Scripts on your system. +Step 4: in the command prompt run C:\Python27\Scripts>pip install twilio. +Step 5: after installation is done, at the >python command line run + import twilio +print(twilio.__version__) +Step 6: if you get the version ... you are done",-0.2012947653214861,False,1,5682 +2018-08-23 14:53:44.523,How to retrieve objects from the softlayer saved quote using Python API,"I'm trying to retrieve the objects/items (server name, host name, domain name, location, etc...) that are stored under the saved quote for a particular Softlayer account. Can someone explain how to retrieve the objects within a quote? I could find a REST API (Python) to retrieve quote details (quote ID, status, etc..) but couldn't find a way to fetch the objects within a quote. +Thanks! +Best regards, +Khelan Patel",Thanks Albert, getRecalculatedOrderContainer is the thing I was looking for.,0.0,False,1,5683 +2018-08-23 23:45:21.277,Can I debug Flask applications in IntelliJ?,"I know how to debug a flask application in PyCharm. The question is whether this is also possible in IntelliJ. +I have my flask application debugging in PyCharm, but one thing I could do in IntelliJ was evaluate expressions inline by pressing alt + left mouse click. This isn't available in PyCharm, so I wanted to run my Flask application in IntelliJ, but there isn't a Flask template. +Is it possible to add a Flask template to the Run/Debug configuration? I tried looking for a plugin but couldn't find that either.","Yes, you can. Just set up the proper parameters for the Run script in the IDE. After that you can debug it as a usual py script. In PyCharm you can evaluate any line in debug mode too.",0.0,False,1,5684 +2018-08-24 14:36:02.090,"how to add the overall ""precision"" and ""recall"" metrics to ""tensorboard"" log file, after training is finished?","After the training is finished and I have done the prediction on my network, I want to calculate the ""precision"" and ""recall"" of my model, and then send them to the log file of ""tensorboard"" to show the plot. +While training, I pass the ""tensorboard"" function as a callback to keras, but after training is finished, I don't know how to add more data to tensorboard to be plotted. 
I use keras for coding and tensorflow as its backend.",I believe that you've already done that work: it's the same process as the validation (prediction and check) step you do after training. You simply tally the results of the four categories (true/false pos/neg) and plug those counts into the equations (ratios) for precision and recall.,0.0,False,1,5685 +2018-08-27 21:20:09.317,Convolutional neural network architectures with an arbitrary number of input channels (more than RGB),"I am very new to image recognition with CNNs and am currently using several standard (pre-trained) architectures available within Keras (VGG and ResNet) for image classification tasks. I am wondering how one can generalise the number of input channels to more than 3 (instead of standard RGB). For example, I have an image which was taken through 5 different (optic) filters and I am thinking about passing these 5 images to the network. +So, conceptually, I need to pass as an input (Height, Width, Depth) = (28, 28, 5), where 28x28 is the image size and 5 is the number of channels. +Any easy way to do it with ResNet or VGG please?","If you retrain the models, that's not a problem. Only if you want to use a pre-trained model do you have to keep the input the same.",1.2,True,1,5686 +2018-08-28 02:21:26.060,How to use Docker AND Conda in PyCharm,"I want to run python in PyCharm by using a Docker image, but also with a Conda environment that is set up in the Docker image. I've been able to set up Docker and (locally) set up Conda in PyCharm independently, but I'm stumped as to how to make all three work together. +The problem comes when I try to create a new project interpreter for the Conda environment inside the Docker image. When I try to enter the python interpreter path, it throws an error saying that the directory/path doesn't exist. +In short, the question is the same as the title: how can I set up PyCharm to run on a Conda environment inside a Docker image?","I'm not sure if this is the most eloquent solution, but I do have a solution to this now! + +Start up a container from your base image and attach to it +Install the Conda env yaml file inside the docker container +From outside the Docker container stream (i.e. a new terminal window), commit the existing container (and its changes) to a new image: docker commit SOURCE_CONTAINER NEW_IMAGE + +Note: see docker commit --help for more options here + +Run the new image and start a container for it +From PyCharm, in preferences, go to Project > Project Interpreter +Add a new Docker project interpreter, choosing your new image as the image name, and set the path to wherever you installed your Conda environment on the Docker image (ex: /usr/local/conda3/envs/my_env/bin/python) + +And just like that, you're good to go!",1.2,True,1,5687 +2018-08-28 13:52:18.727,how to detect upside down face?,"I would like to detect upright and upside-down faces, however faces weren't recognized in upside-down images. +I used the dlib library in Python with shape_predictor_68_face_landmarks.dat. +Is there a library that can recognize upright and upside-down faces?","You could use the same library to detect upside-down faces. If the library is unable to detect the face initially, rotate the image 180° and check again. If it is recognized in this condition, you know it was an upside-down face.",1.2,True,1,5688 +2018-08-29 10:28:25.480,How to have C files in python code,"I'm using the Geany IDE and I've written Python code that makes a GUI. I'm new to python and I'm better with C. 
I've done research on the web and it's too complicated because there's so much jargon involved. Behind each button I want C to be the backbone of it (so C executes when clicked). So, how can I make a C file and link it to my code?","I too had a question like this and I found a website that described how to do it step by step, but I can’t seem to find it. If you think about it, all these ‘import’ files are just code that's been made separately and that's why you import them. So, in order to import your ‘C file’ do the following. + +Create the file you want to put in C (e.g. bloop.c) +Then open the terminal and, assuming you saved your file to the desktop, type ‘cd Desktop’. If you put it somewhere other than the desktop, then type cd (insert the directory). +Now, type in gcc -shared -Wl,-soname,adder -o adder.so -fPIC bloop.c into the terminal. +After that, go into your python code and right at the very top of your code, type ‘import ctypes’ or ‘from ctypes import *’ to import the ctypes library. +Below that type adder = CDLL(‘./adder.so’). +If you want to add an instance of the class you need to type (letter or word)=adder.main(). For example, ctest = adder.main() +Now let's say you have a method you want to use from your C program: you can type your character or word (dot) the method you created in C. For example ‘ctest.beans()’ (assuming you have a method in your code called beans).",1.2,True,1,5689 +2018-08-29 13:57:38.713,Cannot update svg file(s) for saleor framework + python + django,"I would like to know how I could manage to change the static files used by the saleor framework. I've tried to change the logo.svg but failed to do so. +I'm still learning python programming while using the saleor framework for e-commerce. +Thank you.",Here is how it should be done. You must put your logo in the saleor/static/images folder, then change it in the base.html file in the footer and navbar sections.,1.2,True,1,5690 +2018-08-29 20:22:17.757,"Determining ""SystemFaceButton"" RGB Value At RunTime","I am using tkinter and the PIL to make a basic photo viewer (mostly for learning purposes). I have the bg color of all of my widgets set to the default, which is ""systemfacebutton"", whatever that means. +I am using the PIL.Image module to view and rotate my images. When an image is rotated you have to choose a fill color for the area behind the image. I want this fill color to be the same as the default system color, but I have no idea how to get the rgb value or a supported color name for this. It has to be calculated by python at run time so that it is consistent on anyone's OS. +Does anyone know how you can do this?","You can use w.winfo_rgb(""systembuttonface"") to turn any color name into a tuple of R, G, B. (w is any Tkinter widget, the root window perhaps. Note that you had the color name scrambled.) The values returned are 16-bit for some unknown reason; you'll likely need to shift them right by 8 bits to get the 0-255 values commonly used for specifying colors.",1.2,True,1,5691 +2018-08-30 01:29:02.027,"In tf.layers.conv2d, with use_bias=True, are the biases tied or untied?","One more question: +If they are tied biases, how can I implement untied biases? +I am using tensorflow 1.10.0 in python.","Tied biases are used in tf.layers.conv2d. 
+If you want untied biases, just turn off use_bias and create the bias variable manually with tf.Variable or tf.get_variable, with the same shape as the following feature map; finally sum them up.",1.2,True,1,5692 +2018-08-30 19:43:08.963,Reading all the image files in a folder in Django,"I am trying to create a picture slideshow which will show all the png and jpg files of a folder using django. +The problem is how do I open windows explorer through django and prompt the user to choose a folder name to load images from. Once this is done, how do I read all image files from this folder? Can I store all image files from this folder inside a list and pass this list to the template views through context?","This link “https://github.com/csev/dj4e-samples/tree/master/pics” +shows how to store data in the database (sqlite is the database used here) using Django forms. But you cannot upload an entire folder at once, so you have to create a one-to-many model between display_id (this is just a field name in models; you can name it anything you want) and pics. Now you can individually upload all pics in the folder to the same display_id and access all of them using this display_id. Also make sure to pass content_type for jpg and png separately while retrieving the pics.",0.0,False,1,5693 +2018-08-31 00:05:09.460,How can I get SMS verification code in my Python program?,"I'm writing a Python script to do some web automation stuff. In order to log in to the website, I have to give it my phone number and the website will send out an SMS verification code. Is there a way to get this code so that I can use it in my Python program? Right now what I can think of is that I can write an Android APP that will be triggered once there is a new SMS; it will get the code and invoke an API so that the code will be stored somewhere. Then I can grab the stored code from within my Python program. This is doable but a little bit hard for me as I don't know how to develop a mobile APP. I want to know: are there any other methods by which I can get this code? Thanks. +BTW, I have to use my own phone number and can't use another phone to receive the verification code. So it may not be possible to use some services.",Answering my own question: I use IFTTT to forward the message to Slack and use the Slack API to access the message.,0.0,False,1,5694 +2018-08-31 16:13:31.870,How to list available policies for an assumed AWS IAM role,"I am using python and boto to assume an AWS IAM role. I want to see what policies are attached to the role so I can loop through them and determine what actions are available for the role. I want to do this so I can know if some actions are available, instead of finding out by calling them and checking if I get an error. However I cannot find a way to list the policies for the role after assuming it, as the role is not authorised to perform IAM actions. +Is there anyone who knows how this is done, or is this perhaps something I should not be doing?","To obtain policies, your AWS credentials require permissions to retrieve the policies. +If such permissions are not associated with the assumed role, you could use another set of credentials to retrieve the permissions (but those credentials would need appropriate IAM permissions). +There is no way to ask ""What policies do I have?"" without having the necessary permissions. 
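For illustration, with a second set of credentials that does have IAM read access, the lookup is a short boto3 call (a minimal sketch; the role name is hypothetical):
import boto3
iam = boto3.client('iam')  # must be created with credentials that have IAM read permissions
resp = iam.list_attached_role_policies(RoleName='my-assumed-role')
for policy in resp['AttachedPolicies']:
    print(policy['PolicyName'], policy['PolicyArn'])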
This is an intentional part of AWS security, because seeing policies can reveal some security information (e.g. ""Oh, why am I specifically denied access to the Top-Secret-XYZ S3 bucket?"").",0.3869120172231254,False,1,5695 +2018-08-31 19:23:27.853,"Creating ""zero state"" migration for existing db with sqlalchemy/alembic and ""faking"" zero migration for that existing db","I want to add alembic to an existing project that uses sqlalchemy, with a working production db. I fail to find the standard way to do a ""zero"" migration, i.e. the migration setting up the db as it is now (for new developers setting up their environment). +Currently I've added an import of the declarative base class and all the models using it to env.py, but the first time alembic -c alembic.dev.ini revision --autogenerate runs it creates the existing tables. +And I need to ""fake"" the migration on existing installations - using code. For the django ORM I know how to make this work, but I fail to find the right way to do this with sqlalchemy/alembic.","alembic revision --autogenerate inspects the state of the connected database and the state of the target metadata and then creates a migration that brings the database in line with the metadata. +If you are introducing alembic/sqlalchemy to an existing database, and you want a migration file that, given an empty, fresh database, would reproduce the current state - follow these steps. + +Ensure that your metadata is truly in line with your current database (i.e. ensure that running alembic revision --autogenerate creates a migration with zero operations). + +Create a new temp_db that is empty and point your sqlalchemy.url in alembic.ini to this new temp_db. + +Run alembic revision --autogenerate. This will create your desired bulk migration that brings a fresh db in line with the current one. + +Remove temp_db and re-point sqlalchemy.url to your existing database. + +Run alembic stamp head. This tells sqlalchemy that the current migration represents the state of the database - so next time you run alembic upgrade head it will begin from this migration.",0.9999999999999966,False,1,5696 +2018-09-02 16:24:08.867,Django send progress back to client before request has ended,"I am working on an application in Django where there is a feature which lets the user share a download link to a public file. The server downloads the file and processes the information within. This can be a time-taking task, therefore I want to send periodic feedback to the user before the operation has completed. For instance, I would like to inform the user that the file has downloaded successfully, or that some information was missing from one of the records, etc. +I was thinking that after the client app has sent the upload request, I could get the client app to periodically ask the server about the status. But I don't know how I can track the progress of a different request. How can I implement this?","First, the progress information for the task can be saved in an rdb or redis. +You can return the id of the task when the user submits the request to start the task, and the task can be executed in the background context. +The background task can save the task progress info in the db which you selected. +The app client gets the progress info by the task id which the backend returned, and the backend gets the progress info from the db and pushes it in the response. 
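A minimal sketch of the polling endpoint (Django-flavoured; an in-memory dict stands in for the redis/db store here, and the URL wiring is omitted):
from django.http import JsonResponse

PROGRESS = {}  # task_id -> percent complete; the background task updates this (use redis or a db in production)

def task_progress(request, task_id):
    # The client polls this view with the task id it received when starting the task.
    return JsonResponse({'task_id': task_id, 'percent': PROGRESS.get(task_id, 0)})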
+The interval of the requests can be defined by yourself.",0.0,False,1,5697 +2018-09-03 02:29:05.750,Numpy array size different when saved to disk (compared to nbytes),"Is it possible that a flat numpy 1d array's size (nbytes) is 16568 (~16.5kb), but when saved to disk it has a size of >2 mbs? +I am saving the array using numpy's numpy.save method. The dtype of the array is 'O' (i.e. object). +Also, how do I save that flat array to disk such that I get an approximately similar size to nbytes when saved on disk? Thanks","For others' reference, from the numpy documentation: + +numpy.ndarray.nbytes attribute +ndarray.nbytes Total bytes consumed by the elements of the array. +Notes +Does not include memory consumed by non-element attributes of the +array object. + +So, nbytes just considers the elements of the array.",0.0,False,1,5698 +2018-09-05 10:27:35.310,Regex to match all lowercase character except some words,"I would like to write a RE to match all lowercase characters and words (special characters and symbols should not match), so like [a-z]+, EXCEPT the two words true and false. +I'm going to use it with Python. +I've written (?!true|false\b)\b[a-z]+; it works, but it does not recognise lowercase characters following an uppercase one (e.g. with ""This"" it doesn't match ""his""). I don't know how to include this kind of match as well. +For instance: + +true & G(asymbol) & false should match only asymbol +true & G(asymbol) & anothersymbol should match only [asymbol, anothersymbol] +asymbolUbsymbol | false should match only [asymbol, bsymbol] + +Thanks","I would create two regexes (you want to mix word boundary matching with optionally splitting words apart, which is, AFAIK, not straightforwardly mixable; you would have to re-phrase your regex either without word boundaries or without splitting): + +first regex: [a-z]+ +second regex: \b(?!true|false)[a-z]+",0.0,False,1,5699 +2018-09-06 08:27:52.960,How to use double as the default type for floating numbers in PyTorch,"I want all the floating-point numbers in my PyTorch code to be double type by default; how can I do that?","You should use torch.set_default_dtype for that. +It is true that using torch.set_default_tensor_type will also have a similar effect, but torch.set_default_tensor_type not only sets the default data type, but also sets the default values for the device where the tensor is allocated, and the layout of the tensor.",0.3869120172231254,False,1,5700 +2018-09-06 20:34:52.430,how to change directory in Jupyter Notebook with Special characters?,"When I created a directory under the python env, it got a single quote in its name, like (D:\'Test Directory'). How do I change to this directory in a Jupyter notebook?",I was able to change the directory using an escape sequence like this: os.chdir('C:\\'Test Directory\'),0.0,False,1,5701 +2018-09-08 02:24:30.413,"Graph traversal, maybe another type of mathematics?","Let’s say you have a set/list/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let’s say for reasons that are not important, you run them through a function and receive the following pairs: +(1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded as they are all in pairs, but they each repeat; therefore it would be 2 (which 2 does not matter). 
+At first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks!","If you really intended to find the minimum amount, the answer is 0, because you don't have to use any number at all.
+I guess you meant to write ""maximal amount of numbers"".
+If I understand your problem correctly, it sounds like we can translate it to the following problem:
+Given a set of n numbers (1,..,n), what is the maximal amount of numbers I can use to divide the set into pairs, where each number can appear only once?
+The answer to this question is:
+
+when n = 2k, f(n) = 2k for k>=0
+when n = 2k+1, f(n) = 2k for k>=0
+
+I'll explain, using induction.
+
+if n = 0 then we can use at most 0 numbers to create pairs.
+if n = 2 (the set can be [1,2]) then we can use both numbers to
+create one pair (1,2)
+Assumption: if n=2k, let's assume we can use all 2k numbers to create k pairs, and prove using induction that we can use 2k+2 numbers for n = 2k+2.
+Proof: if n = 2k+2, [1,2,..,k,..,2k,2k+1,2k+2], we can create k pairs using 2k numbers (from our assumption). Without loss of generality, let's assume our pairs are (1,2),(3,4),..,(2k-1,2k). We can see that we still have two numbers [2k+1, 2k+2] that we didn't use, and therefore we can create a pair out of the two of them, which means that we used 2k+2 numbers.
+
+You can prove on your own the case when n is odd.","In case anyone cares in the future, the solution is called a blossom algorithm.",0.0,False,2,5702
+2018-09-08 12:37:37.387,error in missingno module import in Jupyter Notebook,"Getting an error on missingno module import in Jupyter Notebook. It works fine in IDLE. But it shows ""No missingno module exist"" in Jupyter Notebook.
Can anybody tell me how to resolve this?","This command helped me:
+conda install -c conda-forge/label/gcc7 missingno
+ You have to make sure that you run the Anaconda prompt as Administrator.",0.3869120172231254,False,2,5703
+2018-09-08 18:25:29.300,Lazy loading with python and flask,"I’ve built a web-based data dashboard that shows 4 graphs - each containing a large amount of data points.
+When the URL endpoint is visited Flask calls my python script that grabs the data from a sql server and then starts manipulating it and finally outputs the bokeh graphs.
+However, as these graphs get larger/there become more graphs on the screen, the website takes a long time to load - since the entire function has to run before anything is displayed.
+How would I go about lazy loading these? I.e. it loads the first (most important) graph and displays it while running the function for the other graphs, showing them as and when they finish running (showing a sort of loading bar where each of the graphs are or something).
+Would love some advice on how to implement this or similar.
+Thanks!","I had the same problem as you. The problem with any kind of flask render is that all data is processed and passed to the page (i.e. client) simultaneously, often at large time cost. Not only that, but the server web process is quite heavily loaded.
+The solution I was forced to implement, as the comment suggested, was to load the page with blank charts and then upon mounting them access a flask api (via JS ajax) that returns chart json data to the client. This permits lazy loading of charts, as well as allowing the data manipulation to possibly be performed on a worker and not the web server.",0.9950547536867304,False,1,5704
+2018-09-09 08:36:10.063,I can't import tkinter in pycharm community edition,"I've been trying for a few days now to be able to import the library tkinter in pycharm, but I am unable to do so.
+I tried to import it or to install some packages but still nothing; I reinstalled python and pycharm, again nothing. Does anyone know how to fix this?
+I am using pycharm community edition 2018.2.3 and python 3.7.
+EDIT: So, I uninstalled python 3.7 and I installed python 3.6 x64, I tried changing my interpreter to the new path to python and it is still not working...
+EDIT 2: I installed pycharm pro (free trial 30 days) and it actually works, then I tried to open my project in pycharm community and it's not working...
+EDIT 3: I installed python 3.6 x64 and now it's working.
+Thanks for the help.","Thanks to vsDeus for asking this question. I had the same problem running Linux Mint Mate 19.1 and nothing got tkinter and some other modules working in Pycharm CE. In Eclipse with Pydev all worked just fine, but for some reason I would rather work in Pycharm when coding than Eclipse.
+The steps outlined here did not work for me but the steps he took handed me the solution. Basically I had to uninstall Pycharm, remove all its configuration files, then reinstall pip3, tkinter and then reinstall Pycharm CE. Finally I reopened previously saved projects and then set the correct interpreter.
+When I tried to change the python interpreter before, no alternatives appeared. After all these steps the choice became available.
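+(For reference, the reinstall step on Mint/Ubuntu was roughly the following — the exact package names here are my reconstruction, not part of the original steps, so adjust them to your distribution:
+sudo apt-get install --reinstall python3-pip python3-tk
+and then point PyCharm back at /usr/bin/python3.)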
Most importantly, now tkinter, matplotlib and other modules I wanted to use are available in Pycharm.",0.0,False,1,5705
+2018-09-10 11:25:39.227,how to use Tensorflow seq2seq.GreedyEmbeddingHelper first parameter Embedding in case of using normal one hot vector instead of embedding?,"I am trying to decode one character (represented as c-dimensional one hot vectors) at a time with tensorflow seq2seq model implementations. I am not using any embedding in my case.
+Now I am stuck with tf.contrib.seq2seq.GreedyEmbeddingHelper. It requires ""embedding: A callable that takes a vector tensor of ids (argmax ids), or the params argument for embedding_lookup. The returned tensor will be passed to the decoder input.""
+How do I define the callable? What are the inputs (vector tensor of ids (argmax ids)) and outputs of this callable function? Please explain using examples.","embedding = tf.Variable(tf.random_uniform([c_dimensional,
+EMBEDDING_DIM]))
+Here you can create the embedding for your own model, and it will be trained during your training process to give a vector for your own input.
+If you don't want a trained embedding, you can instead create a matrix where every column is a one-hot vector representing a character and pass that as the embedding.
+It will be something like this:
+[[1,0,0],[0,1,0],[0,0,1]]
+here for a vocab size of 3.",0.0,False,1,5706
+2018-09-10 11:43:29.133,"one server, same domain, different apps (example.com/ & example.com/tickets )?","I want advice on how to do the following:
+On the same server, I want to have two apps. One WordPress app and one Python app. At the same time, I want the root of my domain to be a static landing page.
+Url structure I want to achieve:
+
+example.com/ => static landing page
+example.com/tickets => wordpress
+example.com/pythonapp => python app
+
+I have never done something like this before and searching for solutions didn't help.
+Is it even possible?
+Is it better to use subdomains?
+Is it better to use different servers?
+How should I approach this?
+Thanks in advance!","It depends on the webserver you want to use. Let's go with apache as it is one of the most used web servers on the internet.
+
+You install your wordpress installation into the /tickets subdirectory and install word-press as you normally would. This should install wordpress into the subdirectory.
+Configure your Python-WSGI App with this configuration:
+
+WSGIScriptAlias /pythonapp /var/www/path/to/my/wsgi.py",0.2012947653214861,False,1,5707
+2018-09-12 02:15:34.913,How to save plots and model results to pdf in python?,I know how to save model results to .txt files and save plots to .png. I also found some posts which show how to save multiple plots to a single pdf file. What I am looking for is generating a single pdf file which can contain both model results/summary and the related plots. So at the end I can have something like an auto-generated model report. Can someone suggest how I can do this?,I’ve had good results with the fpdf module. It should do everything you need it to do and the learning curve isn’t bad. You can install it with pip install fpdf.,0.0,False,1,5708
+2018-09-12 06:55:00.637,"Error configuring: unknown option ""-ipadx""","I want to add InPadding to my LabelFrame; I'm using the AppJar GUI.
I try this: +self.app.setLabelFrameInPadding(self.name(""_content""), [20, 20]) +But i get this error: + +appJar:WARNING [Line 12->3063/configureWidget]: Error configuring _content: unknown option ""-ipadx"" + +Any ideas how to fix it?","Because of the way containers are implemented in appJar, padding works slightly differently for labelFrames. +Try calling: app.setLabelFramePadding('name', [20,20])",0.0,False,1,5709 +2018-09-12 13:04:34.793,Two flask Apps same domain IIS,"I want to deploy same flask application as two different instances lets say sandbox instance and testing instance on the same iis server and same machine. having two folders with different configurations (one for testing and one for sandbox) IIS runs whichever is requested first. for example I want to deploy one under www.example.com/test and the other under www.example.com/sandbox. if I requested www.example.com/test first then this app keeps working correctly but whenever I request www.example.com/sandbox it returns 404 and vice versa! +question bottom line: + +how can I make both apps run under the same domain with such URLs? +would using app factory pattern solve this issue? +what blocks both apps from running side by side as I am trying to do? + +thanks a lot in advance",been stuck for a week before asking this question and the neatest way I found was to assign each app a different app pool and now they are working together side by side happily ever after.,1.2,True,1,5710 +2018-09-13 06:51:37.370,Sharing PonyORM's db session across different python module,"I initially started a small python project (Python, Tkinter amd PonyORM) and became larger that is why I decided to divide the code (used to be single file only) to several modules (e.g. main, form1, entity, database). Main acting as the main controller, form1 as an example can contain a tkinter Frame which can be used as an interface where the user can input data, entity contains the db.Enttiy mappings and database for the pony.Database instance along with its connection details. I think problem is that during import, I'm getting this error ""pony.orm.core.ERDiagramError: Cannot define entity 'EmpInfo': database mapping has already been generated"". Can you point me to any existing code how should be done.","Probably you import your modules in a wrong order. Any module which contains entity definitions should be imported before db.generate_mapping() call. +I think you should call db.generate_mapping() right before entering tk.mainloop() when all imports are already done.",1.2,True,1,5711 +2018-09-13 08:55:49.327,Python3 - How do I stop current versions of packages being over-ridden by other packages dependencies,"Building Tensorflow and other such packages from source and especially against GPU's is a fairly long task and often encounters errors, so once built and installed I really dont want to mess with them. +I regularly use virtualenvs, but I am always worried about installing certain packages as sometimes their dependencies will overwrite my own packages I have built from source... +I know I can remove, and then rebuild from my .wheels, but sometimes this is a time consuming task. Is there a way that if I attempt to pip install a package, it first checks against current package versions and doesn't continue before I agree to those changes? 
+Even current packages' dependencies don't show versions with pip show","Is there a way that if I attempt to pip install a package, it first checks against current package versions and doesn't continue before I agree to those changes?
+
+No. But pip install doesn't touch installed dependencies until you explicitly run pip install -U. So don't use the -U/--upgrade option, and upgrade dependencies only when pip fails with unmet dependencies.",0.0,False,1,5712
+2018-09-14 02:32:31.807,how do I connect sys.argv into my float value?,"I must use ""q"" (which is a degree measure) from the command line and then convert ""q"" to radians and have it write out the value of sin(5q) + sin(6q). Considering that I believe I have to use sys.argv's for this, I have no clue where to even begin","You can use the following commands:
+q = sys.argv[1]  # you can pass a decimal value too on the command line
+Now q will be a string, e.g. ""1.345"", so you have to convert it to a float using q = float(q).",0.0,False,1,5713
+2018-09-14 10:30:59.240,Scrapy: Difference between simple spider and the one with ItemLoader,"I've been working on scrapy for 3 months. For extracting selectors I use simple response.css or response.xpath.
+I'm asked to switch to ItemLoaders and use add_xpath, add_css, etc.
+I know how ItemLoaders work and how convenient they are, but can anyone compare these 2 w.r.t. efficiency? Which way is more efficient and why?","Item loaders do exactly the same thing underneath that you do when you don't use them. So for every loader.add_css/add_xpath call there will be a response.css/xpath executed. It won't be any faster, and the little amount of additional work they do won't really make things any slower (especially in comparison to xml parsing and network/io load).",0.0,False,1,5714
+2018-09-15 01:56:10.107,Possible to get a file descriptor for Python's StringIO?,"From a Python script, I want to feed some small string data to a subprocess, but said subprocess non-negotiably accepts only a filename as an argument, which it will open and read. I non-negotiably do not want to write this data to disk - it should reside only in memory.
+My first instinct was to use StringIO, but I realize that StringIO has no fileno(). mmap(-1, ...) also doesn't seem to create a file descriptor. With those off the table, I'm at a loss as to how to do this. Is this even achievable? The fd would be OS-level visible, but (I would expect) only to the process's children.
+tl;dr how to create a private file descriptor to a python string/memory that only a child process can see?
+P.S. This is all on Linux and doesn't have to be portable in any way.","Reifying @user4815162342's comment as an answer:
+The direct way to do this is:
+
+pass /dev/stdin as the file argument to the process;
+use stdin=subprocess.PIPE;
+finally, Popen.communicate() to feed the desired contents",0.6730655149877884,False,1,5715
+2018-09-17 15:44:04.130,how to modify txt file properties with python,"I am trying to make a python program that creates and writes in a txt file.
+The program works, but I want it to tick the ""hidden"" box in the txt file's properties, so that the txt can't be seen without using the python program I made. I have no clue how to do that; please understand I am a beginner in python.",I'm not 100% sure but I don't think you can do this in Python.
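+One Windows-specific thing you could experiment with, though — this is an untested sketch, with 'secret.txt' standing in for your file — is setting the hidden attribute through ctypes:
+import ctypes
+ctypes.windll.kernel32.SetFileAttributesW('secret.txt', 0x02)  # 0x02 is FILE_ATTRIBUTE_HIDDEN
+If that does not work for you,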
I'd suggest finding a simple Visual Basic script and running it from your Python file.,0.0,False,1,5716 +2018-09-18 15:03:23.547,How can I run code for a certain amount of time?,"I want to play a sound (from a wav file) using winsound's winsound.PlaySound function. I know that winsound.Beep allows me to specify the time in milliseconds, but how can I implement that behavior with winsound.PlaySound? +I tried to use the time.sleep function, but that only delays the function, not specifies the amount of time. +Any help would be appreciated.","Create a thread to play the sound, start it. Create a thread that sleeps the right amount of time and has a handle to the first thread. Have the second thread terminate the first thread when the sleep is over.",1.2,True,1,5717 +2018-09-18 16:35:17.860,Do I need two instances of python-flask?,"I am building a web-app. One part of the app calls a function that starts a tweepy StreamListener on certain track. That functions process a tweet and then it writes a json object to a file or mongodb. +On the other hand I need a process that is reading the file or mongodb and paginates the tweet if some property is in it. The thing is that I don't know how to do that second part. Do I need different threads? +What solutions could there be?","You can certainly do it with a thread or spinning up a new process that will perform the pagination. +Alternatively you can look into a task queue service (Redis queue, celery, as examples). Your web-app can add a task to this queue and your other program can listen to this queue and perform the pagination tasks as they come in.",0.0,False,1,5718 +2018-09-19 22:34:46.480,Celery - how to stop running task when using distributed RabbitMQ backend?,"If I am running Celery on (say) a bank of 50 machines all using a distributed RabbitMQ cluster. +If I have a task that is running and I know the task id, how in the world can Celery figure out which machine its running on to terminate it? +Thanks.","I am not sure if you can actually do it, when you spawn a task you will have a worker, somewhere in you 50 boxes, that executes that and you technically have no control on it as it s a separate process and the only thing you can control is either the asyncResult or the amqp message on the queue.",0.0,False,1,5719 +2018-09-19 23:16:35.920,how to run periodic task in high frequency in flask?,"I want my flask APP to pull updates from a local txt file every 200ms, is it possible to do that? +P.S. I've considered BackgroundScheduler() from apschedulerler, but the granularity of is 1s.",Couldn't you just start a loop in a thread that sleeps for 200 ms before the next iteration?,0.2012947653214861,False,1,5720 +2018-09-20 06:14:37.797,How to search for all existing mongodbs for single GET request,"Suppose I have multiple mongodbs like mongodb_1, mongodb_2, mongodb_3 with same kind of data like employee details of different organizations. +When user triggers GET request to get employee details from all the above 3 mongodbs whose designation is ""TechnicalLead"". then first we need to connect to mongodb_1 and search and then disconnect with mongodb_1 and connect to mongodb_2 and search and repeat the same for all dbs. +Can any one suggest how can we achieve above using python EVE Rest api framework. 
+Best Regards, +Narendra","First of all, it is not a recommended way to run multiple instances (especially when the servers might be running at the same time) as it will lead to usage of the same config parameters like for example logpath and pidfilepath which in most cases is not what you want. +Secondly for getting the data from multiple mongodb instances you have to create separate get requests for fetching the data. There are two methods of view for the model that can be used: + +query individual databases for data, then assemble the results for viewing on the screen. +Query a central database that the two other databases continously update.",0.0,False,1,5721 +2018-09-20 17:05:30.047,python asyncronous images download (multiple urls),"I'm studying Python for 4/5 months and this is my third project built from scratch, but im not able to solve this problem on my own. +This script downloads 1 image for each url given. +Im not able to find a solution on how to implement Thread Pool Executor or async in this script. I cannot figure out how to link the url with the image number to the save image part. +I build a dict of all the urls that i need to download but how do I actually save the image with the correct name? +Any other advise? +PS. The urls present at the moment are only fake one. +Synchronous version: + + + import requests + import argparse + import re + import os + import logging + + from bs4 import BeautifulSoup + + + parser = argparse.ArgumentParser() + parser.add_argument(""-n"", ""--num"", help=""Book number"", type=int, required=True) + parser.add_argument(""-p"", dest=r""path_name"", default=r""F:\Users\123"", help=""Save to dir"", ) + args = parser.parse_args() + + + + logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', + level=logging.ERROR) + logger = logging.getLogger(__name__) + + + def get_parser(url_c): + url = f'https://test.net/g/{url_c}/1' + logger.info(f'Main url: {url_c}') + responce = requests.get(url, timeout=5) # timeout will raise an exeption + if responce.status_code == 200: + page = requests.get(url, timeout=5).content + soup = BeautifulSoup(page, 'html.parser') + return soup + else: + responce.raise_for_status() + + + def get_locators(soup): # take get_parser + # Extract first/last page num + first = int(soup.select_one('span.current').string) + logger.info(f'First page: {first}') + last = int(soup.select_one('span.num-pages').string) + 1 + + # Extract img_code and extension + link = soup.find('img', {'class': 'fit-horizontal'}).attrs[""src""] + logger.info(f'Locator code: {link}') + code = re.search('galleries.([0-9]+)\/.\.(\w{3})', link) + book_code = code.group(1) # internal code + extension = code.group(2) # png or jpg + + # extract Dir book name + pattern = re.compile('pretty"":""(.*)""') + found = soup.find('script', text=pattern) + string = pattern.search(found.text).group(1) + dir_name = string.split('""')[0] + logger.info(f'Dir name: {dir_name}') + + logger.info(f'Hidden code: {book_code}') + print(f'Extension: {extension}') + print(f'Tot pages: {last}') + print(f'') + + return {'first_p': first, + 'last_p': last, + 'book_code': book_code, + 'ext': extension, + 'dir': dir_name + } + + + def setup_download_dir(path, dir): # (args.path_name, locator['dir']) + # Make folder if it not exist + filepath = os.path.join(f'{path}\{dir}') + if not os.path.exists(filepath): + try: + os.makedirs(filepath) + print(f'Directory created at: {filepath}') + except OSError as err: + print(f""Can't create {filepath}: {err}"") + return 
filepath + + + def main(locator, filepath): + for image_n in range(locator['first_p'], locator['last_p']): + url = f""https://i.test.net/galleries/{locator['book_code']}/{image_n}.{locator['ext']}"" + logger.info(f'Url Img: {url}') + responce = requests.get(url, timeout=3) + if responce.status_code == 200: + img_data = requests.get(url, timeout=3).content + else: + responce.raise_for_status() # raise exepetion + + with open((os.path.join(filepath, f""{image_n}.{locator['ext']}"")), 'wb') as handler: + handler.write(img_data) # write image + print(f'Img {image_n} - DONE') + + + if __name__ == '__main__': + try: + locator = get_locators(get_parser(args.num)) # args.num ex. 241461 + main(locator, setup_download_dir(args.path_name, locator['dir'])) + except KeyboardInterrupt: + print(f'Program aborted...' + '\n') + + +Urls list: + + + def img_links(locator): + image_url = [] + for num in range(locator['first_p'], locator['last_p']): + url = f""https://i.test.net/galleries/{locator['book_code']}/{num}.{locator['ext']}"" + image_url.append(url) + logger.info(f'Url List: {image_url}') + return image_url","I found the solution in the book fluent python. Here the snippet: + + def download_many(cc_list, base_url, verbose, concur_req): + counter = collections.Counter() + with futures.ThreadPoolExecutor(max_workers=concur_req) as executor: + to_do_map = {} + for cc in sorted(cc_list): + future = executor.submit(download_one, cc, base_url, verbose) + to_do_map[future] = cc + done_iter = futures.as_completed(to_do_map) + if not verbose: + done_iter = tqdm.tqdm(done_iter, total=len(cc_list)) + for future in done_iter: + try: + res = future.result() + except requests.exceptions.HTTPError as exc: + error_msg = 'HTTP {res.status_code} - {res.reason}' + error_msg = error_msg.format(res=exc.response) + except requests.exceptions.ConnectionError as exc: + error_msg = 'Connection error' + else: + error_msg = '' + status = res.status + if error_msg: + status = HTTPStatus.error + counter[status] += 1 + if verbose and error_msg: + cc = to_do_map[future] + print('*** Error for {}: {}'.format(cc, error_msg)) + return counter",1.2,True,1,5722 +2018-09-23 11:46:47.050,How to put a list of arbitrary integers on screen (from lowest to highest) in pygame proportionally?,"Let's say I have a list of 887123, 123, 128821, 9, 233, 9190902. I want to put those strings on screen using pygame (line drawing), and I want to do so proportionally, so that they fit the screen. If the screen is 1280x720, how do I scale the numbers down so that they keep their proportions to each other but fit the screen? +I did try with techniques such as dividing every number by two until they are all smaller than 720, but that is skewed. Is there an algorithm for this sort of mathematical scaling?",I used this algorithm: x = (x / (maximum value)) * (720 - 1),0.3869120172231254,False,1,5723 +2018-09-23 16:36:12.867,Python3.6 and singletons - use case and parallel execution,"I have several unit-tests (only python3.6 and higher) which are importing a helper class to setup some things (eg. pulling some Docker images) on the system before starting the tests. +The class is doing everything while it get instantiate. It needs to stay alive because it holds some information which are evaluated during the runtime and needed for the different tests. +The call of the helper class is very expensive and I wanna speedup my tests the helper class only once. My approach here would be to use a singleton but I was told that in most cases a singleton is not needed. 
Are there other options for me or is a singleton here actually a good solution? +The option should allow executing all tests at all and every test on his own. +Also I would have some theoretical questions. +If I use a singleton here how is python executing this in parallel? Is python waiting for the first instance to be finish or can there be a race condition? And if yes how do I avoid them?","I can only given an answer on the ""are there other options for me"" part of your question... +The use of such a complex setup for unit-tests (pulling docker images etc.) makes me suspicious: +It can mean that your tests are in fact integration tests rather than unit-tests. Which could be perfectly fine if your goal is to find the bugs in the interactions between the involved components or in the interactions between your code and its system environment. (The fact that your setup involves Docker images gives the impression that you intend to test your system-under-test against the system environment.) If this is the case I wish you luck to get the other aspects of your question answered (parallelization of tests, singletons and thread safety). Maybe it makes sense to tag your question ""integration-testing"" rather than ""unit-testing"" then, in order to attract the proper experts. +On the other hand your complex setup could be an indication that your unit-tests are not yet designed properly and/or the system under test is not yet designed to be easily testable with unit-tests: Unit-tests focus on the system-under-test in isolation - isolation from depended-on-components, but also isolation from the specifics of the system environment. For such tests of a properly isolated system-under-test a complex setup using Docker would not be needed. +If the latter is true you could benefit from making yourself familiar with topics like ""mocking"", ""dependency injection"" or ""inversion of control"", which will help you to design your system-under-test and your unit test cases such that they are independent of the system environment. Then, your complex setup would no longer be necessary and the other aspects of your question (singleton, parallelization etc.) may no longer be relevant.",0.0,False,1,5724 +2018-09-24 09:39:59.467,How to increase the error limit in flake8 and pylint VS Code?,"As mentioned above I would like to know how I can increase the no of errors shown in flake8 and pylint. I have installed both and they work fine when I am working with small files. I am currently working with a very large file (>18k lines) and there is no error highlighting done at the bottom part of the file, I believe the current limit is set to 100 and would like to increase it. +If this isn't possible is there any way I can just do linting for my part of the code? I am just adding a function in this large file and would like to monitor the same.","Can use ""python.linting.maxNumberOfProblems"": 2000 to increase the no of problems being displayed but the limit seems to be set to 1001 so more than 1001 problems can't be displayed.",0.0,False,1,5725 +2018-09-24 11:25:27.520,Knowledge graph in python for NLP,how do I build a knowledge graph in python from structured texts? Do I need to know any graph databases? Any resources would be of great help.,"Knowledge Graph (KG) is just a virtual representation and not an actual graph stored as it is. +To store the data you can use any of the present databases like SQL, MongoDB, etc. 
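+For a quick in-memory prototype you could even hold the extracted triples in networkx — a hypothetical sketch with made-up entities:
+import networkx as nx
+kg = nx.DiGraph()
+kg.add_edge('Guido van Rossum', 'Python', relation='created')  # subject -> object, predicate stored on the edge
+print(kg['Guido van Rossum']['Python']['relation'])  # prints 'created'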
But to benefit from the fact that we are storing graphs here, I'd suggest using a graph database such as Neo4j instead.",0.0,False,1,5726
+2018-09-25 08:12:35.473,How to view Opendaylight topology on external webgui,"I'm exploring ODL and mininet and am able to run both and populate the network nodes over ODL, and I can view the topology via the ODL default webgui.
+I'm planning to create my own webgui and to start with a simple topology view. I need advice and guidance on how I can achieve a topology view on my own webgui. I plan to use python and html. Just a simple single-page html and python script. Hopefully someone could lead me the way. Please assist and thank you.","If a web GUI for ODL would provide value for you, please consider working to contribute that upstream. The previous GUI (DLUX) has recently been deprecated because no one was supporting it, although it seems many people were using it.",0.0,False,1,5727
+2018-09-26 04:22:21.250,"Python3, calling super's __init__ from a custom exception","I have created a custom exception in python 3 and the overall code works just fine. But one thing I am not able to wrap my head around is why I need to send my message to the Exception class's __init__(), and how it converts the custom exception into that string message when I try to print the exception, since the code in Exception or even BaseException does not do much.
+I am not quite able to understand why to call super().__init__() from a custom exception.","This is so that your custom exceptions can start off with the same instance attributes as a BaseException object does, including the args attribute, which stores the exception arguments (your message among them) and is needed by certain other methods such as __str__, which allows the exception object to be converted to a string directly. You can skip calling super().__init__ in your subclass's __init__ and instead initialize all the necessary attributes on your own if you want, but then you would not be taking advantage of one of the key benefits of class inheritance. Always call super().__init__ unless you have very specific reasons not to reuse any of the parent class's instance attributes.",0.3869120172231254,False,1,5728
+2018-09-26 21:19:30.580,Interpreter problem (apparently) with a project in PyCharm,"I recently upgraded PyCharm (community version). If it matters, I am running on a Mac OSX machine. After the upgrade, I have one project in which PyCharm cannot find any python modules. It can't find numpy, matplotlib, anything ... I have checked a couple of other projects and they seem to be fine. I noticed that somehow the interpreter for the project in question was not the same as for the others. So I changed it to match the others. But PyCharm still can't find the modules. Any ideas what else I can do?
+More generally, something like this happens every time I upgrade to a new PyCharm version. The fix each time is a little different. Any ideas on how I can prevent this in the first place?
+EDIT: FWIW, I just now tried to create a new dummy project. It has the same problem. I notice that my two problem projects are created with a ""venv"" sub-directory. My ""good"" projects don't have this thing. Is this a clue to what is going on?
+EDIT 2: OK, just realized that when creating a new project, I can select ""New environment"" or ""Existing interpreter"", and I want ""Existing interpreter"". However, I would still like to know how one project that was working fine before is now hosed, and how I can fix it.
Thanks.","Your project is most likely pointing to the wrong interpreter. E.G. Using a virtual environment when you want to use a global one. +You must point PyCharm to the correct interpreter that you want to use. +""File/Settings(Preferences On Mac)/Project: ... /Project Interpreter"" takes you to the settings associated with the interpreters. +This window shows all of the modules within the interpreter. +From here you can click the settings wheel in the top right and configure your interpreters. (add virtual environments and what not) +or you can select an existing interpreter from the drop down to use with your project.",0.2012947653214861,False,2,5729 +2018-09-26 21:19:30.580,Interpreter problem (apparently) with a project in PyCharm,"I recently upgraded PyCharm (community version). If it matters, I am running on a Mac OSX machine. After the upgrade, I have one project in which PyCharm cannot find any python modules. It can't find numpy, matplotlib, anything ... I have checked a couple of other projects and they seem to be fine. I noticed that somehow the interpreter for the project in question was not the same as for the others. So I changed it to match the others. But PyCharm still can't find the modules. Any ideas what else I can do? +More generally, something like this happens every time I upgrade to a new PyCharm version. The fix each time is a little different. Any ideas on how I can prevent this in the first place? +EDIT: FWIW, I just now tried to create a new dummy project. It has the same problem. I notice that my two problem projects are created with a ""venv"" sub-directory. My ""good"" projects don't have this thing. Is this a clue to what is going on? +EDIT 2: OK, just realized that when creating a new project, I can select ""New environment"" or ""Existing interpreter"", and I want ""Existing interpreter"". However, I would still like to know how one project that was working fine before is now hosed, and how I can fix it. Thanks.","It seems, when you are creating a new project, you also opt to create a new virtual environment, which then is created (default) in that venv sub-directory. +But that would only apply to new projects, what is going on with your old projects, changing their project interpreter environment i do not understand. +So what i would say is you have some corrupt settings (e.g. in ~/Library/Preferences/PyCharm2018.2 ), which are copied upon PyCharm upgrade. +You might try newly configure PyCharm by moving away those PyCharm preferences, so you can put them back later. +The Project configuration mainly, special the Project interpreter on the other hand is stored inside $PROJECT_ROOT/.idea and thus should not change.",1.2,True,2,5729 +2018-09-27 04:54:42.700,how can i check all the values of dataframe whether have null values in them without a loop,"if all(data_Window['CI']!=np.nan): +I have used the all() function with if so that if column CI has no NA values, then it will do some operation. But i got syntax error.","This gives you all a columns and how many null values they have. +df = pd.DataFrame({0:[1,2,None,],1:[2,3,None]) +df.isnull().sum()",0.0,False,1,5730 +2018-09-27 09:20:03.150,Choosing best semantics for related variables in an untyped language like Python,"Consider the following situation: you work with audio files and soon there are different contexts of what ""an audio"" actually is in same solution. 
+This on one side is more obvious through typing, though while Python has classes and typing, but it is less explicit in the code like in Java. I think this occurs in any untyped language. +My question is how to have less ambiguous variable names and whether there is something like an official and widely accepted guideline or even a standard like PEP/RFC for that or comparable. +Examples for variables: + +A string type to designate the path/filename of the actual audio file +A file handle for the above to do the I/O +Then, in the package pydub, you deal with the type AudioSegment +While in the package moviepy, you deal with the type AudioFileClip + +Using all the four together, requires in my eyes for a clever naming strategy, but maybe I just oversee something. +Maybe this is a quite exocic example, but if you think of any other media types, this should provide a more broad view angle. Likewise, is a Document a handle, a path or an abstract object?","There is no definitive standard/rfc to name your variables. One option is to prefix/suffix your variables with a (possibly short form) type. For example, you can name a variable as foo_Foo where variable foo_Foo is of type Foo.",0.0,False,1,5731 +2018-09-27 14:44:45.557,Holoviews - network graph - change edge color,I am using holoviews and bokeh with python 3 to create an interactive network graph fro mNetworkx. I can't manage to set the edge color to blank. It seems that the edge_color option does not exist. Do you have any idea how I could do that?,"Problem solved, the option to change edges color is edge_line_color and not edge_color.",0.3869120172231254,False,1,5732 +2018-09-27 15:09:52.837,Make Pipenv create the virtualenv in the same folder,"I want Pipenv to make virtual environment in the same folder with my project (Django). +I searched and found the PIPENV_VENV_IN_PROJECT option but I don't know where and how to use this.","This maybe help someone else.. I find another easy way to solve this! +Just make empty folder inside your project and name it .venv +and pipenv will use this folder.",0.9999999998319656,False,2,5733 +2018-09-27 15:09:52.837,Make Pipenv create the virtualenv in the same folder,"I want Pipenv to make virtual environment in the same folder with my project (Django). +I searched and found the PIPENV_VENV_IN_PROJECT option but I don't know where and how to use this.","For posterity's sake, if you find pipenv is not creating a virtual environment in the proper location, you may have an erroneous Pipfile somewhere, confusing the pipenv shell call - in which case I would delete it form path locations that are not explicitly linked to a repository.",0.1618299653758019,False,2,5733 +2018-09-27 15:26:56.820,Force tkinter listbox to highlight item when selected before task is started,"I have a tkinter listbox, when I select a item it performs a few actions then returns the results, while that is happening the item I selected does not show as selected, is there a way to force it to show selected immediately so it's obvious to the user they selected the correct one while waiting on the returned results? I'm using python 3.4 and I'm on a windows 7 machine.",The item does show as selected right away because the time consuming actions are executed before updating the GUI. 
You can force the GUI to update before executing the actions by using window.update_idletasks().,0.0,False,1,5734 +2018-09-27 20:02:47.580,In Python DataFrame how to find out number of rows that have valid values of columns,"I want to find the number of rows that have certain values such as None or """" or NaN (basically empty values) in all columns of a DataFrame object. How can I do this?","Use df.isnull().sum() to get number of rows with None and NaN value. +Use df.eq(value).sum() for any kind of values including empty string """".",0.2655860252697744,False,1,5735 +2018-09-28 10:00:04.037,.get + dict variable,"I have a charge object with information in charge['metadata']['distinct_id']. There could be the case that it's not set, therefore I tried it that way which doesn't work charge.get(['metadata']['distinct_id'], None) +Do you know how to do that the right way?","You don't say what the error is, but, two things possibly wrong + +it should be charge.get('metadata', None) +you can't directly do it on two consecutive levels. If the metadata key returns None, you can't go on and ask for the distinct_id key. You could return an empty dict and apply get to that, eg something like charge.get('metadata', {}).get('distinct_id', None)",1.2,True,2,5736 +2018-09-28 10:00:04.037,.get + dict variable,"I have a charge object with information in charge['metadata']['distinct_id']. There could be the case that it's not set, therefore I tried it that way which doesn't work charge.get(['metadata']['distinct_id'], None) +Do you know how to do that the right way?","As @blue_note mentioned you could not user two consecutive levels. However your can try something like +charge.get('metadata', {}).get('distinct_id') +here, you tried to get 'metadata' from charge and if it does not found then it will consider blank dictionary and try to get 'distinct_id' from there (technically it does not exists). In this scenario, you need not to worry about if metadata exists or not. If it exists then it will check for distinct_id from metadata or else it throws None. +Hope this will solve your problem. +Cheers..!",0.1352210990936997,False,2,5736 +2018-09-28 16:43:57.900,PyMongo how to get the last item in the collection?,"In the MongoDB console, I know that you can use $ last and $ natural. In PyMongo, I could not use it, maybe I was doing something wrong?","Another way is: +db.collection.find().limit(1).sort([('$natural',-1)]) +This seemed to work best for me.",0.2012947653214861,False,1,5737 +2018-09-29 12:08:21.843,how can I use Transfer Learning for LSTM?,I intent to implement image captioning. Would it be possible to transfer learning for LSTM? I have used pretrained VGG16(transfer learning) to Extract features as input of the LSTM.,"As I have discovered, we can't use Transfer learning on the LSTM weights. I think the causation is infra-structure of LSTM networks.",1.2,True,1,5738 +2018-09-29 19:52:13.553,Is there any way to retrieve file name using Python?,"In a Linux directory, I have several numbered files, such as ""day1"" and ""day2"". My goal is to write a code that retrieves the number from the files and add 1 to the file that has the biggest number and create a new file. So, for example, if there are files, 'day1', 'day2' and 'day3', the code should read the list of files and add 'day4'. To do so, at least I need to know how to retrieve the numbers on the file name.",Get all files with the os module/package (don't have the exact command handy) and then use regex(package) to get the numbers. 
If you don't want to look into regex you could remove the letters from your string with replace() and convert that string with int().,0.0,False,1,5739 +2018-09-30 05:33:24.990,"python 3, how print function changes output?","The following were what I did in python shell. Can anyone explain the difference? + + + +datetime.datetime.now() + datetime.datetime(2018, 9, 29, 21, 34, 10, 847635) +print(datetime.datetime.now()) + 2018-09-29 21:34:26.900063","The first is the result of calling repr on the datetime value, the second is the result of calling str on a datetime. +The Python shell calls repr on values other than None before printing them, while print tries str before calling repr (if str fails). +This is not dependent on the Python version.",1.2,True,1,5740 +2018-09-30 17:25:43.813,Python's cmd.Cmd case insensitive commands,"I am using python's CLI module which takes any do_* method and sets it as a command, so a do_show() method will be executed if the user type ""show"". +How can I execute the do_show() method using any variation of capitalization from user input e.g. SHOW, Show, sHoW and so on without giving a Command Not Found error? +I think the answer would be something to do with overriding the Cmd class and forcing it to take the user's input.lower() but idk how to do that :/",You should override onecmd to achieve desired functionality.,1.2,True,1,5741 +2018-10-01 07:38:38.010,Possible ways to embed python matplotlib into my presentation interactively,"I need to present my data in various graphs. Usually what I do is to take a screenshot of my graph (I almost exclusively make them with matplotlib) and paste it into my PowerPoint. +Unfortunately my direct superior seems not to be happy with the way I present them. Sometimes he wants certain things in log scale and sometimes he dislike my color palette. The data is all there, but because its an image there's no way I can change that in the meeting. +My superior seems to really care about those things and spend quite a lot of time telling me how to make plots in every single meeting. He (usually) will not comment on my data before I make a plot the way he wants. +That's where my question becomes relevant. Right now what I have in my mind is to have an interactive canvas embedded in my PowerPoint such that I can change the range of the axis, color of my data point, etc in real time. I have been searching online for such a thing but it comes out empty. I wonder if that could be done and how can it be done? +For some simple graphs Excel plot may work, but usually I have to present things in 1D or 2D histograms/density plots with millions of entries. Sometimes I have to fit points with complicated mathematical formulas and that's something Excel is incapable of doing and I must use scipy and pandas. +The closest thing to this I found online is rise with jupyter which convert a jupyter notebook into a slide show. I think that is a good start which allows me to run python code in real time inside the presentation, but I would like to use PowerPoint related solutions if possible, mostly because I am familiar with how PowerPoint works and I still find certain PowerPoint features useful. +Thank you for all your help. While I do prefer PowerPoint, any other products that allows me to modify plots in my presentation in real time or alternatives of rise are welcomed.","When putting a picture in PowerPoint you can decide whether you want to embed it or link to it. 
If you decide to link to the picture, you would be free to change it outside of powerpoint. This opens up the possibility for the following workflow: +Next to your presentation you have a Python IDE or Juypter notebook open with the scripts that generate the figures. They all have a savefig command in them to save to exactly the location on disc from where you link the images in PowerPoint. If you need to change the figure, you make the changes in the python code, run the script (or cell) and switch back to PowerPoint where the newly created image is updated. +Note that I would not recommend putting too much effort into finding a better solution to this, but rather spend the time thinking about good visual reprentations of the data, due to the following reasons: 1. If your instrutor's demands are completely unreasonable (""I like blue better than green, so you need to use blue."") than it's not worth spending effort into satisfying their demands at all. 2. If your instrutor's demands are based on the fact that the current reprentation does not allow to interprete the data correctly, this can be prevented by spending more thoughts on good plots prior to the presentation. This is a learning process, which I guess your instructor wants you to internalize. After all, you won't get a degree in computer science for writing a PowerPoint backend to matplotlib, but rather for being able to present your research in a way suited for your subject.",1.2,True,1,5742 +2018-10-01 18:01:04.283,"""No package 'coinhsl' found"": IPOPT compiles and passes test, but pyomo cannot find it?","I don't know if the problem is between me and Pyomo.DAE or between me and IPOPT. I am doing this all from the command-line interface in Bash on Ubuntu on Windows (WSL). When I run: + +JAMPchip@DESKTOP-BOB968S:~/examples/dae$ python3 run_disease.py + +I receive the following output: + +WARNING: Could not locate the 'ipopt' executable, which is required + for solver + ipopt Traceback (most recent call last): File ""run_disease.py"", line 15, in + results = solver.solve(instance,tee=True) File ""/usr/lib/python3.6/site-packages/pyomo/opt/base/solvers.py"", line + 541, in solve + self.available(exception_flag=True) File ""/usr/lib/python3.6/site-packages/pyomo/opt/solver/shellcmd.py"", line + 122, in available + raise ApplicationError(msg % self.name) pyutilib.common._exceptions.ApplicationError: No executable found for + solver 'ipopt' + +When I run ""make test"" in the IPOPT build folder, I reecieved: + +Testing AMPL Solver Executable... + Test passed! Testing C++ Example... + Test passed! Testing C Example... + Test passed! Testing Fortran Example... + Test passed! + +But my one major concern is that in the ""configure"" output was the follwing: + +checking for COIN-OR package HSL... not given: No package 'coinhsl' + found + +There were also a few warning when I ran ""make"". I am not at all sure where the issue lies. How do I make python3 find IPOPT, and how do I tell if I have IPOPT on the system for pyomo.dae to find? I am pretty confident that I have ""coibhsl"" in the HSL folder, so how do I make sure that it is found by IPOPT?","As sascha states, you need to make sure that the directory containing your IPOPT executable (likely the build folder) is in your system PATH. That way, if you were to open a terminal and call ipopt from an arbitrary directory, it would be detected as a valid command. 
This is distinct from being able to call make test from within the IPOPT build folder.",0.0,False,1,5743 +2018-10-02 13:17:45.260,how to disable printscreen key on windows using python,"Is there any way to disable the print screen key when running a python application? +Maybe editing the windows registry is the way? +Thanks!","printscreen is OS Functionality. +Their is No ASCII code for PrintScreen. +Even their are many ways to take PrintScreen. + +Thus, You can Disable keyboard but its difficult to stop user from taking PrintScreen.",0.0,False,1,5744 +2018-10-04 09:28:55.060,How does scrapy behave when enough resources are not available,"I am running multiple scrapers using the command line which is an automated process. +Python : 2.7.12 +Scrapy : 1.4.0 +OS : Ubuntu 16.04.4 LTS +I want to know how scrapy handles the case when + +There is not enough memory/cpu bandwidth to start a scraper. +There is not enough memory/cpu bandwidth during a scraper run. + +I have gone through the documentation but couldn't find anything. +Anyone answering this, you don't have to know the right answer, if you can point me to the general direction of any resource you know which would be helpful, that would also be appreciated","The operating system kills any process that tries to access more memory than the limit. +Applies to python programs too. and scrapy is no different. +More often than not, bandwidth is the bottleneck in scraping / crawling applications. +Memory would only be a bottleneck if there is a serious memory leak in your application. +Your application would just be very slow if CPU is being shared by many process on the same machine.",1.2,True,1,5745 +2018-10-04 17:55:05.990,how to change raspberry pi ip in flask web service,"I have a raspberry pi 3b+ and i'm showing ip camera stream using the Opencv in python. +My default ip in rasppberry is 169.254.210.x range and I have to put the camera in the same range. +How can i change my raspberry ip? +Suppose if I run the program on a web service such as a flask, can i change the raspberry pi server ip every time?","You can statically change your ip of raspberry pi by editing /etc/network/interfaces +Try editing a line of the file which contains address.",0.0,False,1,5746 +2018-10-04 19:48:49.993,"""No module named 'docx'"" error but ""requirement already satisfied"" when I try to install","From what I've read, it sounds like the issue might be that the module isn't in the same directory as my script. Is that the case? If so, how do I find the module and move it to the correct location? +Edit +In case it's relevant - I installed docx using easy_install, not pip.","Please install python-docx. +Then you import docx (not python-docx)",0.0,False,2,5747 +2018-10-04 19:48:49.993,"""No module named 'docx'"" error but ""requirement already satisfied"" when I try to install","From what I've read, it sounds like the issue might be that the module isn't in the same directory as my script. Is that the case? If so, how do I find the module and move it to the correct location? +Edit +In case it's relevant - I installed docx using easy_install, not pip.","pip show docx +This will show you where it is installed. However, if you're using python3 then + pip install python-docx +might be the one you need.",0.0,False,2,5747 +2018-10-05 16:36:21.090,How can I see what packages were installed using `sudo pip install`?,"I know that installing python packages using sudo pip install is bad a security risk. 
Unfortunately, I found this out after installing quite a few packages using sudo. +Is there a way to find out what python packages I installed using sudo pip install? The end goal being uninstallment and correctly re-installing them within a virtual environment. +I tried pip list to get information about the packages, but it only gave me their version. pip show gave me more information about an individual package such as where it is installed, but I don't know how to make use of that information.",try the following command: pip freeze,0.0,False,2,5748 +2018-10-05 16:36:21.090,How can I see what packages were installed using `sudo pip install`?,"I know that installing python packages using sudo pip install is bad a security risk. Unfortunately, I found this out after installing quite a few packages using sudo. +Is there a way to find out what python packages I installed using sudo pip install? The end goal being uninstallment and correctly re-installing them within a virtual environment. +I tried pip list to get information about the packages, but it only gave me their version. pip show gave me more information about an individual package such as where it is installed, but I don't know how to make use of that information.","any modules you installed with sudo will be owned by root, so you can open your shell/terminal, cd to site-packages directory & check the directories owner with ls -la, then any that has root in the owner column is the one you want to uninstall.",1.2,True,2,5748 +2018-10-06 20:14:32.777,Is it possible to change the loss function dynamically during training?,"I am working on a machine learning project and I am wondering whether it is possible to change the loss function while the network is training. I'm not sure how to do it exactly in code. +For example, start training with cross entropy loss and then halfway through training, switch to 0-1 loss.",You have to implement your own algorithm. This is mostly possible with Tensorflow.,0.0,False,1,5749 +2018-10-08 17:02:32.603,Keras LSTM Input Dimension understanding each other,"but I have been trying to play around with it for awhile. I've seen a lot of guides on how Keras is used to build LSTM models and how people feed in the inputs and get expected outputs. But what I have never seen yet is, for example stock data, how we can make the LSTM model understand patterns between different dimensions, say close price is much higher than normal because volume is low. +Point of this is that I want to do a test with stock prediction, but make it so that each dimensions are not reliant on previous time steps, but also reliant on other dimensions it haves as well. +Sorry if I am not asking the question correctly, please ask more questions if I am not explaining it clearly.","First: Regressors will replicate if you input a feature that gives some direct intuition about the predicted input might be to secure the error is minimized, rather than trying to actually predict it. Try to focus on binary classification or multiclass classification, whether the closing price go up/down or how much. +Second: Always engineer the raw features to give more explicit patterns to the ML algorithm. Think on inputs as Volume(t) - Volume(t-1), close(t)^2 - close(t-1)^2, technical indicators(RSI, CCI, OBV etc.) Create your own features. You can use the pyti library for technical indicators.",0.0,False,1,5750 +2018-10-09 06:31:10.137,SoftLayer API: order a 128 subnet,"We are trying to order a 128 subnet. 
But looks like it doesn't work, get an error saying Invalid combination specified for ordering a subnet. The same code works to order a 64 subnet. Any thoughts how to order a 128 subnet? + +network_mgr = SoftLayer.managers.network.NetworkManager(client) +network_mgr.add_subnet(‘private’, 128, vlan_id, test_order=True) + + +Traceback (most recent call last): + File ""subnet.py"", line 11, in + result = nwmgr.add_subnet('private', 128, vlan_id, test_order=True) + File ""/usr/local/lib/python2.7/site-packages/SoftLayer/managers/network.py"", line 154, in add_subnet + raise TypeError('Invalid combination specified for ordering a' +TypeError: Invalid combination specified for ordering a subnet.","Currently it does not seem possible to add a 128-IP subnet to the order; the package used by the manager to order subnets only allows adding subnets with a capacity of 64, 32, 16, 8 or 4. +This is because the package does not contain any item for a subnet of 128 IP addresses, which is why you are getting the exception you provided. +You may also verify this through the Portal UI; if you can see a 128 IP address option through the UI in your account, please update this forum with a screenshot.",0.0,False,1,5751 +2018-10-09 10:19:24.127,Add Python to the Windows path,"If I forget to add Python to the path while installing it, how can I add it to my Windows path? +Without adding it to the path I am unable to use it. Also, what if I want to put python 3 as the default?","Edit Path in Environment Variables. +Add Python's path to the end of the list (these are separated by ';'). +For example: + +C:\Users\AppData\Local\Programs\Python\Python36; +C:\Users\AppData\Local\Programs\Python\Python36\Scripts + +And if you want to make it the default, +you have to edit the system environment variables. +Edit the following entry in Path: + +C:\Windows;C:\Windows\System32;C:\Python27 + +Now Python 3 will become the default Python on your system. +You can check it with python --version",0.3869120172231254,False,1,5752 +2018-10-09 11:15:40.860,"Deploying python with docker, images too big","We've built a large python repo that uses lots of libraries (numpy, scipy, tensor flow, ...) And have managed these dependencies through a conda environment. Basically we have lots of developers contributing and anytime someone needs a new library for something they are working on they 'conda install' it. +Fast forward to today and now we need to deploy some applications that use our repo. We are deploying using docker, but are finding that these images are really large and causing some issues, e.g. 10+ GB. However each individual application only uses a subset of all the dependencies in the environment.yml. +Is there some easy strategy for dealing with this problem? In a sense I need to know the dependencies for each application, but I'm not sure how to do this in an automated way. +Any help here would be great. I'm new to this whole AWS, Docker, and python deployment thing... We're really a bunch of engineers and scientists who need to scale up our software. We have something that works, it just seems like there has to be a better way.","First see if there are easy wins to shrink the image, like using Alpine Linux and being very careful about what gets installed with the OS package manager, and ensuring you only allow installing dependencies or recommended items when truly required, and that you clean up and delete artifacts like package lists, big things you may not need like Java, etc. 
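+As a rough illustration of that first point, a slim base plus a cache-free install is often all it takes. This is only a sketch; the requirements file and entrypoint are placeholder names:
+FROM python:3.6-slim
+COPY requirements.txt .
+RUN pip install --no-cache-dir -r requirements.txt
+COPY . /app
+CMD [""python"", ""/app/main.py""]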
+The base Anaconda/Ubuntu image is ~ 3.5GB in size, so it's not crazy that with a lot of extra installations of heavy third-party packages, you could get up to 10GB. In production image processing applications, I routinely worked with Docker images in the range of 3GB to 6GB, and those sizes were after we had heavily optimized the container. +To your question about splitting dependencies, you should provide each different application with its own package definition, basically a setup.py script and some other details, including dependencies listed in some mix of requirements.txt for pip and/or environment.yaml for conda. +If you have Project A in some folder / repo and Project B in another, you want people to easily be able to do something like pip install or conda env create -f ProjectB_environment.yml or something, and voila, that application is installed. +Then when you deploy a specific application, have some CI tool like Jenkins build the container for that application using a FROM line to start from your thin Alpine / whatever container, and only perform conda install or pip install for the dependency file for that project, and not all the others. +This also has the benefit that multiple different projects can declare different version dependencies even among the same set of libraries. Maybe Project A is ready to upgrade to the latest and greatest pandas version, but Project B needs some refactoring before the team wants to test that upgrade. This way, when CI builds the container for Project B, it will have a Python dependency file with one set of versions, while in Project A's folder or repo of source code, it might have something different.",1.2,True,1,5753 +2018-10-09 15:27:34.223,Text classification by pattern,"Could you recomend me best way how to do it: i have a list phrases, for example [""free flower delivery"",""flower delivery Moscow"",""color + home delivery"",""flower delivery + delivery"",""order flowers + with delivery"",""color delivery""] and pattern - ""flower delivery"". I need to get list with phrases as close as possible to pattern. +Could you give some advice to how to do it?","Answer given by nflacco is correct.. In addition to that, have you tried edit distance? Try fuzzywuzzy (pip install fuzzywuzzy).. it uses Edit distance to give you a score, how near two sentences are",0.2012947653214861,False,1,5754 +2018-10-10 10:39:12.207,TensorFlow: Correct way of using steps in Stochastic Gradient Descent,"I am currently using TensorFlow tutorial's first_steps_with_tensor_flow.ipynb notebook to learn TF for implementing ML models. In the notebook, they have used Stochastic Gradient Descent (SGD) to optimize the loss function. Below is the snippet of the my_input_function: +def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None): +Here, it can be seen that the batch_size is 1. The notebook uses a housing data set containing 17000 labeled examples for training. This means for SGD, I will be having 17000 batches. +LRmodel = linear_regressor.train(input_fn = lambda:my_input_fn(my_feature, + targets), steps=100) +I have three questions - + +Why is steps=100 in linear_regressor.train method above? Since we have 17000 batches and steps in ML means the count for evaluating one batch, in linear_regressor.train method steps = 17000 should be initialized, right? +Is number of batches equal to the number of steps/iterations in ML? 
+With my 17000 examples, if I keep my batch_size=100, steps=500, and num_epochs=5, what does this initialization mean and how does it correlate to 170 batches?","A step is meant literally: it means you refresh the parameters once per batch; so for linear_regressor.train with steps=100, it will train 100 times with this batch_size of 1. +An epoch means one refresh over the whole data set, which is 17,000 examples in your case.",-0.3869120172231254,False,1,5755 +2018-10-11 15:24:17.267,Writing unit tests in Python,"I have a task in which i have a csv file having some sample data. The task is to convert the data inside the csv file into other formats like JSON, HTML, YAML etc after applying some data validation rules. +Now i am also supposed to write some unit tests for this in pytest or the unittest module in Python. +My question is how do i actually write the unit tests for this since i am converting them to different JSON/HTML files? Should i prepare some sample files and then do a comparison with them in my unit tests? +I think only the data validation part in the task can be tested using unittest and not the creation of files in different formats, right? +Any ideas would be immensely helpful. +Thanks in advance.","You should do functional tests, testing the whole pipeline from a csv file to the end result, but unit tests are about checking that individual steps work. +So for instance, can you read a csv file properly? Does it fail as expected when you don't provide a csv file? Are you able to check each validation unit? Are they failing when they should? Are they passing valid data? +And of course, the result must be tested as well. Starting from a known internal representation, is the resulting JSON valid? Does it contain all the required data? Same for YAML and HTML. You should not test the formatting, but really what was output and whether it's correct. +You should always test that valid data passes and that incorrect data doesn't, at each step of your workflow.",1.2,True,1,5756 +2018-10-12 12:28:16.983,How to get filtered rowCount in a QSortFilterProxyModel,"I use a QSortFilterProxyModel to filter a QSqlTableModel's data, and want to get the filtered rowCount. +But when I call the QSortFilterProxyModel.rowCount method, the QSqlTableModel's rowCount was returned. +So how can I get the filtered rowcount?",After setting the QSortFilterProxyModel filter you should call proxymodel.rowCount() on the proxy model itself.,0.0,False,1,5757 +2018-10-13 10:45:57.270,python 3.7 setting environment variable path,"I installed Anaconda 3 and wanted to execute python from the shell. It returned that it's either written wrong or does not exist. Apparently, I have to add a path to the environment variable. +Can someone tell me how to do this? +Environment: Windows 10, 64 bit and python 3.7 +Ps: I know the web is full of that but I am notoriously afraid to make a mistake. And I did not find an exact entry for my environment. Thanks in advance. +Best Daniel","Windows: + +Search for Edit the system environment variables. +In the Advanced tab, click Environment Variables. +In System variables, select PATH and click Edit. Now click New and add your path. +Click Apply and close. + +Now, check it in the command prompt.",1.2,True,1,5758 +2018-10-14 02:29:15.473,"Given two lists of ints, how can we find the closest number in one list from the other one?","Given I have two different lists with ints. +a = [1, 4, 11, 20, 25] and b = [3, 10, 20] +I want to return a list of length len(b) that stores the closest number in a for each int in b. +So, this should return [4, 11, 20]. 
+I can do this in brute force, but what is a more efficient way to do this? +EDIT: It would be great if I could do this with the standard library only, if possible.","Use binary search, assuming the lists are in order. +The brute force in this case is only O(n), so I wouldn't worry about it, just use brute force. +EDIT: +Yeah, it is O(len(a)*len(b)) (roughly O(n^2)); sorry, stupid mistake. +Note that sorting a (using timsort) costs O(n log n) once, and each of the len(b) lookups is then an O(log n) binary search, giving O((len(a) + len(b)) * log len(a)) in total; that is faster than the O(n^2) brute force, not slower.",0.0,False,1,5759 +2018-10-14 18:17:29.503,Python tasks and DAGs with different conda environments,"Say that most of my DAGs and tasks in AirFlow are supposed to run Python code on the same machine as the AirFlow server. +Can I have different DAGs use different conda environments? If so, how should I do it? For example, can I use the Python Operator for that? Or would that restrict me to using the same conda environment that I used to install AirFlow? +More generally, where/how should I ideally activate the desired conda environment for each DAG or task?","The Python that is running the Airflow worker code is the one whose environment will be used to execute the code. +What you can do is have separate named queues for separate execution environments for different workers, so that only a specific machine or group of machines will execute a certain DAG.",1.2,True,1,5760 +2018-10-14 18:54:30.970,Is it possible to make my own encryption when sending data through sockets?,For example in python if I’m sending data through sockets could I make my own encryption algorithm to encrypt that data? Would it be unbreakable since only I know how it works?,"Yes you can. Would it be unbreakable? No. This is called security through obscurity. You're relying on the fact that nobody knows how it works. But can you really rely on that? +Someone is going to receive the data, and they'll have to decrypt it. The code must run on their machine for that to happen. If they have the code, they know how it works. Well, at least anyone with a lot of spare time and nothing else to do can easily reverse engineer it, and there goes your obscurity. +Is it feasible to make your own algorithm? Sure. A bit of XOR here, a bit of shuffling there... eventually you'll have an encryption algorithm. It probably wouldn't be a good one but it would do the job, at least until someone tries to break it, then it probably wouldn't last a day. +Does Python care? Do sockets care? No. You can do whatever you want with the data. It's just bits after all, what they mean is up to you. +Are you a cryptographer? No, otherwise you wouldn't be here asking this. So should you do it? No.",1.2,True,1,5761 +2018-10-14 19:10:42.147,imshow() with desired framerate with opencv,"Is there any workaround how to use cv2.imshow() with a specific framerate? I'm capturing the video via VideoCapture and doing some easy postprocessing on the frames (both in a separate thread, so it loads all frames into a Queue and the main thread isn't slowed by the computation). I tried to fix the framerate by calculating the time used for ""reading"" the image from the queue and then subtracting that value from the number of milliseconds available for one frame: +if I have an input video with 50FPS and i want to play it back in real-time i do 1000/50 => 20ms per frame. +And then wait that time using cv2.WaitKey() +But still I get some laggy output. 
Which is slower then the source video","I don't believe there is such a function in opencv but maybe you could improve your method by adding a dynamic wait time using timers? timeit.default_timer() +calculate the time taken to process and subtract that from the expected framerate and maybe add a few ms buffer. +eg cv2.WaitKey((1000/50) - (time processing finished - time read started) - 10) +or you could have a more rigid timing eg script start time + frame# * 20ms - time processing finished +I haven't tried this personally so im not sure if it will actually work, also might be worth having a check so the number isnt below 1",1.2,True,1,5762 +2018-10-16 21:43:21.673,"Azure Machine Learning Studio execute python script, Theano unable to execute optimized C-implementations (for both CPU and GPU)","I am execute a python script in Azure machine learning studio. I am including other python scripts and python library, Theano. I can see the Theano get loaded and I got the proper result after script executed. But I saw the error message: + +WARNING (theano.configdefaults): g++ not detected ! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string. + +Did anyone know how to solve this problem? Thanks!","I don't think you can fix that - the Python script environment in Azure ML Studio is rather locked down, you can't really configure it (except for choosing from a small selection of Anaconda/Python versions). +You might be better off using the new Azure ML service, which allows you considerably more configuration options (including using GPUs and the like).",1.2,True,1,5763 +2018-10-17 14:07:06.357,how to use pip install a package if there are two same version of python on windows,"I have two same versions of python on windows. Both are 3.6.4. I installed one of them, and the other one comes with Anaconda. +My question is how do I use pip to install a package for one of them? It looks like the common method will not work since the two python versions are the same.","pip points to only one installation because pip is a script from one python. +If you have one Python in your PATH, then it's that python and that pip that will be used.",0.2012947653214861,False,2,5764 +2018-10-17 14:07:06.357,how to use pip install a package if there are two same version of python on windows,"I have two same versions of python on windows. Both are 3.6.4. I installed one of them, and the other one comes with Anaconda. +My question is how do I use pip to install a package for one of them? It looks like the common method will not work since the two python versions are the same.","Use virtualenv, conda environment or pipenv, it will help with managing packages for different projects.",0.0,False,2,5764 +2018-10-18 03:14:15.287,How can i make computer read a python file instead of py?,"I have a problem with installing numpy with python 3.6 and i have windows 10 64 bit +Python 3.6.6 +But when i typed python on cmd this appears +Python is not recognized as an internal or external command +I typed py it solves problem but how can i install numpy +I tried to type commant set path =c:/python36 +And copy paste the actual path on cmd but it isnt work +I tried also to edit the enviromnent path through type a ; and c:/python 36 and restart but it isnt help this +I used pip install nupy and download pip but it isnt work",Try pip3 install numpy. 
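+(If pip3 is not recognized either you can run the same install through the py launcher that already works for you: py -3 -m pip install numpy. The -m pip form always targets the interpreter that runs it.)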
To install Python 3 packages you should use pip3,0.0,False,2,5765 +2018-10-18 03:14:15.287,How can i make computer read a python file instead of py?,"I have a problem with installing numpy with python 3.6 and i have windows 10 64 bit +Python 3.6.6 +But when i typed python on cmd this appears +Python is not recognized as an internal or external command +I typed py it solves problem but how can i install numpy +I tried to type commant set path =c:/python36 +And copy paste the actual path on cmd but it isnt work +I tried also to edit the enviromnent path through type a ; and c:/python 36 and restart but it isnt help this +I used pip install nupy and download pip but it isnt work","On Windows, the py command should be able to launch any Python version you have installed. Each Python installation has its own pip. To be sure you get the right one, use py -3.6 -m pip instead of just pip. + +You can use where pip and where pip3 to see which Python's pip they mean. Windows just finds the first one on your path. + +If you activate a virtualenv, then you should get the right one for the virtualenv while the virtualenv is active.",0.0,False,2,5765 +2018-10-18 09:53:46.373,Is it possible to manipulate data from csv without the need for producing a new csv file?,"I know how to import and manipulate data from csv, but I always need to save to xlsx or so to see the changes. Is there a way to see 'live changes' as if I am already using Excel? +PS using pandas +Thanks!",This is not possible using pandas. This lib creates a copy of your .csv / .xls file and stores it in RAM. So all changes are applied to the copy in memory and not to the file on disk.,1.2,True,1,5766 +2018-10-19 09:04:38.457,how to remove zeros after decimal from string remove all zero after dot,"I have a data frame with an object column lets say col1, which has values like: +1.00, +1, +0.50, +1.54 +I want to have the output like the below: +1, +1, +0.5, +1.54 +basically, remove zeros after decimal values if it does not have any digit after zero. Please note that i need the answer for a dataframe. pd.set_option and round don't work for me.","A quick-and-dirty solution is to use ""%g"" % value, which will convert floats 1.5 to 1.5 but 1.0 to 1 and so on. The negative side-effect is that large numbers will be represented in scientific notation like 4.44e+07.",0.0,False,1,5767 +2018-10-19 10:34:41.947,Call Python functions from C# in Visual Studio Python support VS 2017,"This is related to new features Visual Studio has introduced - Python support, Machine Learning projects to support. +I have installed the support and found that I can create a python project and can run it. However, I could not find how to call a python function from another C# file. +Example, I created a classifier.py from the given project samples; now I want to run the classifier and get results from another C# class. +If there is no such portability, then how is it different from creating a C# Process class object and running the Python.exe with our py file as a parameter.","As per the comments, Python support has come to Visual Studio. Visual Studio supports running and debugging Python scripts. +However, calling a Python function from a C# function and vice versa is not supported yet. Closing the thread. 
Thanks for suggestions.",1.2,True,1,5768 +2018-10-19 10:55:44.210,Running Jenkinsfile with multiple Python versions,"I have a multibranch pipeline set up in Jenkins that runs a Jenkinsfile, which uses pytest for testing scripts, and outputs the results using Cobertura plug-in and checks code quality with Pylint and Warnings plug-in. +I would like to test the code with Python 2 and Python 3 using virtualenv, but I do not know how to perform this in the Jenkinsfile, and Shining Panda plug-in will not work for multibranch pipelines (as far as I know). Any help would be appreciated.","You can do it even using vanilla Jenkins (without any plugins). 'Biggest' problem will be with proper parametrization. But let's start from the beginning. +2 versions of Python +When you install 2 versions of python on a single machine you will have 2 different exec files. For python2 you will have python and for python3 you will have python3. Even when you create virtualenv (use venv) you will have both of them. So you are able to run unittests agains both versions of python. It's just a matter of executing proper command from batch/bash script. +Jenkins +There are many ways of performing it: + +you can prepare separate jobs for both python 2 and 3 versions of tests and run them from jenkins file +you can define the whole pipeline in a single jenkins file where each python test is a different stage (they can be run one after another or concurrently)",0.3869120172231254,False,1,5769 +2018-10-20 01:58:46.053,How to find redundant paths (subpaths) in the trajectory of a moving object?,"I need to track a moving deformable object in a video (but only 2D space). How do I find the paths (subpaths) revisited by the object in the span of its whole trajectory? For instance, if the object traced a path, p0-p1-p2-...-p10, I want to find the number of cases the object traced either p0-...-p10 or a sub-path like p3-p4-p5. Here, p0,p1,...,p10 represent object positions (in (x,y) pixel coordinates at the respective instants). Also, how do I know at which frame(s) these paths (subpaths) are being revisited?","I would first create a detection procedure that outputs a list of points visited along with their video frame number. Then use list exploration functions to know how many redundant suites are found and where. +As you see I don't write your code. If you need anymore advise please ask!",0.0,False,1,5770 +2018-10-20 13:20:10.283,Python - How to run script continuously to look for files in Windows directory,"I got a requirement to parse the message files in .txt format real time as and when they arrive in incoming windows directory. The directory is in my local Windows Virtual Machine something like D:/MessageFiles/ +I wrote a Python script to parse the message files because it's a fixed width file and it parses all the files in the directory and generates the output. Once the files are successfully parsed, it will be moved to archive directory. Now, i would like to make this script run continuously so that it looks for the incoming message files in the directory D:/MessageFiles/ and perform the processing as and when it sees the new files in the path. +Can someone please let me know how to do this?","There are a few ways to do this, it depends on how fast you need it to archive the files. +If the frequency is low, for example every hour, you can try to use windows task scheduler to run the python script. 
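+A minimal sketch of what each scheduled run would do; parse_message_file stands in for your existing fixed-width parser and the directory names are just the ones from your question:
+import shutil
+from pathlib import Path
+
+INCOMING = Path(""D:/MessageFiles"")
+ARCHIVE = Path(""D:/MessageFiles/archive"")
+
+def process_once():
+    # parse every incoming message file, then move it to the archive
+    for msg in INCOMING.glob(""*.txt""):
+        parse_message_file(msg)  # assumed: your existing parser
+        shutil.move(str(msg), str(ARCHIVE / msg.name))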
+If we are talking high frequency, or you really want a python script running 24/7, you could put it in a while loop and at the end of the loop do time.sleep() +If you go with this, I would recommend not blindly parsing the entire directory on every run, but instead finding a way to check whether new files have been added to the directory (such as the amount of files perhaps, or the total size). And then if there is a fluctuation you can archive.",1.2,True,1,5771 +2018-10-20 15:04:32.733,PyOpenGL camera system,"I'm confused on how the PyOpenGL camera works or how to implement it. Am I meant to rotate and move the whole world around the camera or is there a different way? +I couldn't find anything that can help me and I don't know how to translate C to python. +I just need a way to transform the camera that can help me understand how it works.","To say it bluntly: There is no such thing as a ""camera"" in OpenGL (neither there is in DirectX, or Vulkan, or in any of the legacy 3D graphics APIs). The effects of a camera is understood as some parameter that contributes to the ultimate placement of geometry inside the viewport volume. +The sooner you understand that all that current GPUs do is offering massively accelerated computational resources to set the values of pixels in a 2D grid, where the region of the pixels changed are mere points, lines or triangles on a 2D plane onto which they are projected from an arbitrarily dimensioned, abstract space, the better. +You're not even moving around the world around the camera. Setting up transformations is actually errecting the stage in which ""the world"" will appear in the first place. Any notion of a ""camera"" is an abstraction created by a higher level framework, like a third party 3D engine or your own creation. +So instead of thinking in terms of a camera, which constrains your thinking, you should think about it this way: +What kind of transformations do I have to chain up, to give a tuple of numbers that are called ""position"" an actual meaning, by letting this position turn up at a certain place on the visible screen? +You really ought to think that way, because that is what's actually happening.",1.2,True,1,5772 +2018-10-21 13:11:30.197,Anaconda Installation on Azure Web App Services,"I install my python modules via pip for my Azure Web Apps. But some of python libraries that I need are only available in conda. I have been trying to install anaconda on Azure Web Apps (windows/linux), no success so far. Any suggestions/examples on how to use conda env on azure web apps?","Currently, Azure App Service only supports the official Python to be installed as extensions. Instead of using the normal App Service, I would suggest you to use a Webapp for Container so that you can deploy your web app as a docker container. I suppose this is the only solution until Microsoft supports Anaconda on App Service.",0.3869120172231254,False,1,5773 +2018-10-21 15:08:58.620,Why tokenize/preprocess words for language analysis?,"I am currently working on a Python tweet analyser and part of this will be to count common words. I have seen a number of tutorials on how to do this, and most tokenize the strings of text before further analysis. 
+Surely it would be easier to avoid this stage of preprocessing and count the words directly from the string - so why do this?","Perhaps I'm being overly correct, but doesn't tokenization simply refer to splitting up the input stream (of characters, in this case) based on delimiters to receive whatever is regarded as a ""token""? +Your tokens can be arbitrary: you can perform analysis on the word level where your tokens are words and the delimiter is any space or punctuation character. It's just as likely that you analyse n-grams, where your tokens correspond to a group of words and delimiting is done e.g. by sliding a window. +So in short, in order to analyse words in a stream of text, you need to tokenize to receive ""raw"" words to operate on. +Tokenization however is often followed by stemming and lemmatization to reduce noise. This becomes quite clear when thinking about sentiment analysis: if you see the tokens happy, happily and happiness, do you want to treat them each separately, or wouldn't you rather combine them into three instances of happy to better convey a stronger notion of ""being happy""?",1.2,True,2,5774 +2018-10-21 15:08:58.620,Why tokenize/preprocess words for language analysis?,"I am currently working on a Python tweet analyser and part of this will be to count common words. I have seen a number of tutorials on how to do this, and most tokenize the strings of text before further analysis. +Surely it would be easier to avoid this stage of preprocessing and count the words directly from the string - so why do this?","Tokenization is an easy way of understanding the lexicon/vocabulary in text processing. +A basic first step in analyzing language or patterns in text is to remove symbols/punctuation and stop words. With tokenization you are able to split the large chunks of text to identify and remove text which might not add value; in many cases, stop words like 'the', 'a', 'and', etc. do not add much value in identifying words of interest. +Word frequencies are also very common in understanding the usage of words in text; Google's Ngram allows for language analysis and plots out the popularity/frequency of a word over the years. If you do not tokenize or split the strings, you will not have a basis to count the words that appear in a text. +Tokenization also allows you to run more advanced analysis, for example tagging the parts of speech or assigning sentiments to certain words. Also for machine learning, texts are mostly preprocessed to convert them to arrays which are used in the different layers of neural networks. Without tokenizing, the inputs will all be too distinct to run any analysis on.",0.0,False,2,5774 +2018-10-23 13:07:01.447,Shutdown (a script) one raspberry pi with another raspberry pi,"I am currently working on a school project. We need to be able to shutdown (and maybe restart) a python script that is running on another raspberry pi using a button. +I thought that the easiest thing might just be to shutdown the pi from the other pi. But I have no experience on this subject. +I don't need an exact guide (I appreciate all the help I can get) but does anyone know how one might do this?","Well, first we should ask if the PI you are trying to shut down is connected to a network (LAN or the internet, it doesn't matter). +If the answer is yes, you can simply connect to your PI through SSH, and call shutdown.sh. 
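+If you want the button-press script to do that in Python, a minimal sketch with the paramiko SSH library (the address and credentials are placeholders) could be:
+import paramiko
+
+client = paramiko.SSHClient()
+client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+client.connect(""192.168.1.50"", username=""pi"", password=""raspberry"")
+client.exec_command(""sudo shutdown -h now"")  # or a command that stops just your script
+client.close()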
+I don't know why you want another PI; you can do it through any device connected to the same network as your first PI (Wi-Fi or ethernet if LAN, or simply from anywhere if it's open to the internet). +You could make a smartphone app, or any kind of code that can connect to SSH (all of them).",0.0,False,1,5775 +2018-10-23 15:25:12.317,"python+docker: docker volume mounting with bad perms, data silently missing","I'm running into an issue with volume mounting, combined with the creation of directories in python. +Essentially inside my container, I'm writing to some path /opt/…, and I may have to make the path (which I'm using os.makedirs for). +If I mount a host file path like -v /opt:/opt, with bad ""permissions"" where the docker container does not seem to be able to write to it, the creation of the path inside the container DOES NOT FAIL. The makedirs(P) works, because inside the container, it can make the dir just fine, because it has sudo permissions. However, nothing gets written, silently, on the host at /opt/…. The data just isn't there, but no exception is ever raised. +If I mount a path with proper/open permissions, like -v /tmp:/opt, then the data shows up on the host machine at /tmp/… as expected. +So, how do I not silently fail if there are no write permissions on the host on the left side of the -v argument? +EDIT: my question is ""how do I detect this bad deployment scenario, crash, and fail fast inside the container, if the person who deploys the container does it wrong""? Just silently not writing data isn't acceptable.","The bad mount is root on the host, right, and the good mount is the user in the Docker group on the host? Can you check the user/group of the mounted /opt? It should be different than that of /tmp.",0.0,False,1,5776 +2018-10-24 06:17:42.420,Building comprehensive scraping program/database for real estate websites,"I have a project I’m exploring where I want to scrape the real estate broker websites in my country (30-40 websites of listings) and keep the information about each property in a database. +I have experimented a bit with scraping in python using both BeautifulSoup and Scrapy. +What I would ideally like to achieve is a daily updated database that will find new properties and remove properties when they are sold. +Any pointers as to how to achieve this? +I am relatively new to programming and open to learning different languages and resources if python isn’t suitable. +Sorry if this forum isn’t intended for this kind of vague question :-)",Build a scraper and schedule a daily run. You can use scrapy and the daily run will update the database daily.,0.0,False,1,5777 +2018-10-24 09:41:09.793,Using convolution layer trained weights for different image size,"I want to use the first three convolution layers of vgg-16 to generate feature maps. +But i want to use it with a variable image size, i.e. not the imagenet size of 224x224 or 256x256, but such as 480x640 or any other random image dimension. +As convolution layers are independent of the image spatial size, how can I use the weights for varying image sizes? +So how do we use the pre-trained weights of vgg-16 up to the first three convolution layers? +Kindly let me know if that is possible.","As convolution layer are independent of image size +Actually it's more complicated than that. The kernel itself is independent of the image size because we apply it on each pixel. And indeed, the training of these kernels can be reused. 
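+For instance, the pretrained convolutional part can be instantiated at your 480x640 size. A sketch with the Keras applications API (you could then keep only the first few layers; images is assumed to be your batch):
+from keras.applications.vgg16 import VGG16
+
+# Convolution-only VGG16: pretrained weights, no dense head,
+# so a spatial size such as 480x640 is accepted.
+base = VGG16(weights=""imagenet"", include_top=False, input_shape=(480, 640, 3))
+features = base.predict(images)  # images: array of shape (n, 480, 640, 3)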
+But this means that the output size is dependent on the image size, because this is the number of nodes that are fed out of the layer for each input pixel. So the dense layer is not adapted to your image, even if the feature extractors are independent. +So you need to preprocess your image to fit into the size of the first layer or you retrain your dense layers from scratch. +When people talk about ""transfer-learning"" is what people have done in segmentation for decades. You reuse the best feature extractors and then you train a dedicated model with these features.",1.2,True,1,5778 +2018-10-24 18:05:05.703,Display complex numbers in UI when using wxPython,"I know complex math and the necessary operations (either ""native"" Python, or through NumPy). My question has to do with how to display complex numbers in a UI using wxPython. All the questions I found dealing with Python and complex numbers have to do with manipulating complex data. +My original thought was to subclass wx.TextCtrl and override the set and get methods to apply and strip some formatting as needed, and concatenating an i (or j) to the imaginary part. +Am I going down the wrong path? I feel like displaying complex numbers is something that should already be done somewhere. +What would be the recommended pattern for this even when using another UI toolkit, as the problem is similar. Also read my comment below on why I would like to do this.","As Brian considered my first comment good advice, and he got no more answers, I am posting it as an answer. Please refer also to the other question comments discussing the issue. + +In any UI you display strings and you read strings from the user. Why + would you mix the type to string or string to type translation with + widgets functionality? Get them, convert and use, or ""print"" them to + string and show the string in the ui.",0.0,False,1,5779 +2018-10-24 21:37:53.237,Change file metadata using Apache Beam on a cloud database?,"Can you change the file metadata on a cloud database using Apache Beam? From what I understand, Beam is used to set up dataflow pipelines for Google Dataflow. But is it possible to use Beam to change the metadata if you have the necessary changes in a CSV file without setting up and running an entire new pipeline? If it is possible, how do you do it?","You could code Cloud Dataflow to handle this but I would not. A simple GCE instance would be easier to develop and run the job. An even better choice might be UDF (see below). +There are some guidelines for when Cloud Dataflow is appropriate: + +Your data is not tabular and you can not use SQL to do the analysis. +Large portions of the job are parallel -- in other words, you can process different subsets of the data on different machines. +Your logic involves custom functions, iterations, etc... +The distribution of the work varies across your data subsets. + +Since your task involves modifying a database, I am assuming a SQL database, it would be much easier and faster to write a UDF to process and modify the database.",0.0,False,1,5780 +2018-10-25 02:44:34.287,How to use Tensorflow Keras API,"Well I start learning Tensorflow but I notice there's so much confusion about how to use this thing.. +First, some tutorials present models using low level API tf.varibles, scopes...etc, but other tutorials use Keras instead and for example to use tensor board to invoke callbacks. 
+Second, what's the purpose of having a ton of duplicate APIs? Really, what's the purpose behind using a high level API like Keras when you have the low level one to build models like Lego blocks? +Finally, what's the true purpose of using eager execution?","You can use these APIs all together. E.g. if you have a regular dense network, but with an special layer you can use higher level API for dense layers (tf.layers and tf.keras) and low level API for your special layer. Furthermore, complex graphs are easier to define in low level APIs, e.g. if you want to share variables, etc. +Eager execution helps you for fast debugging, it evaluates tensors directly without a need of invoking a session.",0.0,False,1,5781 +2018-10-25 11:08:14.153,Keras flow_from_dataframe wrong data ordering,"I am using keras's data generator with flow_from_dataframe. for training it works just fine, but when using model.predict_generator on the test set, I discovered that the ordering of the generated results is different than the ordering of the ""id"" column in my dataframe. +shuffle=False does make the ordering of the generator consistent, but it is a different ordering than the dataframe. I also tried different batch sizes and the corresponding correct steps for the predict_generator function. (for example: batch_Size=1, steps=len(data)) +how can I make sure the labels predicted for my test set are ordered in the same way of my dataframe ""id"" column?","While I haven't found a way to decide the order in which the generator produces data, the order can be obtained with the generator.filenames property.",1.2,True,1,5782 +2018-10-25 15:16:07.853,Write python functions to operate over arbitrary axes,"I've been struggling with this problem in various guises for a long time, and never managed to find a good solution. +Basically if I want to write a function that performs an operation over a given, but arbitrary axis of an arbitrary rank array, in the style of (for example) np.mean(A,axis=some_axis), I have no idea in general how to do this. +The issue always seems to come down to the inflexibility of the slicing syntax; if I want to access the ith slice on the 3rd index, I can use A[:,:,i], but I can't generalise this to the nth index.","numpy functions use several approaches to do this: + +transpose axes to move the target axis to a known position, usually first or last; and if needed transpose the result +reshape (along with transpose) to reduce the problem to simpler dimensions. If your focus is on the n'th dimension, it might not matter whether the (:n) dimensions are flattened or not. They are just 'going along for the ride'. +construct an indexing tuple. idx = (slice(None), slice(None), j); A[idx] is the equivalent of A[:,:,j]. Start with a list or array of the right size, fill it with slices, fiddle with it, and then convert it to a tuple (tuples are immutable). +Construct indices with indexing_tricks tools like np.r_, np.s_ etc. + +Study code that provides for axes. Compiled ufuncs won't help, but functions like tensordot, take_along_axis, apply_along_axis, np.cross are written in Python, and use one or more of these tricks.",1.2,True,1,5783 +2018-10-25 15:26:46.793,Highly variable execution times in Cython functions,"I have a performance measurement issue while executing a migration to Cython from C-compiled functions (through scipy.weave) called from a Python engine. 
+The new cython functions, profiled end-to-end with cProfile (if not necessary I won't dig down into cython profiling), record highly variable cumulative measurement times. +E.g. the cumulative time of a cython function executed 9 times per 5 repetitions (after a warm-up of 5 executions, not taken into consideration by the profiling function) is taking: + +in a first round 215,627339 seconds +in a second round 235,336131 seconds + +Each execution calls the functions many times with different, but fixed parameters. +Maybe this variability could depend on the CPU load of the test machine (a cloud-hosted dedicated one), but I wonder if such a variability (almost 10%) could somehow depend on cython or a lack of optimization (I already use hints on division, bounds check, wrap-around, ...). +Any idea on how to take reliable metrics?","I'm not a performance expert, but from my understanding the thing you should be measuring would be the average time it takes per execution, not the cumulative time. Other than that, is your function doing anything like reading from disk and/or making network requests?",0.0,False,2,5784 +2018-10-25 15:26:46.793,Highly variable execution times in Cython functions,"I have a performance measurement issue while executing a migration to Cython from C-compiled functions (through scipy.weave) called from a Python engine. +The new cython functions, profiled end-to-end with cProfile (if not necessary I won't dig down into cython profiling), record highly variable cumulative measurement times. +E.g. the cumulative time of a cython function executed 9 times per 5 repetitions (after a warm-up of 5 executions, not taken into consideration by the profiling function) is taking: + +in a first round 215,627339 seconds +in a second round 235,336131 seconds + +Each execution calls the functions many times with different, but fixed parameters. +Maybe this variability could depend on the CPU load of the test machine (a cloud-hosted dedicated one), but I wonder if such a variability (almost 10%) could somehow depend on cython or a lack of optimization (I already use hints on division, bounds check, wrap-around, ...). +Any idea on how to take reliable metrics?","First of all, you need to ensure that your measurement device is capable of measuring what you need: specifically, only the system resources you consume. UNIX's utime is one such command, although even that one still includes swap time. Check the documentation of your profiler: it should have capabilities to measure only the CPU time consumed by the function. If so, then your figures are due to something else. +Once you've controlled the external variations, you need to examine the internal. You've said nothing about the complexion of your function. Some (many?) functions have available short-cuts for data-driven trivialities, such as multiplication by 0 or 1. Some are dependent on an overt or covert iteration that varies with the data. You need to analyze the input data with respect to the algorithm. +One tool you can use is a line-oriented profiler to detail where the variations originate; seeing which lines take the extra time should help determine where the ""noise"" comes from.",0.2012947653214861,False,2,5784 +2018-10-25 20:43:10.730,Kernel size change in convolutional neural networks,"I have been working on creating a convolutional neural network from scratch, and am a little confused on how to treat kernel size for hidden convolutional layers. For example, say I have an MNIST image as input (28 x 28) and put it through the following layers. 
+Convolutional layer with kernel_size = (5,5) with 32 output channels + +new dimension of throughput = (32, 28, 28) + +Max Pooling layer with pool_size (2,2) and step (2,2) + +new dimension of throughput = (32, 14, 14) + +If I now want to create a second convolutional layer with kernel size = (5x5) and 64 output channels, how do I proceed? Does this mean that I only need two new filters (2 x 32 existing channels) or does the kernel size change to be (32 x 5 x 5) since there are already 32 input channels? +Since the initial input was a 2D image, I do not know how to conduct convolution for the hidden layer since the input is now 3 dimensional (32 x 14 x 14).","You need 64 kernels, each with size (32, 5, 5). +The depth (number of channels) of a kernel (32 in this case, or 3 for an RGB image, 1 for grayscale, etc.) should always match the input depth, although the per-channel values can all be the same. +E.g. if you have a 3x3 kernel like this: [-1 0 1; -2 0 2; -1 0 1], and you now want to convolve it with an input of depth N (i.e. N channels), you just copy this 3x3 kernel N times along the 3rd dimension. The following math is just like the 1-channel case: after multiplying the kernel values with the input values that your kernel window is currently on, you sum over all N channels and get the value of just one entry or pixel. So what you get as output in the end is a matrix with 1 channel. How much depth do you want the input of the next layer to have? That is the number of kernels you should apply. Hence in your case the weights have size (64 x 32 x 5 x 5), which is actually 64 kernels with 32 channels each and the same 5x5 values in all channels.",0.0,False,1,5785 +2018-10-25 21:40:47.257,Python: I can not get pynput to install,"I'm trying to run a program with pynput. I tried installing it through the terminal on Mac with pip. However, it still says it's unresolved in my IDE PyCharm. Does anyone have any idea of how to install this?","I have three theories, but first: make sure it is installed by running python -c ""import pynput"" + +JetBrains' IDEs typically do not scan for package updates, so try restarting the IDE. +JetBrains' IDE might configure a python environment for you; this might cause you to have to manually import it in your run configuration. +You have two python versions installed and you installed the package on the opposite version from the one you run the script on. + +I think either 1 or 3 is the most likely.",0.0,False,1,5786 +2018-10-26 07:04:15.230,How to get the dimension of tensors at runtime?,"I can get the dimensions of tensors at graph construction time via manually printing the shapes of tensors (tf.shape()), but how do I get the shape of these tensors at session runtime? +The reason that I want the shape of tensors at runtime is that at graph construction time the shape of some tensors comes out as (?,8) and I cannot deduce the first dimension then.","You have to make the tensors an output of the graph. For example, if showme_tensor is the tensor you want to print, just run the graph like this: +_showme_tensor = sess.run(showme_tensor) +and then you can just print the output as you print a list. 
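+To connect this to the (?, 8) shape question: wrap the tensor in tf.shape and run that; the unknown dimension resolves once real data is fed. A small self-contained sketch:
+import tensorflow as tf
+
+x = tf.placeholder(tf.float32, shape=(None, 8))  # static shape is (?, 8)
+shape_op = tf.shape(x)  # dynamic shape, known at run time
+with tf.Session() as sess:
+    print(sess.run(shape_op, feed_dict={x: [[0.0] * 8] * 3}))  # -> [3 8]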
If you have different tensors to print, you can just add them like that : +_showme_tensor_1, _showme_tensor_2 = sess.run([showme_tensor_1, showme_tensor_2])",0.0,False,1,5787 +2018-10-27 10:53:32.190,python - pandas dataframe to powerpoint chart backend,"I have a pandas dataframe result which stores a result obtained from a sql query. I want to paste this result onto the chart backend of a specified chart in the selected presentation. Any idea how to do this? +P.S. The presentation is loaded using the module python-pptx","you will need to read a bit about python-pptx. +You need chart's index and slide index of the chart. Once you know them +get your chart object like this-> +chart = presentation.slides[slide_index].shapes[shape_index].chart +replacing data +chart.replace_data(new_chart_data) +reset_chart_data_labels(chart) +then when you save your presentation it will have updated the data. +usually, I uniquely name all my slides and charts in a template and then I have a function that will get me the chart's index and slide's index. (basically, I iterate through all slides, all shapes, and find a match for my named chart). +Here is a screenshot where I name a chart->[![screenshot][1]][1]. Naming slides is a bit more tricky and I will not delve into that but all you need is slide_index just count the slides 0 based and then you have the slide's index. +[1]: https://i.stack.imgur.com/aFQwb.png",0.0,False,1,5788 +2018-10-31 22:26:56.993,How to make Flask app up and running after server restart?,"What is the recommended way to run Flask app (e.g. via Gunicorn?) and how to make it up and running automatically after linux server (redhat) restart? +Thanks",have you looked at supervisord? it works reasonably well and handles restarting processes automatically if they fail as well as looking after error logs nicely,0.0,False,1,5789 +2018-11-01 03:08:27.057,cv2 show video stream & add overlay after another function finishes,"I am current working on a real time face detection project. +What I have done is that I capture the frame using cv2, do detection and then show result using cv2.imshow(), which result in a low fps. +I want a high fps video showing on the screen without lag and a low fps detection bounding box overlay. +Is there a solution to show the real time video stream (with the last detection result bounding box), and once a new detection is finished, show the new bounding box and the background was not delayed by the detection function. +Any help is appreciated! +Thanks!","A common approach would be to create a flag that allows the detection algorithim to only run once every couple of frames and save the predicted reigons of interest to a list, whilst creating bounding boxes for every frame. +So for example you have a face detection algorithim, process every 15th frame to detect faces, but in every frame create a bounding box from the predictions. Even though the predictions get updated every 15 frames. +Another approach could be to add an object tracking layer. Run your heavy algorithim to find the ROIs and then use the object tracking library to hold on to them till the next time it runs the detection algorithim. +Hope this made sense.",1.2,True,1,5790 +2018-11-01 07:22:44.353,What Is the Correct Mimetype (in and out) for a .Py File for Google Drive?,"I have a script that uploads files to Google Drive. I want to upload python files. 
I can do it manually and have it keep the file as .py correctly (and it's previewable), but no matter what mimetypes I try, I can't get my program to upload it correctly. It can upload the file as a .txt or as something GDrive can't recognize, but not as a .py file. I can't find an explicit mimetype for it (I found a reference for text/x-script.python but it doesn't work as an out mimetype). +Does anyone know how to correctly upload a .py file to Google Drive using REST?",Also this is a valid Python mimetype: text/x-script.python,-0.2012947653214861,False,1,5791 +2018-11-01 09:31:15.857,Running a python file in windows after removing old python files,So I am running python 3.6.5 on a school computer the most things are heavily restricted to do on a school computer and i can only use python on drive D. I cannot use batch either. I had python 2.7 on it last year until i deleted all the files and installed python 3.6.5 after that i couldn't double click on a .py file to open it as it said continue using E:\Python27\python(2.7).exe I had the old python of a USB which is why it asks this but know i would like to change that path the the new python file so how would i do that in windows,Just open your Python IDE and open the file manually.,0.0,False,1,5792 +2018-11-01 22:25:51.750,GROUPBY with showing all the columns,"I want to do a groupby of my MODELS by CITYS with keeping all the columns where i can print the percentage of each MODELS IN THIS CITY. +I put my dataframe in PHOTO below. +And i have written this code but i don""t know how to do ?? +for name,group in d_copy.groupby(['CITYS'])['MODELS']:","Did you try this : d_copy.groupby(['CITYS','MODELS']).mean() to have the average percentage of a model by city. +Then if you want to catch the percentages you have to convert it in DF and select the column : pd.DataFrame(d_copy.groupby(['CITYS','MODELS']).mean())['PERCENTAGE']",0.0,False,1,5793 +2018-11-03 05:34:23.617,Google Data Studio Connector and App Scripts,"I am working on a project for a client in which I need to load a lot of data into data studio. I am having trouble getting the deployment to work with my REST API. +The API has been tested with code locally but I need to know how to make it compatible with the code base in App Scripts. Has anyone else had experience with working around this? The endpoint is a Python Flask application. +Also, is there a limit on the amount of data that you can dump in a single response to the Data Studio? As a solution to my needs(needing to be able to load data for 300+ accounts) I have created a program that caches the data needed from each account and returns the whole payload at once. There are a lot of entries, so I was wondering if they had a limit to what can be uploaded at once. +Thank you in advance","I found the issue, it was a simple case of forgetting to add the url to the whitelist.",0.3869120172231254,False,1,5794 +2018-11-03 15:56:12.343,Multi-Line Combobox in Tkinter,"Is it possible to have a multi-line text entry field with drop down options? +I currently have a GUI with a multi-line Text widget where the user writes some comments, but I would like to have some pre-set options for these comments that the user can hit a drop-down button to select from. +As far as I can tell, the Combobox widget does not allow changing the height of the text-entry field, so it is effectively limited to one line (expanding the width arbitrarily is not an option). 
Therefore, what I think I need to do is sub-class the Text widget and somehow add functionality for a drop down to show these (potentially truncated) pre-set options. +I foresee a number of challenges with this route, and wanted to make sure I'm not missing anything obvious with the existing built-in widgets that could do what I need.","I don't think you are missing anything. Note that ttk.Combobox is a composite widget. It subclasses ttk.Entry and has ttk.Listbox attached. +To make multiline equivalent, subclass Text. as you suggested. Perhaps call it ComboText. Attach either a frame with multiple read-only Texts, or a Text with multiple entries, each with a separate tag. Pick a method to open the combotext and methods to close it, with or without copying a selection into the main text. Write up an initial doc describing how to operate the thing.",0.2012947653214861,False,1,5795 +2018-11-04 15:50:14.623,"Apache - if file does not exist, run script to create it, then serve it","How can I get this to happen in Apache (with python, on Debian if it matters)? + +User submits a form +Based on the form entries I calculate which html file to serve them (say 0101.html) +If 0101.html exists, redirect them directly to 0101.html +Otherwise, run a script to create 0101.html, then redirect them to it. + +Thanks! +Edit: I see there was a vote to close as too broad (though no comment or suggestion). I am just looking for a minimum working example of the Apache configuration files I would need. If you want the concrete way I think it will be done, I think apache just needs to check if 0101.html exists, if so serve it, otherwise run cgi/myprogram.py with input argument 0101.html. Hope this helps. If not, please suggest how I can make it more specific. Thank you.","Apache shouldn't care. Just serve a program that looks for the file. If it finds it it will read it (or whatever and) return results and if it doesn't find it, it will create and return the result. All can be done with a simple python file.",1.2,True,1,5796 +2018-11-04 18:53:52.133,AWS CLI upload failed: unknown encoding: idna,"I am trying to push some files up to s3 with the AWS CLI and I am running into an error: +upload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna +I believe this is a Python specific problem but I am not sure how to enable this type of encoding for my python interpreter. I just freshly installed Python 3.6 and have verified that it being used by powershell and cmd. +$> python --version + Python 3.6.7 +If this isn't a Python specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. Thanks.","Even I was facing same issue. I was running it on Windows server 2008 R2. I was trying to upload around 500 files to s3 using below command. + +aws s3 cp sourcedir s3bucket --recursive --acl + bucket-owner-full-control --profile profilename + +It works well and uploads almost all files, but for initial 2 or 3 files, it used to fail with error: An HTTP Client raised and unhandled exception: unknown encoding: idna +This error was not consistent. The file for which upload failed, it might succeed if I try to run it again. It was quite weird. +Tried on trial and error basis and it started working well. +Solution: + +Uninstalled Python 3 and AWS CLI. +Installed Python 2.7.15 +Added python installed path in environment variable PATH. 
Also added pythoninstalledpath\scripts to the PATH variable. +The AWS CLI doesn't work well with the MSI installer on Windows Server 2008, so I used pip instead. +Command: +pip install awscli +Note: for pip to work, do not forget to add pythoninstalledpath\scripts to the PATH variable. +You should have the following version: +Command: +aws --version +Output: aws-cli/1.16.72 Python/2.7.15 Windows/2008ServerR2 botocore/1.12.62 +Voila! The error is gone!",-0.1618299653758019,False,2,5797 +2018-11-04 18:53:52.133,AWS CLI upload failed: unknown encoding: idna,"I am trying to push some files up to s3 with the AWS CLI and I am running into an error: +upload failed: ... An HTTP Client raised and unhandled exception: unknown encoding: idna +I believe this is a Python-specific problem but I am not sure how to enable this type of encoding for my Python interpreter. I just freshly installed Python 3.6 and have verified that it is being used by PowerShell and cmd. +$> python --version + Python 3.6.7 +If this isn't a Python-specific problem, it might be helpful to know that I also just freshly installed the AWS CLI and have it properly configured. Let me know if there is anything else I am missing to help solve this problem. Thanks.","I had the same problem on Windows. +After investigating, I realized that the problem is in the aws-cli installed using the MSI installer (x64). After removing ""AWS Command Line Interface"" from the list of installed programs and installing aws-cli using pip, the problem was solved. +I also tried the x32 MSI installer and the problem did not occur.",1.2,True,2,5797 +2018-11-05 10:20:35.477,Calling a Python function from HTML,"I'm writing a web application where I'm trying to display the connected USB devices. I found a Python function that does exactly what I want, but I can't really figure out how to call the function from my HTML code, preferably on the click of a button.","Simple answer: you can't. The code would have to be run client-side, and no browser would execute potentially malicious code automatically (and not every system has a Python interpreter installed). +The only thing you can execute client-side (without the user taking action, e.g. downloading a program or browser add-on) is JavaScript.",1.2,True,1,5798 +2018-11-05 18:11:03.353,How to create Graphql server for microservices?,"We have several microservices in Golang and Python; in Golang we write the finance operations and in Python the online store logic. We want to create one API for our front-end and we don't know how to do it. +I have read about API gateways: would it be right if Golang creates its own GraphQL server, Python creates another one, and they both communicate with a third GraphQL server which generates the API for our front-end?","I do not know many details about your services, but a great pattern I have successfully used on different projects is, as you mentioned, a GraphQL gateway. +You will create one service (I prefer to create it in Node.js) through which all requests from the frontend will come. From the GraphQL gateway you will then request your microservices. This will basically be your only entry point into the backend system. Requests will be authenticated, and you are able to unify access to your data and also perform some performance optimizations, like implementing a data loader's caching and batching to mitigate the N+1 problem. In addition you will reduce the complexity of having multiple APIs and leverage all the GraphQL benefits.
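(Aside on the ""Calling a Python function from HTML"" row above: the usual server-side alternative is to put the Python function behind a small web endpoint that the button calls. A minimal Flask sketch; list_usb_devices stands in for the asker's existing function.)
from flask import Flask, jsonify

app = Flask(__name__)

def list_usb_devices():
    # Placeholder for the asker's USB-listing function.
    return ['device-a', 'device-b']

@app.route('/usb')
def usb():
    # An HTML button can trigger fetch('/usb'); the Python runs server-side.
    return jsonify(list_usb_devices())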
+On my last project we had 7 different frontends, each using the same GraphQL gateway, and I was really happy with our approach. There are definitely some downsides, as you need to keep all your frontends and the GraphQL gateway in sync, and therefore you need to be more aware of your breaking changes, but it is solvable with, for example, the deprecated directive, and by performing blue/green deployment with a Kubernetes cluster. +The other option is to create the so-called backend for frontend in GraphQL. Right now I do not have enough information to say which solution would be best for you. You need to decide based on your frontend needs and business domain, but usually I prefer a GraphQL gateway, as GraphQL has great flexibility and the need to tailor your API to the frontend is covered by GraphQL capabilities. Hope it helps, David",1.2,True,1,5799 +2018-11-05 18:14:16.803,What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?,"I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands. +Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is the no. of channels of the image. +I want to input small windows of this image (having dimension like W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels). +It seems to me I am missing one of the spatial dimensions; how do I convert my image array into a 5-dimensional array without losing any information? +I am using Python and Keras for the above.","What you want is a 2D CNN, not a 3D one. A 2D CNN already supports multiple channels, so you should have no problem using it with a hyperspectral image.",0.2012947653214861,False,2,5800 +2018-11-05 18:14:16.803,What should be the 5th dimension for the input to 3D-CNN while working with hyper-spectral images?,"I have a hyperspectral image having dimension S * S * L where S*S is the spatial size and L denotes the number of spectral bands. +Now the shape of my X (image array) is: (1, 145, 145, 200) where 1 is the number of examples, 145 is the length and width of the image and 200 is the no. of channels of the image. +I want to input small windows of this image (having dimension like W * W * L; W < S) into a 3D CNN, but for that, I need to have 5 dimensions in the following format: (batch, length, width, depth, channels). +It seems to me I am missing one of the spatial dimensions; how do I convert my image array into a 5-dimensional array without losing any information? +I am using Python and Keras for the above.","If you want to convolve along the dimension of your channels, you should add a singleton dimension in the position of channel. If you don't want to convolve along the dimension of your channels, you should use a 2D CNN.",1.2,True,2,5800 +2018-11-06 05:41:57.087,Family tree in Python,"I need to model a four-generational family tree starting with a couple. After that, if I input the name of a person and a relation like 'brother' or 'sister' or 'parent', my code should output the person's brothers or sisters or parents. I have a fair bit of knowledge of Python and am self-taught in DSA. I think I should model the data as a dictionary and code for a tree DS with two root nodes (i.e., the first couple). But I am not sure how to start. I just need to know how to start modelling the family tree and the direction of how to proceed to code.
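(Aside on the 3D-CNN row above: a minimal numpy sketch of the second answer's suggestion, adding a singleton channels axis so the spectral bands become the "depth" dimension.)
import numpy as np

X = np.zeros((1, 145, 145, 200))   # (batch, height, width, bands)

# (batch, length, width, depth, channels) with depth = spectral bands.
X5 = X[..., np.newaxis]
print(X5.shape)                    # (1, 145, 145, 200, 1)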
Thank you in advance!","There's plenty of ways to skin a cat, but I'd suggest creating: +A Person class which holds relevant data about the individual (gender) and direct relationship data (parents, spouse, children). +A dictionary mapping names to Person elements. +That should allow you to answer all of the necessary questions, and it's flexible enough to handle all kinds of family trees (including non-tree-shaped ones).",0.9999092042625952,False,1,5801 +2018-11-06 07:03:33.130,Tensorflow MixtureSameFamily and gaussian mixture model,"I am really new to Tensorflow as well as Gaussian mixture models. +I have recently used the tensorflow.contrib.distribution.MixtureSameFamily class for predicting a probability density function which is derived from a Gaussian mixture of 4 components. +When I plotted the predicted density function using the ""prob()"" function as the Tensorflow tutorial explains, I found the plotted pdf with only one mode. I expected to see 4 modes as the mixture components are 4. +I would like to ask whether Tensorflow uses any global mode predicting algorithm in their MixtureSameFamily class. If not, I would also like to know how the MixtureSameFamily class forms the pdf with statistical values. +Thank you very much.","I found an answer to the above question thanks to my colleague. +The 4 components of the Gaussian mixture had such similar means that the mixture looks like it has only one mode. +If I put four explicitly different values as means into the MixtureSameFamily class, I get a plot of a Gaussian mixture with 4 different modes. +Thank you very much for reading this.",0.0,False,1,5802 +2018-11-07 04:43:09.720,How to run pylint plugin in Intellij IDEA?,"I have installed the pylint plugin and restarted IntelliJ IDEA. It is NOT an external tool (so please avoid providing answers on running it as an external tool, as I know how to). +However I have no 'pylint' in the tool menu or the code menu. +Is it invoked by running 'Analyze'? Or is there a way to run the pylint plugin on py files?","This is for the latest IntelliJ IDEA version 2018.3.5 (Community Edition): +Type ""Command ,"" or click ""IntelliJ IDEA -> Preferences..."" +From the list on the left of the popped up window select ""Plugins"" +Make sure that on the right top the first tab ""Marketplace"" is picked if it's not +Search for ""Pylint"" and when the item is found, click the green button ""Install"" associated with the found item +The plugin should then be installed properly. +One can then turn on/off real-time Pylint scan via the same window by navigating in the list on the left: ""Editor -> Inspections"", then in the list on the right unfolding ""Pylint"" and finally checking/unchecking the corresponding checkbox on the right of the unfolded item. +One can also in the same window go to the very last top-level item within the list on the left named ""Other Settings"" and unfold it. +Within it there's an item called ""Pylint"", click on it. +On the top right there should be a button ""Test"", click on it. +If in a few seconds to the left of the ""Test"" text there appears a green checkmark, then Pylint is installed correctly. +Finally, to access the actual Pylint window, click ""View""->""Tool Windows""->""Pylint""! +Enjoy!",0.9999092042625952,False,1,5803 +2018-11-08 02:59:54.810,nltk bags of words showing emotions,"I am working on NLP using Python and NLTK.
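(Aside on the family tree row above: a minimal sketch of the suggested Person class plus a name dictionary; the fields and helper names are illustrative.)
class Person:
    def __init__(self, name, gender, parents=None):
        self.name = name
        self.gender = gender
        self.parents = parents or []   # list of Person
        self.children = []             # list of Person

people = {}  # name -> Person

def add_person(name, gender, parent_names=()):
    parents = [people[p] for p in parent_names]
    person = Person(name, gender, parents)
    for p in parents:
        p.children.append(person)
    people[name] = person
    return person

def siblings(name):
    # Anyone sharing a parent, minus the person themselves.
    me = people[name]
    return {c.name for p in me.parents for c in p.children} - {me.name}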
+I was wondering whether there is any dataset of bags of words with keywords relating to emotions such as happy, joy, anger, sadness, etc. +From what I dug up in the NLTK corpus, I see there are some sentiment analysis corpora which contain positive and negative reviews, which isn't exactly related to keywords showing emotions. +Is there any way I could build my own dictionary containing words which show emotion for this purpose? If so, how do I do it, and is there any collection of such words? +Any help would be greatly appreciated","I'm not aware of any dataset that associates sentiments to keywords, but you can easily build one starting from a generic sentiment analysis dataset. +1) Clean the datasets of the stopwords and all the terms that you don't want to associate to a sentiment. +2) Compute the count of each word in the two sentiment classes and normalize it. In this way you will associate with each word a probability of belonging to a class. Let's suppose that you have the word ""love"" appearing 300 times in the positive sentences and the same word appearing 150 times in the negative sentences. Normalizing, you find that the word ""love"" belongs with a probability of 66% (300/(150+300)) to the positive class and 33% to the negative one. +3) In order to make the dictionary more robust to borderline terms you can set a threshold and consider neutral all the words whose max probability is lower than the threshold. +This is an easy approach to building the dictionary that you are looking for. You could use a more sophisticated approach such as Term Frequency-Inverse Document Frequency.",0.0,False,1,5804 +2018-11-09 01:48:39.963,Operating the Celery Worker in the ECS Fargate,"I am working on a project using AWS ECS. I want to use Celery as a distributed task queue. A Celery worker can be brought up on an EC2 type, but because of the large amount of time that the instance is in the idle state, I think it would be more cost-effective to have AWS Fargate run the job and quit immediately. +Do you have suggestions on how to use the Celery worker efficiently in the AWS cloud?","Fargate launch type is going to take longer to spin up than EC2 launch type, because AWS is doing all the ""host things"" for you when you start the task, including the notoriously slow attaching of an ENI, and likely downloading the image from a Docker repo. Right now there's no contest, EC2 launch type is faster every time. +So it really depends on the type of work you want the workers to do. You can expect a new Fargate task to take a few minutes to enter a RUNNING state for the aforementioned reasons. EC2 launch, on the other hand, because the ENI is already in place on your host and the image is already downloaded (at best) or mostly downloaded (likely worst), will move from PENDING to RUNNING very quickly. +Use EC2 launch type for steady workloads, use Fargate launch type for burst capacity +This is the current prevailing wisdom, often discussed as a cost factor because Fargate can't take advantage of the typical EC2 cost savings mechanisms like reserved instances and spot pricing. It's expensive to run Fargate all the time, compared to EC2. +To be clear, it's perfectly fine to run 100% in Fargate (we do), but you have to be willing to accept the downsides of doing that - slower scaling and cost. +Note you can run both launch types in the same cluster. Clusters are logical anyway, just a way to organize your resources.
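(Aside on the emotion-dictionary row above: a minimal sketch of step 2 of that answer, counting word frequencies per class and normalizing; the corpora and thresholds are toy values.)
from collections import Counter

pos_words = 'love joy happy love great'.split()
neg_words = 'sad angry love terrible sad'.split()

pos, neg = Counter(pos_words), Counter(neg_words)
lexicon = {}
for w in set(pos) | set(neg):
    p = pos[w] / (pos[w] + neg[w])   # P(positive | word)
    lexicon[w] = 'positive' if p > 0.66 else 'negative' if p < 0.34 else 'neutral'
print(lexicon)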
+Example cluster +This example shows a static EC2 launch type service running 4 celery tasks. The number of tasks, specs, instance size and so on doesn't really matter; set it up however you like. The important thing is that the EC2 launch type service doesn't need to scale; the Fargate launch type service is able to scale from nothing running (during periods where there's little or no work to do) to as many workers as you can handle, based on your scaling rules. +EC2 launch type Celery service +Running 1 EC2 launch type t3.medium (2vcpu/4GB). +Min tasks: 2, Desired: 4, Max tasks: 4 +Running 4 celery tasks at 512/1024 in this EC2 launch type. +No scaling policies +Fargate launch type Celery service +Min tasks: 0, Desired: (x), Max tasks: 32 +Running (x) celery tasks (same task def as EC2 launch type) at 512/1024 +Add scaling policies to this service",1.2,True,1,5805 +2018-11-09 07:20:23.930,how do I insert some rows that I select from remote MySQL database to my local MySQL database,"My remote MySQL database and local MySQL database have the same table structure, and both the remote and local MySQL databases use the utf-8 charset.","You'd better merge the values into the SQL template string and print it, to make sure the SQL is correct.",0.0,False,1,5806 +2018-11-09 16:42:21.617,Run external Python script that could only read/write only a subset of main app variables,"I have a Python application that simulates the behaviour of a system, let's say a car. +The application defines a quite large set of variables, some corresponding to real world parameters (the remaining fuel volume, the car speed, etc.) and others related to the simulator's internal mechanics which are of no interest to the user. +Everything works fine, but currently the user can have no interaction with the simulation whatsoever during its execution: she just sets simulation parameters, launches the simulation, and waits for its termination. +I'd like the user (i.e. not the creator of the application) to be able to write Python scripts, outside of the app, that could read/write the variables associated with the real world parameters (and only these variables). +For instance, at t=23s (this condition I know how to check for), I'd like to execute user script gasLeak.py, that reads the remaining fuel value and sets it to half its current value. +To sum up, how is it possible, from a Python main app, to execute user-written Python scripts that can access and modify only a pre-defined subset of the main script's variables? In a perfect world, I'd also like modifications applied to user scripts while the app is running to be taken into account without having to restart said app (something along the lines of reloading a module).",Make the user-written scripts read command-line arguments and print to stdout. Then you can call them with the subprocess module with the variables they need to know about as arguments and read their responses with subprocess.check_output.,0.0,False,1,5807 +2018-11-09 23:03:45.930,pytest-xdist generate random & unique ports for each test,"I'm using the pytest-xdist plugin to run some tests, using @pytest.mark.parametrize to run the same test with different parameters. +As part of these tests, I need to open/close web servers and the ports are generated at collection time. +xdist does the test collection on the slave and they are not synchronised, so how can I guarantee uniqueness for the port generation?
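(Aside on the simulator row above: a minimal sketch of the subprocess answer. gasLeak.py is the asker's hypothetical script; here it is assumed to read the fuel value from argv and print the new value.)
import subprocess, sys

fuel = 40.0
out = subprocess.check_output(
    [sys.executable, 'gasLeak.py', str(fuel)], text=True, timeout=5)
fuel = float(out.strip())   # only the exposed variable crosses the boundary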
+I can use the same port for each slave but I don't know how to achieve this.","I figured that I did not give enough information regarding my issue. +What I did was to create one parameterized test using @pytest.mark.parametrize; before the test, I collect the list of parameters. The collection queries a web server and receives a list of ""jobs"" to process. +Each test contains information on a port that it needs to bind to, do some work on and then exit; because the tests are running in parallel I need to make sure that the ports will be different. +Eventually, I make sure that the job ids will be in the range of 1024-65000 and use that for the port.",1.2,True,1,5808 +2018-11-10 23:45:59.803,how to detect if photo is mostly a document?,I think I am looking for something simpler than detecting document boundaries in a photo. I am only trying to flag photos which are mostly of documents rather than just a normal scene photo. Is this an easier problem to solve?,"Are the documents mostly white? If so, you could analyse the images for white content above a certain percentage. Generally text documents only have about 10% printed content on them in total.",0.0,False,1,5809 +2018-11-11 14:15:01.157,"Sending data to Django backend from RaspberryPi Sensor (frequency, bulk-update, robustness)","I'm currently working on a Raspberry Pi/Django project slightly more complex than I'm used to. (I either do local Raspberry Pi projects, or simple Django websites; never the two combined!) +The idea is to have two Raspberry Pis collecting information running a local Python script, each taking input from one HDMI feed (I've got all that part figured out - I THINK) using image processing. Now I want these two Raspberry Pis (that don't talk to each other) to connect to a backend server that would combine, store (and process) the information gathered by my two Pis +I'm expecting each Pi to be working on one frame per second, comparing it to the frame a second earlier (only a few different things it is looking out for), isolating any new event, and sending it to the server. I'm therefore expecting no more than a dozen binary timestamped data points per second. +Now what is the smart way to do it here? +Do I make contact with the backend every second? Every 10 seconds? +How do I make these bulk HttpRequests? Through a POST request? Through a simple text file that I send for the Django backend to process? (I have found some info about “bulk updates” for Django but I'm not sure that covers it entirely) +How do I make it robust? How do I make sure that all data was successfully transmitted before deleting the log locally? (If one call fails for a reason, or gets delayed, how do I make sure that the next one compensates for the lost info?) +Basically, I'm asking for advice on making an IoT-based project, where a sensor gathers bulk information and wants to send it to a backend server for processing, and how that archiving process should be designed. +PS: I expect the image processing part (at one fps) to be fast enough on my Pi Zero (as it is VERY simple); backlog at that level shouldn't be an issue.
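(Aside on the pytest-xdist row above: a tiny sketch of the scheme that answer describes, mapping a unique job id into a valid port range so parallel slaves never collide; the helper name is hypothetical.)
def port_for_job(job_id: int) -> int:
    # Deterministic and collision-free as long as job ids are unique mod the range.
    return 1024 + (job_id % (65000 - 1024))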
If you really can't afford for a single update to be lost, I would consider using a message queue such as RabbitMQ - the clients would add things directly to the queue and the server would pop them off in turn, with no need to involve HTTP requests at all. +Otherwise it would be much simpler to just POST each frame's data in some serialized format (ie JSON) and Django would simply deserialize and iterate through the list, saving each entry to the db. This should be fast enough for the rate you describe - I'd expect saving a dozen db entries to take significantly less than half a second - but this still leaves the problem of what to do if things get hung up for some reason. Setting a super-short timeout on the server will help, as would keeping the data to be posted until you have confirmation that it has been saved - and creating unique IDs in the client to ensure that the request is idempotent.",0.6730655149877884,False,1,5810 +2018-11-12 08:56:25.160,run python from Microsoft Dynamics,"I know i can access a Dynamics instance from a python script by using the oData API, but what about the other way around? Is it possible to somehow call a python script from within Dynamics and possible even pass arguments? +Would this require me to use custom js/c#/other code within Dynamics?","You won't be able to nativley execute a python script within Dynamics. +I would approach this by placing the Python script in a service that can be called via a web service call from Dynamics. You could make the call from form JavaScript or a Plugin using C#.",1.2,True,1,5811 +2018-11-12 20:04:05.643,Extracting URL from inside docx tables,"I'm pretty much stuck right now. +I wrote a parser in python3 using the python-docx library to extract all tables found in an existing .docx and store it in a python datastructure. +So far so good. Works as it should. +Now I have the problem that there are hyperlinks in these tables which I definitely need! Due to the structure (xml underneath) the docx library doesn't catch these. Neither the url nor the display text provided. I found many people having similar concerns about this, but most didn't seem to have 'just that' dilemma. +I thought about unpacking the .docx and scan the _ref document for the corresponding 'rid' and fill the actual data I have with the links found in the _ref xml. +Either way it seems seriously weary to do it that way, so I was wondering if there is a more pythonic way to do it or if somebody got good advise how to tackle this problem?","You can extract the links by parsing xml of docx file. +You can extract all text from the document by using document.element.getiterator() +Iterate all the tags of xml and extract its text. You will get all the missing data which python-docx failed to extract.",0.0,False,1,5812 +2018-11-12 23:39:45.557,"openpyxl how to read formula result after editing input data on the sheet? data_only=True gives me a ""None"" result","Using openpyxl, I'm able to read 2 numbers on a sheet, and also able to read their sum by loading the sheet with data_only=True. +However, when I alter the 2 numbers using openpyxl and then try to read the answer using data_only=True, it returns no output. How do I do this?",You can have either the value or the formula in openpyxl. It is precisely to avoid the confusion that this kind of edit could introduce that the library works like this. 
To evaluate the changed formulae you'll need to load the file in an app like MS Excel or LibreOffice that can evaluate the formulae and store the results.",0.1352210990936997,False,1,5813 +2018-11-13 01:35:08.450,inception v3 using tf.data?,"I'm using a bit of code that is derived from inception v3 as distributed by the Google folks, but it's now complaining that the queue runners used to read the data are deprecated (tf.train.string_input_producer in image_processing.py, and similar). Apparently I'm supposed to switch to tf.data for this kind of stuff. +Unfortunately, the documentation on tf.data isn't doing much to relieve my concern that I've got too much data to fit in memory, especially given that I want to batch it in a reusable way, etc. I'm confident that the tf.data stuff can do this; I just don't know how to do it. Can anyone point me to a full example of code that uses tf.data to deal with batches of data that won't all fit in memory? Ideally, it would simply be an updated version of the inception-v3 code, but I'd be happy to try and work with anything. Thanks!","Well, I eventually got this working. The various documents referenced in the comment on my question had what I needed, and I gradually figured out which parameters passed to the queue runners corresponded to which parameters in the tf.data stuff. +There was one gotcha that took a while for me to sort out. In the inception implementation, the number of examples used for validation is rounded up to be a multiple of the batch size; presumably the validation set is reshuffled and some examples are used more than once. (This does not strike me as great practice, but generally the number of validation instances is way larger than the batch size, so only a relative few are double counted.) +In the tf.data stuff, enabling shuffling and reuse is a separate thing and I didn't do it on the validation data. Then things broke because there weren't enough unique validation instances, and I had to track that down. +I hope this helps the next person with this issue. Unfortunately, my code has drifted quite far from Inception v3 and I doubt that it would be helpful for me to post my modification. Thanks!",0.3869120172231254,False,1,5814 +2018-11-13 20:39:25.877,how to reformat a text paragraph using python,"Hi, I was wondering how I could format a large text file by adding line breaks after certain characters or words. For instance, every time a comma appeared in the paragraph, could I use Python to make it output an extra line break?","You can use the ''.replace() method like so: +'roses can be blue, red, white'.replace(',' , ',\n') gives +'roses can be blue,\n red,\n white', effectively inserting '\n' after every ,",0.0,False,1,5815 +2018-11-14 23:48:25.957,Python detecting different extensions on files,"How do I make Python listen for changes to a folder on my desktop, so that every time a file is added, the program reads the file name and categorizes it based on the extension? +This is a part of a more detailed program but I don't know how to get started on this part. This part of the program detects when the user drags a file into a folder on his/her desktop and then moves that file to a different location based on the file extension.","Periodically read the files in the folder and compare to a set of files remaining after the last execution of your script. Use os.listdir() and isfile(). +Read the extension of new files and copy them to a directory based on internal rules.
This is a simple string slice, e.g., filename[-3:] for 3-character extensions. +Remove moved files from your set of last results. Use os.rename() or shutil.move(). +Sleep until the next execution is scheduled.",1.2,True,1,5816 +2018-11-15 02:12:27.683,How do I configure settings for my Python Flask app on GoDaddy,"This app is working fine on Heroku, but how do I configure it on GoDaddy using a custom domain? +When I navigate to the custom domain, it redirects to mcc.godaddy.com. +Which settings need to be changed?","The solution is to add a correct CNAME record and wait till the value you entered has propagated. +Go to DNS management and make the following changes: +In the 'Host' field enter 'www' and in the 'Points to' field add 'yourappname.herokuapp.com'",0.0,False,1,5817 +2018-11-15 03:51:30.570,Compare stock indices of different sizes Python,"I am using Python to try and do some macroeconomic analysis of different stock markets. I was wondering about how to properly compare indices of varying sizes. For instance, the Dow Jones is around 25,000 on the y-axis, while the Russell 2000 is only around 1,500. I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis. Is there some statistical method where I can do this same thing in Python?","I know that the website tradingview makes it possible to compare these two in their online charter. What it does is shrink/enlarge a background chart so that it matches the other on a new y-axis. +These websites rescale them by fixing the initial starting points for both indices at, say, 100. I.e. if the Dow is 25000 points and the S&P is 2500, then the Dow is divided by 250 to get to 100 initially, and the S&P by 25. Then you have two indices that start at 100, and you can then compare them side by side. +The other method (which works well only if you have two series) is to set the y-axis on the right-hand side for one series, and on the left-hand side for the other one.",1.2,True,1,5818 +2018-11-15 06:53:57.707,How to convert 2D matrix to 3D tensor without blending corresponding entries?,"I have data with the shape of (3000, 4); the features are (product, store, week, quantity). Quantity is the target. +So I want to reconstruct this matrix into a tensor, without blending the corresponding quantities. +For example, if there are 30 products, 20 stores and 5 weeks, the shape of the tensor should be (5, 20, 30), with the corresponding quantity. There won't be an entry like (store A, product X, week 3) twice in the entire data, so every store x product x week pair should have one corresponding quantity. +Any suggestions about how to achieve this, or is there any logical error? Thanks.","You can first go through each of your first three columns and count the number of different products, stores and weeks that you have. This will give you the shape of your new array, which you can create using numpy. Importantly now, you need to create a conversion matrix for each category. For example, if product is 'XXX', then you want to know to which row of the first dimension (as product is the first dimension of your array) 'XXX' corresponds; same idea for store and week. Once you have all of this, you can simply iterate through all lines of your existing array and assign the value of quantity to the correct location inside your new array based on the indices stored in your conversion matrices for each value of product, store and week.
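(Aside on the 2D-to-tensor row above: a minimal numpy sketch of the described index-mapping procedure; the data array is assumed to hold (product, store, week, quantity) rows.)
import numpy as np

# data: shape (3000, 4) with columns (product, store, week, quantity)
products, stores, weeks = (np.unique(data[:, i]) for i in range(3))
p_ix = {v: i for i, v in enumerate(products)}   # the "conversion" lookups
s_ix = {v: i for i, v in enumerate(stores)}
w_ix = {v: i for i, v in enumerate(weeks)}

tensor = np.zeros((len(weeks), len(stores), len(products)))
for prod, store, week, qty in data:
    tensor[w_ix[week], s_ix[store], p_ix[prod]] = qty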
As you said, it makes sense because there is a one-to-one correspondence.",0.0,False,1,5819 +2018-11-15 11:02:06.533,Installing packages to Anaconda Environments,"I've been having an issue with Anaconda, on two separate Windows machines. +I've downloaded and installed Anaconda. I know the commands, how to install libraries, and I've even installed tensorflow-gpu (which works). I also use Jupyter notebook and I'm quite familiar with it by this point. +The issue: +For some reason, when I create new environments and install libraries to that environment... it ALWAYS installs them to (base). Whenever I try to run code in a Jupyter notebook that is located in an environment other than (base), it can't find any of the libraries I need... because it's installing them to (base) by default. +I always ensure that I've activated the correct environment before installing any libraries. But it doesn't seem to make a difference. +Can anyone help me with this... am I doing something wrong?","Kind of fixed my problem. It has to do with launching Jupyter notebook. +After switching environments via the command prompt... the command 'jupyter notebook' runs Jupyter notebook via the default Python environment, regardless. +However, if I switch environments via Anaconda Navigator and launch Jupyter notebook from there, it works perfectly. +Maybe I'm missing a command via the prompt?",1.2,True,1,5820 +2018-11-15 11:25:13.747,How Do I store downloaded pdf files to Mongo DB,"I downloaded some PDFs and stored them in a directory. I need to insert them into a Mongo database with Python code; how could I do this? I need to store them with three columns (pdf_name, pdf_ganerateDate, FlagOfWork) or the like.","You can use GridFS. Please check this url: http://api.mongodb.com/python/current/examples/gridfs.html. +It will help you store any file to MongoDB and get it back. In another collection you can save the file metadata.",0.3869120172231254,False,1,5821 +2018-11-15 15:28:09.797,how to use pipenv to run file in current folder,"Using pipenv to create a virtual environment in a folder. +However, the environment seems to be in the path: +/Users/....../.local/share/virtualenvs/...... +And when I run the command pipenv run python train.py, I get the error: +can't open file 'train.py': [Errno 2] No such file or directory +How do I run a file in the folder where I created the virtual environment?","You need to be in the same directory as the file you want to run, then use: +pipenv run python train.py +Note: +You may be at the project's main directory while the file you need to run is inside a directory inside your project directory. +If you use Django to create your project, it will create two folders inside each other with the same name, so as a best practice change the top directory name to 'yourname-project', then inside the directory 'yourname' run the pipenv run python train.py command",1.2,True,1,5822 +2018-11-15 20:21:37.897,xgboost feature importance of categorical variable,"I am using XGBClassifier to train in Python and there are a handful of categorical variables in my training dataset. Originally, I planned to convert each of them into a few dummies before I throw in my data, but then the feature importance will be calculated for each dummy, not the original categorical ones. Since I also need to order all of my original variables (including numerical + categorical) by importance, I am wondering how to get the importance of my original variables?
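(Aside on the Mongo PDF row above: a minimal GridFS sketch; the database name is illustrative and the metadata fields mirror the question's column names.)
import datetime
import gridfs
from pymongo import MongoClient

db = MongoClient()['mydb']
fs = gridfs.GridFS(db)

with open('report.pdf', 'rb') as f:
    # Extra keyword arguments are stored on the file's metadata document.
    fs.put(f, filename='report.pdf',
           pdf_ganerateDate=datetime.datetime.utcnow(),
           FlagOfWork=False)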
Is it simply adding up?","You could probably get by with summing the individual categories' importances into their original, parent category. But, unless these features are high-cardinality, my two cents would be to report them individually. I tend to err on the side of being more explicit with reporting model performance/importance measures.",0.0,False,1,5823 +2018-11-15 20:22:49.817,How to run a briefly running Docker container on Azure on a daily basis?,"In the past, I've been using WebJobs to schedule small recurrent tasks that perform a specific background task, e.g., generating a daily summary of user activities. For each task, I've written a console application in C# that was published as an Azure Webjob. +Now I'd like to execute daily some Python code that is already working in a Docker container. I think I figured out how to get a container running in Azure. Right now, I want to minimize the operation cost since the container will only run for a duration of 5 minutes. Therefore, I'd like to somehow schedule that my container starts once per day (at 1am) and shuts down after completion. How can I achieve this setup in Azure?","I'd probably write a scheduled build job on VSTS\whatever to run at 1am daily to launch a container on Azure Container Instances. The container should shut down on its own when the program exits (so your program has to do that without help from outside).",1.2,True,1,5824 +2018-11-16 16:47:57.803,MongoDB - how can i set a documents limit to my capped collection?,"I'm fairly new to MongoDB. I need my Python script to query new entries from my database in real time, but the only way to do this seems to be replica sets (but my database is not a replica set) or a tailable cursor, which is only for capped collections. +From what I understood, a capped collection has a limit, but since I don't know how big my database is going to be and when I'm going to need to send data there, I am thinking of putting the limit at 3-4 million documents. Would this be possible? +How can I do that?","So do you want to increase the size of the capped collection? +If yes, and if you know the average document size, then you may define the size like: +db.createCollection(""sample"", { capped : true, size : 10000000, max : 5000000 } ); here 5000000 is the max number of documents, with a size limit of 10000000 bytes",0.3869120172231254,False,1,5825 +2018-11-17 02:57:21.293,Import aar of Android library in Python,"I have written an Android library and built an aar file. I want to write a Python program that uses the aar library. Is it possible to do that? If so, how do I do it? Thanks",There is no way to include all dependencies in your aar file. So according to the open source licenses you can add their sources to your project.,0.0,False,1,5826 +2018-11-17 12:15:24.270,GraphQL/Graphene for backend calls in Django's templates,"I just installed Graphene on my Django project and would like to use it also for the back-end templating. So far, I find only tutorials on how to use it for the front-end, with no mention of the back-end. +Should I suppose that it is not a good idea to use it instead of a SQL database? If yes, then why? Is there a downside in speed in comparison to a SQL database like MySQL? +What's the best option for retrieving the data for templates in Python? I mean, best for performance. +Thanks.","GraphQL is an API specification. It doesn't specify how data is stored, so it is not a replacement for a database.
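(Aside on the xgboost-importance row above: a minimal sketch of summing dummy importances back into their parent categories. It assumes an already-fitted model and a frame X whose dummy columns follow pandas' parent_level naming convention.)
import pandas as pd

imp = pd.Series(model.feature_importances_, index=X.columns)
parent = imp.index.str.split('_').str[0]   # 'color_red' -> 'color'
print(imp.groupby(parent).sum().sort_values(ascending=False))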
+If you're using GraphQL, you don't use Django templates to specify the GraphQL output, because GraphQL specifies the entire HTTP response from the web service, so this question doesn't make sense.",0.6730655149877884,False,1,5827 +2018-11-17 18:20:40.807,How to use F-score as error function to train neural networks?,"I am pretty new to neural networks. I am training a network in tensorflow, but the number of positive examples is much, much less than the number of negative examples in my dataset (it is a medical dataset). +So, I know that the F-score calculated from precision and recall is a good measure of how well the model is trained. +I have used error functions like cross-entropy loss or MSE before, but they are all based on accuracy calculation (if I am not wrong). But how do I use this F-score as an error function? Is there a tensorflow function for that? Or do I have to create a new one? +Thanks in advance.","The loss value and accuracy are different concepts. The loss value is used for training the NN. However, accuracy or other metrics are used to evaluate the training result.",0.0,False,1,5828 +2018-11-17 20:57:16.567,How to determine file path in Google colab?,"I mounted my drive using this: +from google.colab import drive +drive.mount('/content/drive/') +I have a file inside a folder that I want the path of; how do I determine the path? +Say the folder that contains the file is named 'x' inside my drive",The path will be /content/drive/My\ Drive/x/the_file.,1.2,True,2,5829 +2018-11-17 20:57:16.567,How to determine file path in Google colab?,"I mounted my drive using this: +from google.colab import drive +drive.mount('/content/drive/') +I have a file inside a folder that I want the path of; how do I determine the path? +Say the folder that contains the file is named 'x' inside my drive","The path as a parameter for a function will be /content/drive/My Drive/x/the_file, i.e. without a backslash inside My Drive",0.5457054096481145,False,2,5829 +2018-11-17 23:12:26.597,virtualenv - Birds Eye View of Understanding,"Using Windows +Learning about virtualenv. Here is my understanding of it and a few questions that I have. Please correct me if my understanding is incorrect. +virtualenvs are environments where your pip dependencies and their selected versions are stored for a particular project. A folder is made for your project and inside there are the dependencies. +I was told you would not want to save your .py scripts inside of the virtual ENV; if that's the case, how do I access the virtual env when I want to run that project? Open it up in the command line under source ENV/bin/activate then cd my way to where my script is stored? +By running pip freeze, that creates a requirements.txt file in that project folder that is just a .txt copy of the dependencies of that virtual env? +If I'm in a second virtualenv, how do I import another virtualenv's requirements? I've been to the documentation but I still don't get it. +$ env1/bin/pip freeze > requirements.txt +$ env2/bin/pip install -r requirements.txt +Guess I'm confused by the ""requirements"" description. Isn't it best practice to always call our requirements file requirements.txt? If that's the case, how does env2 know I want env1's requirements? +Thank you for any info or suggestions. Really appreciate the assistance.
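(Aside on the F-score row above: the F-score as usually computed is not differentiable, so it cannot be minimized directly. One known workaround, not a built-in tensorflow function, is a "soft" F1 computed on predicted probabilities; a minimal numpy sketch:)
import numpy as np

def soft_f1_loss(y_true, y_prob, eps=1e-7):
    # Soft counts: probabilities stand in for hard 0/1 predictions.
    tp = np.sum(y_true * y_prob)
    fp = np.sum((1 - y_true) * y_prob)
    fn = np.sum(y_true * (1 - y_prob))
    return 1 - 2 * tp / (2 * tp + fp + fn + eps)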
+I created a virtualenv C:\Users\admin\Documents\Enviorments>virtualenv django_1 +Using base prefix 'c:\\users\\admin\\appdata\\local\\programs\\python\\python37-32' +New python executable in C:\Users\admin\Documents\Enviorments\django_1\Scripts\python.exe Installing setuptools, pip, wheel...done. +How do I activate it? source django_1/bin/activate doesn't work? +I've tried: source C:\Users\admin\Documents\Enviorments\django_1/bin/activate Every time I get: 'source' is not recognized as an internal or external command, operable program or batch file.","* disclaimer * I mainly use conda environments instead of virtualenv, but I believe that most of this is the same across both of them and holds true in your case. +You should be able to access your scripts from any environment you are in. If you have virtenvA and virtenvB then you can access your script from inside either of your environments. All you would do is activate one of them and then run python /path/to/my/script.py, but you need to make sure any dependent libraries are installed. +Correct, but for clarity the requirements file contains a list of the dependencies by name only. It doesn't contain any actual code or packages. You can print out a requirements file but it should just be a list of package names and their version numbers. Like pandas 1.0.1 numpy 1.0.1 scipy 1.0.1 etc. +In the lines of code you have here you would export the dependencies list of env1 and then you would install these dependencies in env2. If env2 was empty then it will now just be a copy of env1; otherwise it will be the same but with all the packages of env1 added, and if it had different version numbers of some of the same packages then these would be overwritten",0.0,False,2,5830 +2018-11-17 23:12:26.597,virtualenv - Birds Eye View of Understanding,"Using Windows +Learning about virtualenv. Here is my understanding of it and a few questions that I have. Please correct me if my understanding is incorrect. +virtualenvs are environments where your pip dependencies and their selected versions are stored for a particular project. A folder is made for your project and inside there are the dependencies. +I was told you would not want to save your .py scripts inside of the virtual ENV; if that's the case, how do I access the virtual env when I want to run that project? Open it up in the command line under source ENV/bin/activate then cd my way to where my script is stored? +By running pip freeze, that creates a requirements.txt file in that project folder that is just a .txt copy of the dependencies of that virtual env? +If I'm in a second virtualenv, how do I import another virtualenv's requirements? I've been to the documentation but I still don't get it. +$ env1/bin/pip freeze > requirements.txt +$ env2/bin/pip install -r requirements.txt +Guess I'm confused by the ""requirements"" description. Isn't it best practice to always call our requirements file requirements.txt? If that's the case, how does env2 know I want env1's requirements? +Thank you for any info or suggestions. Really appreciate the assistance. +I created a virtualenv C:\Users\admin\Documents\Enviorments>virtualenv django_1 +Using base prefix 'c:\\users\\admin\\appdata\\local\\programs\\python\\python37-32' +New python executable in C:\Users\admin\Documents\Enviorments\django_1\Scripts\python.exe Installing setuptools, pip, wheel...done. +How do I activate it? source django_1/bin/activate doesn't work?
I've tried: source C:\Users\admin\Documents\Enviorments\django_1/bin/activate Every time I get: 'source' is not recognized as an internal or external command, operable program or batch file.","virtualenv simply creates a new Python environment for your project. Think of it as another copy of Python that you have in your system. A virtual environment is helpful for development, especially if you will need different versions of the same libraries. +The answer to your first question is yes: for each project that you use virtualenv with, you need to activate it first. After activating, when you run a python script (not just your project's scripts, but any python script), it will use the dependencies and configuration of the active Python environment. +The answer to the second question: pip freeze > requirements.txt will create the requirements file in the active folder, not in your project folder. So, let's say in your cmd/terminal you are in C:\Desktop; then the requirements file will be created there. If you're in the C:\Desktop\myproject folder, the file will be created there. The requirements file will contain the packages installed in the active virtualenv. +The answer to the 3rd question is related to the second. Simply, you need to write the full path of the other requirements file. So if you are in the first project and want to install packages from the second virtualenv, you run it like env2/bin/pip install -r /path/to/my/first/requirements.txt. If in your terminal you are in an active folder that does not have a requirements.txt file, then running pip install will give you an error: the command does not know which requirements file you want to use, so you specify it.",0.0,False,2,5830 +2018-11-19 08:19:34.017,How do I efficiently understand a framework with sparse documentation?,"I have the problem that for a project I need to work with a framework (Python) that has poor documentation. I know what it does since it is the back end of a running application. I also know that no framework is good if the documentation is bad and that I should probably code it myself. But I have a time constraint. Therefore my question is: Is there a cooking recipe on how to understand a poorly documented framework? +What I have tried until now is checking some functions and identifying the organizational units in the framework, but I am lacking a system to do it more effectively.","If I were you, with time constraints and bound to use a specific framework, I'd go about it in the following manner: +List down the use cases I desire to implement using the framework +Identify the APIs provided by the framework that help me implement the use cases +Prototype the use cases based on the available documentation and reading +The prototyping is not implementing the entire use case, but identifying the building blocks around the case and implementing them.
e.g., If my use case is to fetch the Students, along with their courses, and if I were using Hibernate to implement it, I would prototype the database access, validating how easily I am able to access the database using Hibernate, or how easily I am able to get the relational data by means of joining/aggregation etc. +The prototyping will help me figure out the possible limitations/bugs in the framework. If the limitations are more of show-stoppers, I will implement the supporting APIs myself; or I can take a call to scrap the entire framework and write one myself; whichever makes more sense.",0.3869120172231254,False,1,5831 +2018-11-20 02:45:03.200,Python concurrent.futures.ThreadPoolExecutor max_workers,"I have been searching for a long time on the net, but to no avail. Please help or try to give me some ideas on how to achieve this. +When I use the python module concurrent.futures.ThreadPoolExecutor(max_workers=None), I want to know what a suitable number for max_workers is. +I've read the official document. +I still don't know what number is suitable when I am coding. +Changed in version 3.5: If max_workers is None or not given, it will default to the number of processors on the machine, multiplied by 5, assuming that ThreadPoolExecutor is often used to overlap I/O instead of CPU work and the number of workers should be higher than the number of workers for ProcessPoolExecutor. +How can I understand ""max_workers"" better? +This is the first time I am asking a question; thank you very much.","You can take max_workers as the number of threads. +If you want to make the best use of the CPUs, you should keep them running (instead of sleeping). +Ideally, if you set it to None, there will be (CPU count * 5) threads at most. On average, each CPU has 5 threads to schedule. Then if one of them falls asleep, another thread will be scheduled.",0.9999092042625952,False,1,5832 +2018-11-20 20:23:47.973,wget with subprocess.call(),"I'm working on a domain fronting project. Basically I'm trying to use the subprocess.call() function to interpret the following command: +wget -O - https://fronteddomain.example --header 'Host: targetdomain.example' +With the proper domains, I know how to domain front; that is not the problem. I just need some help with writing the python subprocess.call() function with wget.","I figured it out using curl: +call([""curl"", ""-s"", ""-H"", ""Host: targetdomain.example"", ""-H"", ""Connection: close"", ""frontdomain.example""])",1.2,True,1,5833 +2018-11-20 23:58:45.450,How to install Poppler to be used on AWS Lambda,"I have to run pdf2image on my Python Lambda Function in AWS, but it requires poppler and poppler-utils to be installed on the machine. +I have searched in many different places for how to do that but could not find anything or anyone that has done it using lambda functions. +Would any of you know how to generate the poppler binaries, put them in my Lambda package and tell Lambda to use them? +Thank you all.","Hi @Alex Albracht, thanks for the compiled easy instructions! They helped a lot. But I really struggled with getting the lambda function to find the poppler path. So, I'll try to add that up with an effort to make it clear. +The binary files should go in a zip folder with a structure like: +poppler.zip -> bin/poppler +where the poppler folder contains the binary files. This zip folder can then be uploaded as a layer in AWS lambda. +For pdf2image to work, it needs the poppler path. This should be included in the lambda function in the format - ""/opt/bin/poppler"".
+For example, +poppler_path = ""/opt/bin/poppler"" +pages = convert_from_path(PDF_file, 500, poppler_path=poppler_path)",0.0,False,1,5834 +2018-11-21 13:30:25.713,"CPLEX Error 1016: Promotional version , use academic version CPLEX","I am using Python with CPLEX. When I finished my model I ran the program, and it throws me the following error: +CplexSolverError: CPLEX Error 1016: Promotional version. Problem size limits exceeded. +I have the IBM Academic CPLEX installed; how can I make Python recognize this and not the promotional version?","You can go to the directory where you installed CPLEX, for example D:\Cplex. +After that you will see a folder named cplex; click on that --> python --> choose the version of your python (e.g. 3.6), then choose the folder x64_win64, and you will see another file named cplex. +Copy this file into your python site-packages ^^ and then you will not be restricted.",1.2,True,1,5835 +2018-11-23 22:49:20.307,How can i create a persistent data chart with Flask and Javascript?,"I want to add a real-time chart to my Flask webapp. This chart, other than currently updated data, should contain historical data too. +At the moment I can create the chart and I can make it real time, but I have no idea how to make the data 'persistent', so I can't see what the chart looked like days or weeks ago. +I'm using a Javascript charting library, while data is being sent from my Flask script, but what's not really clear is how I can ""store"" my data on the Javascript side. At the moment, indeed, the chart will reset each time the page is loaded. +How would it be possible to accomplish that? Is there an example for it?","You can try to store the data in a database and/or in a file and extract it from there. +You can also try to use Dash, or you can put a menu with dates (like 21 September) on the right side and show the chart from that day. +For Dash you can look on YouTube at Sentdex",0.0,False,1,5836 +2018-11-25 13:55:55.643,How do I count how many items are in a specific row in my RDD,"As you can tell, I'm fairly new to using PySpark. My RDD is set out as follows: +(ID, First name, Last name, Address) +(ID, First name, Last name, Address) +(ID, First name, Last name, Address) +(ID, First name, Last name, Address) +(ID, First name, Last name, Address) + Is there any way I can count how many of these records I have stored within my RDD, such as counting all the IDs in the RDD, so that the output would tell me I have 5 of them? +I have tried using RDD.count(), but that just seems to return how many items I have in my dataset in total.","If you have an RDD of tuples like RDD[(ID, First name, Last name, Address)] then you can perform the operations below to do different types of counting. +Count the total number of elements/rows in your RDD: +rdd.count() +Count distinct IDs from your above RDD: select the ID element and then do a distinct on top of it. +rdd.map(lambda x : x[0]).distinct().count() +Hope this helps with the different sorts of counting. +Let me know if you need any further help here. +Regards, +Neeraj",0.0,False,1,5837 +2018-11-25 19:22:29.680,Adding charts to a Flask webapp,"I created a web app with Flask where I'll be showing data, so I need charts for it. +The problem is that I don't really know how to do that, so I'm trying to find the best way to do it.
I tried to use a Javascript charting library on my frontend and send the data to the chart using SocketIO, but the problem is that I need to send that data frequently, and at a certain point I'll be having a lot of data, so sending a huge load of data each time through AJAX/SocketIO would not be the best thing to do. +To solve this, I had this idea: could I generate the chart from my backend, instead of sending data to the frontend? I think it would be the better thing to do, since I won't have to send the data to the frontend each time and there won't be a need to generate a ton of data each time the page is loaded. +So would it be possible to generate a chart from my Flask code in Python and visualize it on my webpage? Is there a good library to do that?",Try Dash; it is a Python library for web charts,1.2,True,1,5838 +2018-11-25 22:35:57.257,How to strip off left side of binary number in Python?,"I got this binary number 101111111111000 +I need to strip off the 8 most significant bits and have 11111000 at the end. +I tried 101111111111000 << 8, but this results in 10111111111100000000000; it doesn't have the same effect as >>, which strips the lower bits. So how can this be done? The final result MUST BE binary type.","To achieve this for a number x with n digits, one can use this +x&(2**(len(bin(x))-2-8)-1) +-2 to strip 0b, -8 to strip the leftmost 8 bits +Simply said, it ANDs your number with just enough 1s that the 8 leftmost bits are set to 0.",0.0,False,1,5839 +2018-11-26 06:17:56.463,how do I clear a printed line and replace it with updated variable IDLE,"I need to clear a printed line, but so far I have found no good answers for using Python 3.7, IDLE on Windows 10. I am trying to make a simple program that prints a changing variable. But I don't want tons of new lines being printed. I want to try and get it all on one line. +Is it possible to print a variable that has been updated later on in the code? +Do remember I am doing this in IDLE, not kali or something like that. +Thanks for all your help in advance.","The Python language definition defines when bytes will be sent to a file, such as sys.stdout, the default file for print. It does not define what the connected device does with the bytes. +When running code from IDLE, sys.stdout is initially connected to IDLE's Shell window. Shell is not a terminal and does not interpret terminal control codes other than '\n'. The reasons are a) IDLE is aimed at program development, by programmers, rather than program running by users, and developers sometimes need to see all the output from a program; and b) IDLE is cross-platform, while terminal behaviors vary, depending on the system, settings, and current modes (such as insert versus overwrite). +However, I am planning to add an option to run code in an IDLE editor with sys.stdout directed to the local system terminal/console.",0.3869120172231254,False,1,5840 +2018-11-27 09:51:12.057,how to run python in eclipse with both py2 and py3?,"pre: +I installed both Python 2.7 and Python 3.7 +Eclipse has PyDev installed, with an interpreter configured for each py version +I have a project with some py scripts +question: +I choose one py file; I want to run it in py2, then I want to run it in py3 (manually). +I know that each file could have its own run configuration, but it can only choose one interpreter at a time. +I also know that py.exe could help you get the right version of python.
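(Aside on the binary row above: a quick runnable check of the masking answer; bit_length() is equivalent to the len(bin(x))-2 in the formula.)
x = 0b101111111111000
n_keep = x.bit_length() - 8            # drop the 8 most significant bits
stripped = x & ((1 << n_keep) - 1)
print(format(stripped, 'b'))           # the remaining low bits, in binary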
+I tried to add an interpreter with py.exe, but pydev keeps telling me that ""python stdlibs"" is necessary for an interpreter, while only python3's lib shows up. +So, is there a way to just right-click the file and choose ""run using interpreter xxx""? +Or, does pydev have the ability to choose interpreters by ""#! python2""/""#! python3"" at the file head?","I didn't understand what's the actual workflow you want... +Do you want to run each file on a different interpreter (say you have mod1.py and want to run it always on py2 and then mod2.py should be run always on py3) or do you want to run the same file on multiple interpreters (i.e.: you have mod1.py and want to run it both on py2 and py3) or something else? +So, please give more information on what's your actual problem and what you want to achieve... +Options to run a single file in multiple interpreters: +Always run with the default interpreter (so, make a regular run -- F9 to run the current editor -- change the default interpreter -- using Ctrl+shift+Alt+I -- and then rerun with Ctrl+F11). +Create a .sh/.bat which will always do 2 launches (initially configure it to just be a wrapper to launch with one python, then, after properly configuring it inside of PyDev that way, change it to launch python 2 times, once with py2 and once with py3 -- note that I haven't tested, but it should work in theory).",0.3869120172231254,False,1,5841 +2018-11-27 23:32:32.593,Python regex to identify capitalised single word lines in a text abstract,"I am looking for a way to extract words from text if they match the following conditions: +1) are capitalised +and +2) appear on a new line on their own (i.e. no other text on the same line). +I am able to extract all capitalised words with this code: + caps=re.findall(r""\b[A-Z]+\b"", mytext) +but can't figure out how to implement the second condition. Any help will be greatly appreciated.",please try putting \r\n at the beginning of your regex expression,-0.2012947653214861,False,1,5842 +2018-11-28 12:15:31.400,Python and Dart Integration in Flutter Mobile Application,"Can I do these two things: +Is there any library in dart for Sentiment Analysis? +Can I use Python (for Sentiment Analysis) in dart? +My main motive for these questions is that I'm working on an application in Flutter and I use sentiment analysis, and I have no idea how to do that. +Can anyone please help me solve this problem? +Or is there any way that I can do text sentiment analysis in the flutter app?","You can create an API using Python and then serve it to your mobile app (Flutter) using HTTP requests.",0.6730655149877884,False,1,5843 +2018-11-28 15:25:07.900,Why is LocationLocal: Relative Alt dropping into negative values on a stationary drone?,"I'm running the Set_Attitude_Target example on an Intel Aero with Ardupilot. The code is working as intended, but with a clear sensor error that becomes more evident the longer I run the experiment. +In short, the altitude report from the example is reporting that in LocationLocal there is a relative altitude of -0.01, which gets smaller and smaller the longer the drone stays on. +If the drone takes off, say, 1 meter, then the relative altitude is less than that, so the difference is being taken out. +I ran the same example with the throttle set to a low value so the drone would stay stationary while ""trying to take off"" with insufficient thrust.
For the 5 seconds that the drone was trying to take off, as well as after it gave up, disarmed and continued to run the code, the console read incremental losses to altitude, until I stopped it at -1 meter. +Where is this sensor error coming from and how do I remedy it?","As per Agustinus Baskara's comment on the original post, it would appear the built-in sensor is simply that bad - it can't be improved upon with software.",0.0,False,1,5844 +2018-11-29 00:38:11.560,The loss function and evaluation metric of XGBoost,"I am confused now about the loss functions used in XGBoost. Here is how I feel confused: +we have objective, which is the loss function that needs to be minimized; eval_metric: the metric used to represent the learning result. These two are totally unrelated (if we don't consider constraints such as that, for classification, only logloss and mlogloss can be used as eval_metric). Is this correct? If it is, then for a classification problem, how can you use rmse as a performance metric? +take two options for objective as an example, reg:logistic and binary:logistic. For 0/1 classifications, usually binary logistic loss, or cross entropy, should be considered as the loss function, right? So which of the two options is for this loss function, and what's the value of the other one? Say, if binary:logistic represents the cross entropy loss function, then what does reg:logistic do? +what's the difference between multi:softmax and multi:softprob? Do they use the same loss function and just differ in the output format? If so, that should be the same for reg:logistic and binary:logistic as well, right? +supplement for the 2nd problem +say, the loss function for a 0/1 classification problem should be +L = sum(y_i*log(P_i)+(1-y_i)*log(1-P_i)). So do I need to choose binary:logistic here, or reg:logistic, to let the xgboost classifier use the L loss function? If it is binary:logistic, then what loss function does reg:logistic use?","'binary:logistic' uses -(y*log(y_pred) + (1-y)*(log(1-y_pred))) +'reg:logistic' uses (y - y_pred)^2 +To get a total estimation of error we sum all errors and divide by the number of samples. +You can find this in the basics, when comparing Linear regression VS Logistic regression. +Linear regression uses (y - y_pred)^2 as the Cost Function +Logistic regression uses -(y*log(y_pred) + (1-y)*(log(1-y_pred))) as the Cost function +Evaluation metrics are a completely different thing. They are designed to evaluate your model. You can be confused by them because it is logical to use some evaluation metrics that are the same as the loss function, like MSE in regression problems. However, in binary problems it is not always wise to look at the logloss. My experience has taught me (in classification problems) to generally look at ROC AUC. +EDIT +according to xgboost documentation: +reg:linear: linear regression +reg:logistic: logistic regression +binary:logistic: logistic regression for binary classification, output +probability +So I'm guessing: +reg:linear: is, as we said, (y - y_pred)^2 +reg:logistic is -(y*log(y_pred) + (1-y)*(log(1-y_pred))) with predictions rounded at a 0.5 threshold +binary:logistic is plain -(y*log(y_pred) + (1-y)*(log(1-y_pred))) (returns the probability) +You can test it out and see if it does as I've edited.
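+A minimal sketch of such a check (my own illustration, assuming numpy; y and y_pred are made-up example arrays, not output from a real model): +import numpy as np +y = np.array([1, 0, 1, 1])               # true 0/1 labels +y_pred = np.array([0.9, 0.2, 0.7, 0.6])  # predicted probabilities +squared_error = np.mean((y - y_pred) ** 2)  # the (y - y_pred)^2 candidate loss +log_loss = -np.mean(y * np.log(y_pred) + (1 - y) * np.log(1 - y_pred))  # the cross entropy candidate +print(squared_error, log_loss)  # compare these against what xgboost reports for each objective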
If so, I will update the answer, otherwise, I'll just delete it :<",0.9999665971563038,False,1,5845 +2018-11-29 09:16:08.143,"After I modified my Python code in Pycharm, how to deploy the change to my Portainer?","Perhaps it is a basic question, but I am really not a professional in Portainer. +I have a local Portainer and PyCharm to manage the Python code. What should I do after I modify my code to deploy the change to the local Portainer? +Thx","If you have mounted the folder where your code resides directly in the container, the changes will also be applied in your container, so no further action is required. +If you have not mounted the folder to your container (for example if you copy the code when you build the image), you would have to rebuild the image. Of course this is a lot more work, so I would recommend using mounted volumes.",0.0,False,1,5846 +2018-11-30 04:23:07.330,"Sqlalchemy before_execute event - how to pass some external variable, say app user id?","I am trying to obtain an application variable (app user id) in the before_execute(conn, clauseelement, multiparam, param) method. The app user id is stored in a python http request object which I do not have any access to in the db event. +Is there any way to associate a piece of sqlalchemy-external data somewhere, to fetch it in the before_execute event later? +Appreciate your time and help.","Answering my own question here with a possible solution :) +From the http request, copied the piece of data to the session object. +Since the session binding was at engine level, copied the data from the session to the connection object in SessionEvent.after_begin(session, transaction, connection). [Had it been Connection level binding, we could have directly set the objects from the session object to the connection object.] +Now the data is available in the connection object and in before_execute() too.",0.0,False,1,5847 +2018-11-30 05:17:50.717,Session cookie is too large flask application,"I'm trying to load certain data using sessions (locally) and it has been working for some time, but now I get the following warning, and my data that was loaded through sessions is no longer being loaded. +The ""b'session'"" cookie is too large: the value was 13083 bytes but the header required 44 extra bytes. The final size was 13127 bytes but the limit is 4093 bytes. Browsers may silently ignore cookies larger than this. +I have tried using session.clear(). I also opened up chrome developer tools and tried deleting the cookies associated with 127.0.0.1:5000. I have also tried using a different secret key to use with the session. +It would be greatly appreciated if I could get some help on this, since I have been searching for a solution for many hours. +Edit: +I am not looking to increase my limit by switching to server-side sessions. Instead, I would like to know how I could clear my client-side session data so I can reuse it. +Edit #2: +I figured it out. I forgot that I pushed way more data to my database, so every time a query was performed, the session would fill up immediately.","It looks like you are using the client-side type of session that is set by default with Flask, which has a limited capacity of 4KB. You can use a server-side type session that will not have this limit, for example, by using a back-end file system (you save the session data in a file system on the server, not in the browser). To do so, set the configuration variable 'SESSION_TYPE' to 'filesystem'.
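+A minimal sketch of what that can look like (this assumes the Flask-Session extension, installed separately with pip install Flask-Session): +from flask import Flask +from flask_session import Session  # third-party extension providing server-side sessions +app = Flask(__name__) +app.config['SESSION_TYPE'] = 'filesystem'  # session data is written to files on the server +Session(app)  # after this, the session no longer has to fit in a 4KB cookie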
+You can check other alternatives for the 'SESSION_TYPE' variable in the Flask-Session documentation.",1.2,True,1,5848 +2018-11-30 12:32:27.360,not having to load a dataset over and over,"Currently in R, once you load a dataset (for example with read.csv), Rstudio saves it as a variable in the global environment. This ensures you don't have to load the dataset every single time you do a particular test or change. +With Python, I do not know which text editor/IDE will allow me to do this. E.g. - I want to load a dataset once, and then subsequently do all sorts of things with it, instead of having to load it every time I run the script. +Any pointers as to how to do this would be very useful","It depends how large your data set is. +For relatively smaller datasets you could look at installing Anaconda Python Jupyter notebooks. Really great for working with data and visualisation once the dataset is loaded. For larger datasets you can write some functions / generators to iterate efficiently through the dataset.",0.0,False,1,5849 +2018-11-30 14:16:09.813,pymysql - Get value from a query,"I am executing the query using pymysql in python. +select (sum(acc_Value)) from accInfo where acc_Name = 'ABC' +The purpose of the query is to get the sum of all the values in the acc_Value column for all the rows matching acc_Name = 'ABC'. +The output I am getting when using cur.fetchone() is +(Decimal('256830696'),) +Now, how do I get that value ""256830696"" alone in python? +Thanks in advance.","It's a tuple, just take the 0th index",-0.3869120172231254,False,1,5850 +2018-12-01 14:09:56.980,Saving objects from tk canvas,"I'm trying to make a save function in a program I'm doing for bubbling/ballooning drawings. The only thing I can't get to work is saving a ""work copy"". So if a drawing gets revision changes, you don't need to redo all the work. Just load the work copy, and add/remove/re-arrange bubbles. +I'm using tkinter and canvas, and create ovals and text for bubbles. But I can't figure out any good way to save the info from the oval/text objects. +I tried to pickle the whole canvas, but that seems like it won't work, after some googling. +And pickling every object when created seems to only save the object id, 1, 2 etc. And that also won't work since some bubbles will be moved and receive new coordinates. They might also have a different color, size etc. +In my next approach I'm thinking of saving the whole ""can.create_oval( x1, y1, x2, y2, fill = fillC, outli...."" as a string to a txt file and making the function recreate it with eval(). +Anyone have any good suggestions on how to approach this?","There is no built-in way to save and restore the canvas. However, the canvas has methods you can use to get all of the information about the items on the canvas. You can use these methods to save this information to a file and then read this file back and recreate the objects. +find_all - will return an ordered list of object ids for all objects on the canvas +type - will return the type of the object as a string (""rectangle"", ""circle"", ""text"", etc) +itemconfig - returns a dictionary with all of the configuration values for the object. The values in the dictionary are a list of values which includes the default value of the option at index 3 and the current value at index 4. You can use this to save only the option values that have been explicitly changed from the default.
+gettags - returns a list of tags associated with the object",1.2,True,1,5851 +2018-12-03 01:15:30.087,Different sized vectors in word2vec,"I am trying to generate three different sized output vectors, namely 25d, 50d and 75d. I am trying to do so by training the same dataset using the word2vec model. I am not sure how I can get three vectors of different sizes using the same training dataset. Can someone please help me get started on this? I am very new to machine learning and word2vec. Thanks","You run the code for one model three times, each time supplying a different vector_size parameter to the model initialization.",1.2,True,1,5852 +2018-12-03 03:23:29.990,data-item-url is on localhost instead of pythonanywhere (wagtail + snipcart project),"So instead of having data-item-url=""https://miglopes.pythonanywhere.com/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg/"" +it keeps on appearing +data-item-url=""http://localhost/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg/"" +How do I remove the localhost so my snipcart can work on checkout?","Without more details of where this tag is coming from it's hard to know for sure... but most likely you need to update your site's hostname in the Wagtail admin, under Settings -> Sites.",0.0,False,1,5853 +2018-12-03 21:09:40.843,Using MFCC's for voice recognition,"I'm currently using the Fourier transformation in conjunction with Keras for voice recognition (speaker identification). I have heard MFCC is a better option for voice recognition, but I am not sure how to use it. +I am using librosa in python (3) to extract 20 MFCC features. My question is: which MFCC features should I use for speaker identification? +In addition to this I am unsure on how to implement these features. What I would do is to get the necessary features and make one long vector input for a neural network. However, it is also possible to display them as colors, so could image recognition also be possible, or is this more aimed at speech, and not speaker recognition? +In short, I am unsure where I should start, as I am not very experienced with image recognition and have no idea where to start. +Thanks in advance!!","You can use MFCCs with dense layers / multilayer perceptron, but probably a Convolutional Neural Network on the mel-spectrogram will perform better, assuming that you have enough training data.",0.0,False,1,5854 +2018-12-04 18:22:55.240,How to add text to a file in python3,"Let's say I have the following file, +dummy_file.txt (contents below) +first line +third line +How can I add a line to that file right in the middle so the end result is: +first line +second line +third line +I have looked into opening the file with the append option, however that adds the line to the end of the file.","The standard file methods don't support inserting into the middle of a file. You need to read the file, add your new data to the data that you read in, and then re-write the whole file.",1.2,True,1,5855 +2018-12-05 08:13:04.893,DataFrame view in PyCharm when using pyspark,"I create a pyspark dataframe and I want to see it in the SciView tab in PyCharm when I debug my code (like I used to do when I worked with pandas). +It says ""Nothing to show"" (the dataframe exists, I can see it when I use the show() command). +Does someone know how to do it, or maybe there is no integration between pycharm and pyspark dataframes in this case?","Pycharm does not support spark dataframes, you should call the toPandas() method on the dataframe.
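+For example (a minimal sketch; spark_df is just a placeholder name for your pyspark dataframe): +pandas_df = spark_df.toPandas()  # a pandas dataframe does show up in PyCharm's SciView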
As @abhiieor mentioned in a comment, be aware that you can potentially collect a lot of data, you should first limit() the number of rows returned.",1.2,True,1,5856 +2018-12-08 01:12:11.607,"Is it possible to trigger a script or program if any data is updated in a database, like MySQL?","It doesn't have to be exactly a trigger inside the database. I just want to know how I should design this, so that when changes are made inside MySQL or SQL server, some script could be triggered.","One Way would be to keep a counter on the last updated row in the database, and then you need to keep polling(Checking) the database through python for new records in short intervals. +If the value in the counter is increased then you could use the subprocess module to call another Python script.",0.0,False,1,5857 +2018-12-09 22:47:38.660,Error for word2vec with GoogleNews-vectors-negative300.bin,"the version of python is 3.6 +I tried to execute my code but, there are still some errors as below: +Traceback (most recent call last): + +File + ""C:\Users\tmdgu\Desktop\NLP-master1\NLP-master\Ontology_Construction.py"", + line 55, in + , binary=True) +File ""E:\Program + Files\Python\Python35-32\lib\site-packages\gensim\models\word2vec.py"", + line 1282, in load_word2vec_format + raise DeprecationWarning(""Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead."") +DeprecationWarning: Deprecated. Use + gensim.models.KeyedVectors.load_word2vec_format instead. + +how to fix the code? or is the path to data wrong?","This is just a warning, not a fatal error. Your code likely still works. +""Deprecation"" means a function's use has been marked by the authors as no longer encouraged. +The function typically still works, but may not for much longer – becoming unreliable or unavailable in some future library release. Often, there's a newer, more-preferred way to do the same thing, so you don't trigger the warning message. +Your warning message points you at the now-preferred way to load word-vectors of that format: use KeyedVectors.load_word2vec_format() instead. +Did you try using that, instead of whatever line of code (not shown in your question) that you were trying before seeing the warning?",0.6730655149877884,False,1,5858 +2018-12-11 00:40:44.053,Use of Breakpoint Method,"I am new to python and am unsure of how the breakpoint method works. Does it open the debugger for the IDE or some built-in debugger? +Additionally, I was wondering how that debugger would be able to be operated. +For example, I use Spyder, does that mean that if I use the breakpoint() method, Spyder's debugger will open, through which I could the Debugger dropdown menu, or would some other debugger open? +I would also like to know how this function works in conjunction with the breakpointhook() method.","No, debugger will not open itself automatically as a consequence of setting a breakpoint. +So you have first set a breakpoint (or more of them), and then manually launch a debugger. +After this, the debugger will perform your code as usually, but will stop performing instructions when it reaches a breakpoint - the instruction at the breakpoint itself it will not perform. It will pause just before it, given you an opportunity to perform some debug tasks, as + +inspect variable values, +set variables manually to other values, +continue performing instructions step by step (i. e. only the next instruction), +continue performing instructions to the next breakpoint, +prematurely stop debugging your program. 
+ +This is the common scenario for all debuggers of all programming languages (and their IDEs). +For IDEs, launching a debugger will + +enable or reveal debugging instructions in their menu system, +show a toolbar for them and will, +enable hot keys for them. + +Without setting at least one breakpoint, most debuggers perform the whole program without a pause (as launching it without a debugger), so you will have no opportunity to perform any debugging task. +(Some IDEs have an option to launch a debugger in the ""first instruction, then a pause"" mode, so you need not set breakpoints in advance in this case.) + +Yes, the breakpoint() built-in function (introduced in Python 3.7) stops executing your program, enters it in the debugging mode, and you may use Spyder's debugger drop-down menu. +(It isn't a Spyders' debugger, only its drop-down menu; the used debugger will be still the pdb, i. e. the default Python DeBugger.) +The connection between the breakpoint() built-in function and the breakpointhook() function (from the sys built-in module) is very straightforward - the first one directly calls the second one. +The natural question is why we need two functions with the exactly same behavior? +The answer is in the design - the breakpoint() function may be changed indirectly, by changing the behavior of the breakpointhook() function. +For example, IDE creators may change the behavior of the breakpointhook() function so that it will launch their own debugger, not the pdb one.",1.2,True,1,5859 +2018-12-11 01:14:39.167,Is there an appropriate version of Pygame for Python 3.7 installed with Anaconda?,"I'm new to programming and I just downloaded Anaconda a few days ago for Windows 64-bit. I came across the Invent with Python book and decided I wanted to work through it so I downloaded that too. I ended up running into a couple issues with it not working (somehow I ended up with Spyder (Python 2.7) and end=' ' wasn't doing what it was supposed to so I uninstalled and reinstalled Anaconda -- though originally I did download the 3.7 version). It looked as if I had the 2.7 version of Pygame. I'm looking around and I don't see a Pygame version for Python 3.7 that is compatible with Anaconda. The only ones I saw were for Mac or not meant to work with Anaconda. This is all pretty new to me so I'm not sure what my options are. Thanks in advance. +Also, how do I delete the incorrect Pygame version?","just use pip install pygame & python will look for a version compatible with your installation. +If you're using Anaconda and pip doesn't work on CMD prompt, try using the Anaconda prompt from start menu.",0.6730655149877884,False,1,5860 +2018-12-11 17:54:00.677,python-hypothesis: Retrieving or reformatting a falsifying example,"Is it possible to retrieve or reformat the falsifying example after a test failure? The point is to show the example data in a different format - data generated by the strategy is easy to work with in the code but not really user friendly, so I'm looking at how to display it in a different form. Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something?","Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something? 
+ +The example database uses a private format and only records the choices a strategy made to generate the falsifying example, so there's no way to extract the data of the example short of re-running the test. +Stuart's recommendation of hypothesis.note(...) is a good one.",0.0,False,1,5861 +2018-12-11 19:43:33.823,Template rest one day from the date,"In my view.py I obtain a date from my MSSQL database in this format 2018-12-06 00:00:00.000 so I pass that value as context like datedb and in my html page I render it like this {{datedb|date:""c""}} but it shows the date with one day less like this: + +2018-12-05T18:00:00-06:00 + +Is the 06 not the 05 day. +why is this happening? how can I show the right date?","One way of solve the problem was chage to USE_TZ = False has Willem said in the comments, but that gives another error so I found the way to do it just adding in the template this {% load tz %} and using the flter |utc on the date variables like datedb|utc|date:'Y-m-d'.",1.2,True,1,5862 +2018-12-12 12:15:09.190,Add full anaconda package list to existing conda environment,"I know how to add single packages and I know that the conda create command supports adding a new environment with all anaconda packages installed. +But how can I add all anaconda packages to an existing environment?","I was able to solve the problem as following: + +Create a helper env with anaconda: conda create -n env_name anaconda +Activate that env conda activate env_name +Export packages into specification file: conda list --explicit > spec-file.txt +Activate the target environment: activate target_env_name +Import that specification file: conda install --file spec-file.txt",0.3869120172231254,False,1,5863 +2018-12-12 17:20:31.293,how to compare two text document with tfidf vectorizer?,"I have two different text which I want to compare using tfidf vectorization. +What I am doing is: + +tokenizing each document +vectorizing using TFIDFVectorizer.fit_transform(tokens_list) + +Now the vectors that I get after step 2 are of different shape. +But as per the concept, we should have the same shape for both the vectors. Only then the vectors can be compared. +What am I doing wrong? Please help. +Thanks in advance.","As G. Anderson already pointed out, and to help the future guys on this, when we use the fit function of TFIDFVectorizer on document D1, it means that for the D1, the bag of words are constructed. +The transform() function computes the tfidf frequency of each word in the bag of word. +Now our aim is to compare the document D2 with D1. It means we want to see how many words of D1 match up with D2. Thats why we perform fit_transform() on D1 and then only the transform() function on D2 would apply the bag of words of D1 and count the inverse frequency of tokens in D2. +This would give the relative comparison of D1 against D2.",1.2,True,1,5864 +2018-12-13 13:43:34.987,"python, dictionaries how to get the first value of the first key","So basically I have a dictionary with x and y values and I want to be able to get only the x value of the first coordinate and only the y value of the first coordinate and then the same with the second coordinate and so on, so that I can use it in an if-statement.","if the values are ordered in columns just use + +x=your_variable[:,0] y=your_variable[:,1] + +i think",0.3869120172231254,False,1,5865 +2018-12-15 21:55:17.020,how to install tkinter with Pycharm?,"I used sudo apt-get install python3.6-tk and it works fine. 
Tkinter works if I open python in terminal, but I cannot get it installed on my Pycharm project. The pip install command says it cannot find Tkinter. I cannot find python-tk in the list of possible installs either. +Is there a way to get Tkinter just standard into every virtualenv when I make a new project in Pycharm? +Edit: on Linux Mint +Edit2: It is clearly a problem of Pycharm not finding tkinter, guys. If I run my local python file from terminal it works fine. It's just that for some reason Pycharm cannot find anything tkinter related.","Python already has tkinter installed. It is a base module, like random or time, therefore you don't need to install it.",-0.0679224682270276,False,1,5866 +2018-12-18 01:57:32.877,Print output to console while redirect the output to a file in linux,"I am using python in linux and tried to use the command line to print out the output log while redirecting the output and error to a txt file. However, I searched and tried methods such as +python [program] 2>&1 | tee output.log +But it just redirected the output to the output.log and the printed content disappeared. I wonder how I could print the output to the console while saving/redirecting it to output.log? It would be useful when we hope to tune the parameters while keeping an eye on the output.","You can create a screen like this: screen -L and then run the python script in this screen, which would give the output to the console and also save it to the file screenlog.0. You could leave the screen by using Ctrl+A+D while the script is running and check the script output by reattaching to the screen with screen -r. Also, in the screen, you won't be able to scroll past the current screen view.",0.0,False,1,5867 +2018-12-18 10:17:19.160,Regex for Sentences in python,"I have one more query. +Here are two sentences: +[1,12:12] call basic_while1() Error Code: 1046. No database selected +[1,12:12] call add() Asdfjgg Error Code: 1046. No database selected +[1,12:12] call add() +[1,12:12] +Error Code: 1046. No database selected +Now I want to get output like this: +['1','12:12',""call basic_while1""] , ['1','12:12', 'call add() Asdfjgg'],['1','12:12', 'call add()'],['1','12:12'],['','','',' Error Code: 1046. No database selected'] +I used this r'^\[(\d+),(\s[0-9:]+)\]\s+(.+) as my main regex, then I modified it as needed, but it didn't help me. +I want to cut everything exactly before ""Error Code"". +How do I do that?","Basically you asked to get everything before the ""Error Code"": +I want to cut everything exact before ""Error Code"" +So it is simple; try: find = re.search('((.)+)(\sError Code)*',s) and find.group(1) will give you '[1,12:12] call add() Asdfjgg', which is what you wanted. +If, after you get that string, you want the list that you requested: +desired_list = find.group(1).replace('[','').replace(']','').replace(',',' ').split()",0.0,False,1,5868 +2018-12-18 23:09:13.550,install numpy on python 3.5 Mac OS High sierra,"I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work. +I have it on python2.7, but I would also like to install it for the next versions. +Currently, I have installed python 2.7, python 3.5, and python 3.7.
+I tried to install numpy using: + +brew install numpy --with-python3 (no error) +sudo port install py35-numpy@1.15.4 (no error) +sudo port install py37-numpy@1.15.4 (no error) +pip3.5 install numpy (gives ""Could not find a version that satisfies the requirement numpy (from versions: ) +No matching distribution found for numpy"" ) + +I can tell that it is not installed because when I type python3 and then import numpy as np gives ""ModuleNotFoundError: No module named 'numpy'"" +Any ideas on how to make it work? +Thanks in advance.","First, you need to activate the virtual environment for the version of python you wish to run. After you have done that then just run ""pip install numpy"" or ""pip3 install numpy"". +If you used Anaconda to install python then, after activating your environment, type conda install numpy.",1.2,True,2,5869 +2018-12-18 23:09:13.550,install numpy on python 3.5 Mac OS High sierra,"I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work. +I have it on python2.7, but I would also like to install it for the next versions. +Currently, I have installed python 2.7, python 3.5, and python 3.7. +I tried to install numpy using: + +brew install numpy --with-python3 (no error) +sudo port install py35-numpy@1.15.4 (no error) +sudo port install py37-numpy@1.15.4 (no error) +pip3.5 install numpy (gives ""Could not find a version that satisfies the requirement numpy (from versions: ) +No matching distribution found for numpy"" ) + +I can tell that it is not installed because when I type python3 and then import numpy as np gives ""ModuleNotFoundError: No module named 'numpy'"" +Any ideas on how to make it work? +Thanks in advance.","If running pip3.5 --version or pip3 --version works, what is the output when you run pip3 freeze? If there is no output, it indicates that there are no packages installed for the Python 3 environment and you should be able to install numpy with pip3 install numpy.",0.0,False,2,5869 +2018-12-19 15:33:16.960,Python Vscode extension - can't change remote jupyter notebook kernel,"I've got the updated Python VSCode extension installed and it works great. I'm able to use the URL with the token to connect to a remote Jupyter notebook. I just cannot seem to figure out how to change the kernel on the remote notebook for use in VSCode. +If I connect to the remote notebook through a web browser, I can see my two environments through the GUI and change kernels. Is there a similar option in the VSCode extension?","Run the following command in vscode: +Python: Select interpreter to start Jupyter server +It will allow you to choose the kernel that you want.",0.0,False,2,5870 +2018-12-19 15:33:16.960,Python Vscode extension - can't change remote jupyter notebook kernel,"I've got the updated Python VSCode extension installed and it works great. I'm able to use the URL with the token to connect to a remote Jupyter notebook. I just cannot seem to figure out how to change the kernel on the remote notebook for use in VSCode. +If I connect to the remote notebook through a web browser, I can see my two environments through the GUI and change kernels. Is there a similar option in the VSCode extension?","The command that worked for me in vscode: +Notebook: Select Notebook Kernel",0.0,False,2,5870 +2018-12-21 02:43:43.240,Backtesting a Universe of Stocks,"I would like to develop a trend following strategy via back-testing a universe of stocks; lets just say all NYSE or S&P500 equities. 
I am asking this question today because I am unsure how to handle the storage/organization of the massive amounts of historical price data. +After multiple hours of research I am here, asking for your experience and awareness. I would be extremely grateful for any information/awareness you can share on this topic +Personal Experience background: +-I know how to code. Was an Electrical Engineering major, not a CS major. +-I know how to pull in stock data for individual tickers into excel. +Familiar with using filtering and custom studies on ThinkOrSwim. +Applied Context: +From 1995 to today let's evaluate the best performing equities on a relative strength/momentum basis. We will look to compare many technical characteristics to develop a strategy. The key to this is having data for a universe of stocks that we can run backtests on using python, C#, R, or any other coding language. We can then determine possible strategies by assessing the returns, the omega ratio, median excess returns, and Jensen's alpha (measured weekly) of entries and exits that are technically driven. +Here's where I am having trouble figuring out what the next step is: +-Loading data for all S&P500 companies into a single excel workbook is just not going to work. It's too much data for excel to handle, I feel. Each ticker is going to have multiple MB of price data. +-What is the best way to get and then store the price data for each ticker in the universe? Are we looking at something like SQL or Microsoft access here? I don't know; I don't have enough awareness on the subject of handling lots of data like this. What are your thoughts? +I have used ToS to filter stocks based off of true/false parameters over a period of time in the past; however the capabilities of ToS are limited. +I would like a more flexible backtesting engine like code written in python or C#. Not sure if Rscript is of any use. - Maybe, there are libraries out there that I do not have awareness of that would make this all possible? If there are let me know. +I am aware that Quantopia and other web based Quant platforms are around. Are these my best bets for backtesting? Any thoughts on them? +Am I making this too complicated? +Backtesting a strategy on a single equity or several equities isn't a problem in excel, ToS, or even Tradingview. But with lots of data I'm not sure what the best option is for storing that data and then using a python script or something to perform the back test. +Random Final thought:-Ultimately I would like to explore some AI assistance with optimizing strategies that were created based off parameters. I know this is a thing but not sure where to learn more about this. If you do please let me know. +Thank you guys. I hope this wasn't too much. If you can share any knowledge to increase my awareness on the topic I would really appreciate it. +Twitter:@b_gumm","The amount of data is too much for EXCEL or CALC. Even if you want to screen only 500 stocks from the S&P 500, you will get 2.2 million rows (approx. 220 days/year * 20 years * 500 stocks). For this amount of data, you should use a SQL database like MySQL. It is performant enough to handle this amount of data. But you have to find a way for updating. If you get the complete time series daily and store it into your database, this process can take approx. 1 hour. You could also use delta downloads but be aware of corporate actions (e.g. splits). +I don't know Quantopia, but I know a similar backtesting service where I have created a python backtesting script last year.
The outcome was quite different from what I had expected. The research result was that the backtesting service was calculating wrong results because of wrong data. So be cautious about the results.",0.0,False,1,5871 +2018-12-21 11:15:31.803,Date Range for Facebook Graph API request on posts level,"I am working on a tool for my company created to get data from our Facebook publications. It has not been working for a while, so I have to get all the historical data from June to November 2018. +My two scripts (one that gets the title and type of publication, and the other that gets the number of link clicks) are working well to get data from the last pushes, but when I try to add a date range in my Graph API request, I have some issues: +the regular query is [page_id]/posts?fields=id,created_time,link,type,name +the query for historical data is [page_id]/posts?fields=id,created_time,link,type,name,since=1529280000&until=1529712000, as the API is supposed to work with unixtime +I get perfect results for regular use, but the results for historical data only show video publications in Graph API Explorer, with a debug message saying: +The since field does not exist on the PagePost object. +Same for the ""until"" field when not using ""since"". I tried to replace ""posts/"" with ""feed/"" but it returned the exact same result... +Do you have any idea of how to get all the publications from a Page I own in a certain date range?","So it seems that it is not possible to request this kind of data, unfortunately; third party services must be used...",0.0,False,1,5872 +2018-12-23 03:14:14.787,Pyautogui mouse click on different resolution,"I'm writing a script for automating some tasks at my job. However, I need to make my script portable and try it on different screen resolutions. +So far I've tried to multiply my coordinates by the ratio between the old and new resolutions, but this doesn't work properly. +Do you know how I can convert my X, Y coordinates for mouse clicks to make them work on different resolutions?","Quick question: Are you trying to get it to click on certain buttons? (i.e. buttons that look the same on every computer you plug it into) And by portable, do you mean on a thumb drive (usb)? +You may be able to take an image of the button (i.e. cropping a screenshot) and pass it on to the opencv module, which has an image-within-image searching ability. You can pass that image along with a screenshot (using pyautogui.screenshot()) and it will return the (x,y) coordinates of the button; pass that on to pyautogui.moveTo(x,y) and pyautogui.click(), and it might be able to work. You might have to describe the action you are trying to get Pyautogui to do a little better.",0.3869120172231254,False,1,5873 +2018-12-24 13:58:52.250,extracting text just after a particular tag using beautifulsoup?,"I need to extract the text just after the strong tag from the html page given below. How can I do it using beautiful soup? It is causing me a problem as it doesn't have any class or id, so the only way to select this tag is using text. +{strong}Name:{/strong} Sam smith{br} +Required result +Sam smith","Thanks for all your answers, but I was able to do this as follows: +b_el = soup.find('strong',text='Name:') +print b_el.next_sibling +This works fine for me. This prints just the next sibling; how can I print the next 2 siblings, is there any way?",-0.3869120172231254,False,1,5874 +2018-12-25 10:26:24.547,How to train your own model in AWS Sagemaker?,"I just started with AWS and I want to train my own model with my own dataset.
I have my model as keras model with tensorflow backend in Python. I read some documentations, they say I need a Docker image to load my model. So, how do I convert keras model into Docker image. I searched through internet but found nothing that explained the process clearly. How to make docker image of keras model, how to load it to sagemaker. And also how to load my data from a h5 file into S3 bucket for training? Can anyone please help me in getting clear explanation?","You can convert your Keras model to a tf.estimator and train using the TensorFlow framework estimators in Sagemaker. +This conversion is pretty basic though, I reimplemented my models in TensorFlow using the tf.keras API which makes the model nearly identical and train with the Sagemaker TF estimator in script mode. +My initial approach using pure Keras models was based on bring-your-own-algo containers similar to the answer by Matthew Arthur.",0.0,False,1,5875 +2018-12-25 21:14:39.453,Installing Python Dependencies locally in project,"I am coming from NodeJS and learning Python and was wondering how to properly install the packages in requirements.txt file locally in the project. +For node, this is done by managing and installing the packages in package.json via npm install. However, the convention for Python project seems to be to add packages to a directory called lib. When I do pip install -r requirements.txt I think this does a global install on my computer, similar to nodes npm install -g global install. How can I install the dependencies of my requirements.txt file in a folder called lib?","use this command +pip install -r requirements.txt -t ",1.2,True,1,5876 +2018-12-26 11:44:32.850,P4Python check if file is modified after check-out,I need to check-in the file which is in client workspace. Before check-in i need to verify if the file has been changed. Please tell me how to check this.,Use the p4 diff -sr command. This will do a diff of opened files and return the names of ones that are unchanged.,1.2,True,1,5877 +2018-12-26 21:26:16.360,How can I source two paths for the ROS environmental variable at the same time?,"I have a problem with using the rqt_image_view package in ROS. Each time when I type rqt_image_view or rosrun rqt_image_view rqt_image_view in terminal, it will return: + +Traceback (most recent call last): + File ""/opt/ros/kinetic/bin/rqt_image_view"", line 16, in + plugin_argument_provider=add_arguments)) + File ""/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui/main.py"", line 59, in main + return super(Main, self).main(argv, standalone=standalone, plugin_argument_provider=plugin_argument_provider, plugin_manager_settings_prefix=str(hash(os.environ['ROS_PACKAGE_PATH']))) + File ""/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/main.py"", line 338, in main + from python_qt_binding import QT_BINDING + ImportError: cannot import name QT_BINDING + +In the /.bashrc file, I have source : + +source /opt/ros/kinetic/setup.bash + source /home/kelu/Dropbox/GET_Lab/leap_ws/devel/setup.bash --extend + source /eda/gazebo/setup.bash --extend + +They are the default path of ROS, my own working space, the robot simulator of our university. I must use all of them. I have already finished many projects with this environmental variable setting. However, when I want to use the package rqt_image_view today, it returns the above error info. 
+When I run echo $ROS_PACKAGE_PATH, I get the return: + +/eda/gazebo/ros/kinetic/share:/home/kelu/Dropbox/GET_Lab/leap_ws/src:/opt/ros/kinetic/share + +And echo $PATH + +/usr/local/cuda/bin:/opt/ros/kinetic/bin:/usr/local/cuda/bin:/usr/local/cuda/bin:/home/kelu/bin:/home/kelu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin + +Then I only source the /opt/ros/kinetic/setup.bash ,the rqt_image_view package runs!! +It seems that, if I want to use rqt_image_view, then I can not source both /opt/ros/kinetic/setup.bash and /home/kelu/Dropbox/GET_Lab/leap_ws/devel/setup.bash at the same time. +Could someone tell me how to fix this problem? I have already search 5 hours in google and haven't find a solution.","Different solutions to try: + +It sounds like the first path /eda/gazebo/ros/kinetic/share or /home/kelu/Dropbox/GET_Lab/leap_ws/src has an rqt_image_view package that is being used. Try to remove that dependency. +Have you tried switching the source files being sourced? This depends on how the rqt_image_view package was built, such as by source or through a package manager. + +Initially, it sounds like there is a problem with the paths being searched or wrong package being run since the package works with the default ROS environment setup.",0.0,False,1,5878 +2018-12-27 09:49:47.840,how to constrain scipy curve_fit in positive result,"I'm using scipy curve_fit to curve a line for retention. however, I found the result line may produce negative number. how can i add some constrain? +the 'bounds' only constrain parameters not the results y","One of the simpler ways to handle negative value in y, is to make a log transformation. Get the best fit for log transformed y, then do exponential transformation for actual error in the fit or for any new value prediction.",0.0,False,1,5879 +2018-12-27 10:57:53.617,Vpython using Spyder : how to prevent browser tab from opening?,"I am using vpython library in spyder. After importing the library when I call simple function like print('x') or carry out any assignment operation and execute the program, immediately a browser tab named localhost and port address opens up and I get the output in console {if I used print function}. +I would like to know if there is any option to prevent the tab from opening and is it possible to make the tab open only when it is required. +PS : I am using windows 10, chrome as browser, python 3.5 and spyder 3.1.4.",There is work in progress to prevent the opening of a browser tab when there are no 3D objects or graph to display. I don't know when this will be released.,0.0,False,1,5880 +2018-12-27 16:54:21.267,ImportError: cannot import name 'AFAVSignature',"I get this error after already having installed autofocus when I try to run a .py file from the command line that contains the line: +from autofocus import Autofocus2D +Output: +ImportError: cannot import name 'AFAVSignature' +Is anyne familiar with this package and how to import it? +Thanks","It doesn't look like the library is supported for python 3. I was getting the same error, but removed that line from init.py and found that there was another error with of something like 'print e' not working, so I put the line back in and imported with python2 and it worked.",0.0,False,1,5881 +2018-12-28 00:04:02.473,how can I find out which python virtual environment I am using?,I have several virtual environment in my computer and sometimes I am in doubt about which python virtual environment I am using. 
Is there an easy way to find out which virtual environment I am connected to?,"Usually it's set to display in your prompt. You can also try typing in which python or which pip in your terminal to see if it points to you venv location, and which one. (Use where instead of which on Windows.)",0.9974579674738372,False,2,5882 +2018-12-28 00:04:02.473,how can I find out which python virtual environment I am using?,I have several virtual environment in my computer and sometimes I am in doubt about which python virtual environment I am using. Is there an easy way to find out which virtual environment I am connected to?,"From a shell prompt, you can just do echo $VIRTUAL_ENV (or in Windows cmd.exe, echo %VIRTUAL_ENV%). +From within Python, sys.prefix provides the root of your Python installation (the virtual environment if active), and sys.executable tells you which Python executable is running your script.",0.9903904942256808,False,2,5882 +2018-12-30 14:34:30.510,how to delete django relation and rebuild model,"ive made a mistake with my django and messed up my model +I want to delete it & then recreate it - how do I do that? +I get this when I try to migrate - i just want to drop it +relation ""netshock_todo"" already exists +Thanks in advance","Delete all of your migrations file except __init__.py +Then go to database and find migrations table, delete all row in migrations table. Then run makemigrations and migrate command",1.2,True,1,5883 +2018-12-31 14:33:34.473,Scrapy shell doesn't crawl web page,"I am trying to use Scrapy shell to try and figure out the selectors for zone-h.org. I run scrapy shell 'webpage' afterwards I tried to view the content to be sure that it is downloaded. But all I can see is a dash icon (-). It doesn't download the page. I tried to enter the website to check if my connection to the website is somehow blocked, but it was reachable. I tried setting user agent to something more generic like chrome but no luck there either. The website is blocking me somehow but I don't know how can I bypass it. I digged through the the website if they block crawling and it doesn't say it is forbidden to crawl it. Can anyone help out?","Can you use scrapy shell ""webpage"" on another webpage that you know works/doesn't block scraping? +Have you tried using the view(response) command to open up what scrapy sees in a web browser? +When you go to the webpage using a normal browser, are you redirected to another, final homepage? +- if so, try using the final homepage's URL in your scrapy shell command +Do you have firewalls that could interfere with a Python/commandline app from connecting to the internet?",0.0,False,1,5884 +2019-01-03 23:22:36.667,How to add to pythonpath in virtualenvironment,"On my windows machine I created a virtual environement in conda where I run python 3.6. I want to permanently add a folder to the virtual python path environment. If I append something to sys.path it is lost on exiting python. +Outside of my virtual enviroment I can just add to user variables by going to advanced system settings. I have no idea how to do this within my virtual enviroment. +Any help is much appreciated.","If you are on Windows 10+, this should work: +1) Click on the Windows button on the screen or on the keyboard, both in the bottom left section. +2) Type ""Environment Variables"" (without the quotation marks, of course). 
+3) Click on the option that says something like ""Edit the System Environment Variables"" +4) Click on the ""Advanced Tab,"" and then click ""Environment Variables"" (Near the bottom) +5) Click ""Path"" in the top box - it should be the 3rd option - and then click ""Edit"" (the top one) +6) Click ""New"" at the top, and then add the path to the folder you want to create. +7) Click ""Ok"" at the bottom of all the pages that were opened as a result of the above-described actions to save. +That should work, please let me know in the comments if it doesn't.",-0.2012947653214861,False,1,5885 +2019-01-04 08:03:05.297,Do Dash apps reload all data upon client log in?,"I'm wondering about how a dash app works in terms of loading data, parsing and doing initial calcs when serving to a client who logs onto the website. +For instance, my app initially loads a bunch of static local csv data, parses a bunch of dates and loads them into a few pandas data frames. This data is then displayed on a map for the client. +Does the app have to reload/parse all of this data every time a client logs onto the website? Or does the dash server load all the data only the first time it is instantiated and then just dish it out every time a client logs on? +If the data reloads every time, I would then use quick parsers like udatetime, but if not, id prefer to use a convenient parser like pendulum which isn't as efficient (but wouldn't matter if it only parses once). +I hope that question makes sense. Thanks in advance!","The only thing that is called on every page load is the function you can assign to app.layout. This is useful if you want to display dynamic content like the current date on your page. +Everything else is just executed once when the app is starting. +This means if you load your data outside the app.layout (which I assume is the case) everything is loaded just once.",1.2,True,1,5886 +2019-01-05 23:50:56.660,How do i implement Logic to Django?,"So I have an assignment to build a web interface for a smart sensor, +I've already written the python code to read the data from the sensor and write it into sqlite3, control the sensor etc. +I've built the HTML, CSS template and implemented it into Django. +My goal is to run the sensor reading script pararel to the Django interface on the same server, so the server will do all the communication with the sensor and the user will be able to read and configure the sensor from the web interface. (Same logic as modern routers - control and configure from a web interface) +Q: Where do I put my sensor_ctl.py script in my Django project and how I make it to run independent on the server. (To read sensor data 24/7) +Q: Where in my Django project I use my classes and method from sensor_ctl.py to write/read data to my djangos database instead of the local sqlite3 database (That I've used to test sensor_ctl.py)","Place your code in app/appname/management/commands folder. Use Official guide for management commands. Then you will be able to use your custom command like this: +./manage getsensorinfo +So when you will have this command registered, you can just put in in cron and it will be executed every minute. +Secondly you need to rewrite your code to use django ORM models like this: +Stat.objects.create(temp1=60,temp2=70) instead of INSERT into....",1.2,True,1,5887 +2019-01-06 02:49:09.817,How does selenium work with hosting services?,"I have a Flask app that uses selenium to get data from a website. 
I have spent 10+ hours trying to get heroku to work with it, but with no success. My main problem is selenium. With heroku, there is a ""buildpack"" that you use to get selenium working with it, but for all the other hosting services I have found no information. I just would like to know how to get selenium to work with any recommended service other than heroku. Thank you.","You need a hosting service that is able to install Chrome, chromedriver and other dependencies. Look for Virtual Private hosting (VPS), a Dedicated Server, or Cloud Hosting, but not Shared hosting.",0.0,False,1,5888 +2019-01-06 10:28:46.997,How do I root in python (other than square root)?,"I'm trying to make a calculator in python, so when you type x (root) y it will give you the xth root of y, e.g. 4 (root) 625 = 5. +I'm aware of how to do math.sqrt(), but is there a way to do other roots?","If you want 625^(1/4) {which is the same as the 4th root of 625}, +then you type 625**(1/4) +** is the operator for exponents in python. +print(625**(1/4)) +Output: +5.0 +To generalize: +if you want to find the xth root of y, you do: +y**(1/x)",0.6730655149877884,False,1,5889 +2019-01-08 17:44:43.800,TF-IDF + Multiple Regression Prediction Problem,"I have a dataset of ~10,000 rows of vehicles sold on a portal similar to Craigslist. The columns include price, mileage, no. of previous owners, how soon the car gets sold (in days), and most importantly a body of text that describes the vehicle (e.g. ""accident free, serviced regularly""). +I would like to find out which keywords, when included, will result in the car getting sold sooner. However, I understand how soon a car gets sold also depends on the other factors, especially price and mileage. +Running a TfidfVectorizer in scikit-learn resulted in very poor prediction accuracy. Not sure if I should try including price, mileage, etc. in the regression model as well, as it seems pretty complicated. Currently I am considering repeating the TF-IDF regression on a particular segment of the data that is sufficiently huge (perhaps Toyotas priced at $10k-$20k). +The last resort is to plot two histograms, one of vehicle listings containing a specific word/phrase and another for those that do not. The limitation here would be that the words that I choose to plot will be based on my subjective opinion. +Are there other ways to find out which keywords could potentially be important? Thanks in advance.","As you mentioned, you can only do so much with the body of text, which limits the amount of influence the text has on selling the cars. +Even though the model gives very poor prediction accuracy, you could go ahead and look at the feature importance, to understand what words drive the sales. +Include phrases in your tfidf vectorizer by setting the ngram_range parameter to (1,2) +This might give you a small indication of what phrases influence the sales of a car. +I would also suggest you set the norm parameter of tfidf to None, to check if it has influence. By default, it applies the l2 norm. +The difference would also come from the classification model which you are using. Try changing the model as a last option.",1.2,True,1,5890 +2019-01-09 15:12:08.163,"Linux Jupyter Notebook : ""The kernel appears to have died. It will restart automatically""","I am using the PYNQ Linux on Zedboard and when I tried to run a code on Jupyter Notebook to load a model.h5 I got an error message: +""The kernel appears to have died.
It will restart automatically"" +I tried to upgrade keras and Jupyter but I still have the same error +I don't know how to fix this problem?","The model is too large to be loaded into memory, so the kernel died.",0.0,False,1,5891 +2019-01-09 22:59:39.340,Difference between Python Interpreter and IDLE?,"For homework in my basic python class, we have to start the python interpreter in interactive mode and type a statement. Then, we have to open IDLE and type a statement. I understand how to write statements in both, but can't quite tell them apart? I see that there are two different desktop apps for python, one being the python 3.7 (32-bit), and the other being IDLE. Which one is the interpreter, and how do I get it in interactive mode? Also, when I do open IDLE do I put my statement directly in IDLE or, do I open a 'new file' and do it like that? I'm just a bit confused about the differences between them all. But I do really want to learn this language! Please help!","Python, unlike some languages, can be written one line at a time, with you getting feedback after every line. This is called interactive mode. You will know you are in interactive mode if you see "">>>"" on the far left side of the window. This mode is really only useful for doing small tasks you don't think will come up again. +Most developers write a whole program at once, then save it with a name that ends in "".py"" and run it in an interpreter to get the results.",1.2,True,1,5892 +2019-01-10 15:30:10.413,How to handle SQL dump with Python,"I received a data dump of the SQL database. +The data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python. +Can anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. +TLDR; Received an .sql file and no clue how to process/analyze the data that's in the file in Python. Need help in necessary steps to make the .sql usable in Python.","Eventually I had to install MAMP to create a local mysql server. I imported the SQL dump with a program like SQLyog that lets you edit SQL databases. +This made it possible to import the SQL database in Python using SQLAlchemy, MySQLconnector and Pandas.",0.3869120172231254,False,2,5893 +2019-01-10 15:30:10.413,How to handle SQL dump with Python,"I received a data dump of the SQL database. +The data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python. +Can anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. +TLDR; Received an .sql file and no clue how to process/analyze the data that's in the file in Python. Need help in necessary steps to make the .sql usable in Python.","It would be an extraordinarily difficult process to try to construct any sort of Python program that would be capable of parsing the SQL syntax of any such dump-file and to try to do anything whatsoever useful with it. +""No. Absolutely not. Absolute nonsense."" (And I have over 30 years of experience, including senior management.) You need to go back to your team, and/or to your manager, and look for a credible way to achieve your business objective ... because, ""this isn't it."" +The only credible thing that you can do with this file is to load it into another MySQL database ... 
and, well, ""couldn't you have just accessed the database from which this dump came?"" Maybe so, maybe not, but ""one wonders."" +Anyhow – your team and its management need to ""circle the wagons"" and talk about your credible options. Because, the task that you've been given, in my professional opinion, ""isn't one."" Don't waste time – yours, or theirs.",0.2012947653214861,False,2,5893 +2019-01-10 18:42:54.360,Interfacing a QR code recognition to a django database,"I'm coming to you with the following issue: +I have a bunch of physical boxes onto which I still stick QR codes generated using a python module named qrcode. In a nutshell, what I would like to do is everytime someone wants to take the object contained in a box, he scans the qr code with his phone, then takes it and put it back when he is done, not forgetting to scan the QR code again. +Pretty simple, isn't it? +I already have a django table containing all my objects. +Now my question is related to the design. I suspect the easiest way to achieve that is to have a POST request link in the QR code which will create a new entry in a table with the name of the object that has been picked or put back, the time (I would like to store this information). +If that's the correct way to do, how would you approach it? I'm not too sure I see how to make a POST request with a QR code. Would you have any idea? +Thanks. +PS: Another alternative I can think of would be to a link in the QR code to a form with a dummy button the user would click on. Once clicked the button would update the database. But I would fine a solution without any button more convenient...","The question boils down to a few choices: (a) what data do you want to encode into the QR code; (b) what app will you use to scan the QR code; and (c) how do you want the app to use / respond to the encoded data. +If you want your users to use off-the-shelf QR code readers (like free smartphone apps), then encoding a full URL to the appropriate API on your backend makes sense. Whether this should be a GET or POST depends on the QR code reader. I'd expect most to use GET, but you should verify that for your choice of app. That should be functionally fine, if you don't have any concerns about who should be able to scan the code. +If you want more control, e.g. you'd like to keep track of who scanned the code or other info not available to the server side just from a static URL request, you need a different approach. Something like, store the item ID (not URL) in the QR code; create your own simple QR code scanner app (many good examples exist) and add a little extra logic to that client, like requiring the user to log in with an ID + password, and build the URL dynamically from the item ID and the user ID. Many security variations possible (like JWT token) -- how you do that won't be dictated by the contents of the QR code. You could do a lot of other things in that QR code scanner / client, like add GPS location, ask the user to indicate why or where they're taking the item, etc. +So you can choose between a simple way with no controls, and a more complex way that would allow you to layer in whatever other controls and extra data you need.",1.2,True,1,5894 +2019-01-11 08:09:37.980,How can I read a file having different column for each rows?,"my data looks like this. 
+0 199 1028 251 1449 847 1483 1314 23 1066 604 398 225 552 1512 1598 +1 1214 910 631 422 503 183 887 342 794 590 392 874 1223 314 276 1411 +2 1199 700 1717 450 1043 540 552 101 359 219 64 781 953 +10 1707 1019 463 827 675 874 470 943 667 237 1440 892 677 631 425 +How can I read this file structure in python? I want to extract a specific column from rows. For example, if I want to extract the value in the second row, second column, how can I do that? I've tried 'loadtxt' using data type string. But it requires string index slicing, so I could not proceed because each column has a different number of digits. Moreover, each row has a different number of columns. Can you guys help me? +Thanks in advance.","Use something like this to split it: +split2=[] +split1=txt.split(""\n"") +for item in split1: +    split2.append(item.split("" "")) +Then split2[1][1] gives you the value in the second row, second column.",0.0,False,1,5895 +2019-01-11 11:02:30.650,How to align training and test set when using pandas `get_dummies` with `drop_first=True`?,"I have a data set from a telecom company with lots of categorical features. I used the pandas.get_dummies method to convert them into one hot encoded format with the drop_first=True option. Now how can I use the predict function? The test input data needs to be encoded in the same way, and as the drop_first=True option also dropped some columns, how can I ensure that the encoding takes place in a similar fashion? +Data set shape before encoding : (7043, 21) +Data set shape after encoding : (7043, 31)","When not using drop_first=True you have two options: + +Perform the one-hot encoding before splitting the data in training and test set. (Or combine the data sets, perform the one-hot encoding, and split the data sets again). +Align the data sets after one-hot encoding: an inner join removes the features that are not present in one of the sets (they would be useless anyway). train, test = train.align(test, join='inner', axis=1) + +You noted (correctly) that method 2 may not do what you expect because you are using drop_first=True. So you are left with method 1.",0.3869120172231254,False,1,5896 +2019-01-11 19:30:04.483,Python anytree application challenges with my jupyter notebook,"I am working in python 3.7.0 through a 5.6.0 jupyter notebook inside Anaconda Navigator 1.9.2 running in a windows 7 environment. It seems like I am assuming a lot of overhead, and from the jupyter notebook, python doesn’t see the anytree application module that I’ve installed. (Anytree is working fine with python from my command prompt.) +I would appreciate either 1) IDE recommendations or 2) advice as to how to make my Anaconda installation better integrated.","The core problem with my python IDE environment was that I could not utilize the functions in the anytree module. The anytree functions worked fine from the command prompt python, but I only saw error messages from any of the Anaconda IDE portals. +Solution: +1) From the windows start menu, I opened Anaconda Navigator, ""run as administrator."" +2) Select Environments. My application only has the single environment, “base”, +3.) Open selection “terminal”, and you then have a command terminal window in that environment. +4.) Execute [ conda install -c techtron anytree ] and the anytree module functions are now available. +5.) Execute [ conda update -n base --all ] and all the modules are updated to be current.",1.2,True,1,5897 +2019-01-12 03:01:39.153,How do I get VS Code to recognize modules in virtual environment?,"I set up a virtual environment in python 3.7.2 using ""python -m venv foldername"". 
I installed PIL in that folder. Importing PIL works from the terminal, but when I try to import it in VS code, I get an ImportError. Does anyone know how to get VS code to recognize that module? +I've tried switching interpreters, but the problem persists.","I ended up changing the python.venvpath setting to a different folder, and then moving the virtual env folder (the one with my project in it) to that folder. After restarting VS code, it worked.",0.0,False,1,5898 +2019-01-15 06:52:45.623,Good resources for video processing in Python?,"I am using the yolov3 model running on several surveillance cameras. Besides this I also run tensorflow models on these surveillance streams. I feel a little lost when it comes to using anything but opencv for rtsp streaming. +So far I haven't seen people use anything but opencv in python. Are there any places I should be looking into? Please feel free to chime in. +Sorry if the question is a bit vague, but I really don't know how to put this better. Feel free to edit, mods.","Of course there are alternatives to OpenCV in Python when it comes to video capture, but in my experience none of them performed better",1.2,True,1,5899 +2019-01-15 06:54:00.607,Automate File loading from s3 to snowflake,"New JSON files are dumped into an s3 bucket daily; I have to create a solution which picks the latest file when it arrives, parses the JSON and loads it to the Snowflake Datawarehouse. May someone please share your thoughts on how we can achieve this?","There are some aspects to be considered, such as whether it is batch or streaming data, whether you want to retry loading the file in case of wrong data or format, and whether you want to make it a generic process able to handle different file formats/file types (csv/json) and stages. +In our case we have built a generic s3 to Snowflake load using Python and Luigi, and also implemented the same using SSIS, but for csv/txt files only.",0.0,False,1,5900 +2019-01-15 20:16:34.613,pythonnet clr is not recognized in jupyter notebook,"I have installed pythonnet to use the clr package for a specific API, which only works with clr in python. Although in my python script (using command or regular .py files) it works without any issues, in jupyter notebook, import clr gives this error, ModuleNotFoundError: No module named 'clr'. Any idea how to address this issue?",Here is a simple suggestion: compare sys.path in both cases and see the differences. Your ipython kernel in jupyter is probably searching in different directories than the normal python process.,1.2,True,2,5901 +2019-01-15 20:16:34.613,pythonnet clr is not recognized in jupyter notebook,"I have installed pythonnet to use the clr package for a specific API, which only works with clr in python. Although in my python script (using command or regular .py files) it works without any issues, in jupyter notebook, import clr gives this error, ModuleNotFoundError: No module named 'clr'. Any idea how to address this issue?","since you intend to use clr in jupyter, in a jupyter cell, you could also +!pip install pythonnet for the first time and every later time if the vm is frequently nuked",0.0,False,2,5901 +2019-01-15 20:47:18.657,"Tried importing Java 8 JDK for PySpark, but PySpark still won't let me start a session","Ok here's my basic information before I go on: +MacBook Pro: OS X 10.14.2 +Python Version: 3.6.7 +Java JDK: V8.u201 +I'm trying to install the Apache Spark Python API (PySpark) on my computer. 
I did a conda installation: conda install -c conda-forge pyspark +It appeared that the module itself was properly downloaded because I can import it and call methods from it. However, opening the interactive shell with myuser$ pyspark gives the error: +No Java runtime present, requesting install. +Ok that's fine. I went to Java's download page to get the current JDK, in order to have it run, and downloaded it on Safari. Chrome apparently doesn't support certain plugins for it to work (although initially I did try to install it with Chrome). Still didn't work. +Ok, I just decided to start trying to use it. +from pyspark.sql import SparkSession It seemed to import the module correctly because it was auto recognizing SparkSession's methods. However, +spark = SparkSession.builder.getOrCreate() gave the error: +Exception: Java gateway process exited before sending its port number +Reinstalling the JDK doesn't seem to fix the issue, and now I'm stuck with a module that doesn't seem to work because of an issue with Java that I'm not seeing. Any ideas of how to fix this problem? Any and all help is appreciated.",This problem comes with Spark 2.4; please try Spark 2.3.,0.0,False,1,5902 +2019-01-16 08:53:00.437,Install python packages offline on server,I want to install some packages on a server which does not have access to the internet. So I have to take the packages and send them to the server. But I do not know how I can install them.,"Download the package from the website and extract the tarball, then run python setup.py install",-0.2012947653214861,False,1,5903 +2019-01-17 08:51:46.440,Dask: delayed vs futures and task graph generation,"I have a few basic questions on Dask: + +Is it correct that I have to use Futures when I want to use dask for distributed computations (i.e. on a cluster)? +In that case, i.e. when working with futures, are task graphs still the way to reason about computations? If yes, how do I create them? +How can I generally, i.e. no matter if working with a future or with a delayed, get the dictionary associated with a task graph? + +As an edit: +My application is that I want to parallelize a for loop either on my local machine or on a cluster (i.e. it should work on a cluster). +As a second edit: +I think I am also somewhat unclear regarding the relation between Futures and delayed computations. +Thx","1) Yup. If you're sending the data through a network, you have to have some way of asking the computer doing the computing for you how's that number-crunching coming along, and Futures represent more or less exactly that. +2) No. With Futures, you're executing the functions eagerly - spinning up the computations as soon as you can, then waiting for the results to come back (from another thread/process locally, or from some remote you've offloaded the job onto). The relevant abstraction here would be a Queue (Priority Queue, specifically). +3) For a Delayed instance, for instance, you could do some_delayed.dask, or for an Array, Array.dask; optionally wrap the whole thing in either dict() or vars(). I don't know for sure if it's reliably set up this way for every single API, though (I would assume so, but you know what they say about what assuming makes of the two of us...). 
+4) The simplest analogy would probably be: Delayed is essentially a fancy Python yield wrapper over a function; Future is essentially a fancy async/await wrapper over a function.",1.2,True,1,5904 +2019-01-19 00:00:55.483,Python how to get labels of a generated adjacency matrix from networkx graph?,"I have a networkx graph built from a python dataframe and I've generated the adjacency matrix from it. +So basically, how do I get the labels of that adjacency matrix?","Assuming you refer to nodes' labels, networkx only keeps the indices when extracting a graph's adjacency matrix. Networkx represents each node as an index, and you can add more attributes if you wish. All of a node's attributes except for the index are kept in a dictionary. When generating the graph's adjacency matrix only the indices are kept, so if you only wish to keep a single string per node, consider indexing nodes by that string when generating your graph.",1.2,True,2,5905 +2019-01-19 00:00:55.483,Python how to get labels of a generated adjacency matrix from networkx graph?,"I have a networkx graph built from a python dataframe and I've generated the adjacency matrix from it. +So basically, how do I get the labels of that adjacency matrix?","If the adjacency matrix is generated without passing the nodeList, then you can call G.nodes to obtain the default NodeList, which should correspond to the rows of the adjacency matrix.",-0.2012947653214861,False,2,5905 +2019-01-20 12:48:34.697,How to wait for some time between user inputs in tkinter?,"I am making a GUI program where the user can draw on a canvas in Tkinter. What I want to do is that I want the user to be able to draw on the canvas and when the user releases the Mouse-1, the program should wait for 1 second and clear the canvas. If the user starts drawing within that 1 second, the canvas should stay as it is. +I am able to get the user input fine. The draw function in my program is bound to B1-Motion. +I have tried things like inducing a time delay but I don't know how to check whether the user has started to draw again. +How do I check whether the user has started to draw again?","You can bind the mouse click event to a function that sets a bool to True or False, then use after to call a function after 1 second which, depending on that bool, clears the screen.",1.2,True,1,5906 +2019-01-21 21:13:07.617,Persistent Machine Learning,"I have a super basic machine learning question. I've been working through various tutorials and online classes on machine learning and the various techniques for learning how to use it, but what I'm not seeing is the persistent application piece. +So, for example, I train a network to recognize what a garden gnome looks like, but, after I run the training set and validate with test data, how do I persist the network so that I can feed it an individual picture and have it tell me whether the picture is of a garden gnome or not? Every tutorial seems to have you run through the training/validation sets without any notion as to how to host the network in a meaningful way for future use. +Thanks!",Use Python's pickle library to dump your trained model to your hard drive; then load the model and test it for persistent results.,0.0,False,1,5907 +2019-01-21 23:31:10.607,Is it possible to extract an SSRS report embedded in the body of an email and export to csv?,"We currently are receiving reports via email (I believe they are SSRS reports) which are embedded in the email body rather than attached. 
The reports look like images or snapshots; however, when I copy and paste the ""image"" of a report into Excel, the column/row format is retained and it pastes into Excel perfectly, with the columns and rows getting pasted into distinct columns and rows accordingly. So it isn't truly an image, as there is a structure to the embedded report. +Right now, someone has to manually copy and paste each report into excel (step 1), then import the report into a table in SQL Server (step 2). There are 8 such reports every day, so the manual copy/pasting from the email into excel is very time consuming. +The question is: is there a way - any way - to automate step 1 so that we don't have to manually copy and paste each report into excel? Is there some way to use python or some other language to detect the format of the reports in the emails, and extract them into .csv or excel files? +I have no code to show as this is more of a question of - is this even possible? And if so, any hints as to how to accomplish it would be greatly appreciated.","The most efficient solution is to have the SSRS administrator (or you, if you have permissions) set the subscription to send as CSV. To change this in SSRS right click the report and then click manage. Select ""Subscriptions"" on the left and then click edit next to the subscription you want to change. Scroll down to Delivery Options and select CSV in the Render Format dropdown. Voila, you receive your report in the correct format and don't have to do any weird extraction.",0.0,False,1,5908 +2019-01-22 05:44:57.673,How to install sympy package in python,"I am a beginner to python, and I wanted to do symbolic computations. I came to know that with a sympy installation on our pc we can do symbolic computation. I have installed python 3.6 and I am using anaconda navigator, through which I am using spyder as an editor. Now I want to install the symbolic package sympy; how do I do that? +I checked some post which says use 'conda install sympy'. But where do I type this? I typed this in the spyder editor and I am getting a syntax error. Thank you","In anaconda navigator: + +Click Environments (on the left) +Choose your environment (if you have more than one) +In the middle pick ""All"" from the dropdown (""installed"" by default) +Write sympy in the search-box on the right +Check the package that shows up +Click apply",0.1352210990936997,False,2,5909 +2019-01-22 05:44:57.673,How to install sympy package in python,"I am a beginner to python, and I wanted to do symbolic computations. I came to know that with a sympy installation on our pc we can do symbolic computation. I have installed python 3.6 and I am using anaconda navigator, through which I am using spyder as an editor. Now I want to install the symbolic package sympy; how do I do that? +I checked some post which says use 'conda install sympy'. But where do I type this? I typed this in the spyder editor and I am getting a syntax error. Thank you","To use conda install, open the Anaconda Prompt and enter the conda install sympy command. +Alternatively, navigate to the scripts sub-directory in the Anaconda directory, and run pip install sympy.",0.0,False,2,5909 +2019-01-22 18:26:43.977,tkinter.root.destroy and cv2.imshow - X Windows system error,"I found this rather annoying bug and I couldn’t find anything other than an unanswered question on the opencv website, hopefully someone with more knowledge about the two libraries will be able to point me in the right direction. +I won’t provide code because that would be beside the point of learning what causes the crash. 
+If I draw a tkinter window and then root.destroy() it, trying to draw a cv2.imshow window will result in an X Window System error as soon as the cv2.waitKey delay is over. I’ve tried to replicate in different ways and it always gets to the error (error_code 3 request_code 15 minor_code 0). +It is worth noting that a root.quit() command won’t cause the same issue (as it is my understanding this method will simply exit the main loop rather than destroying the widgets). Also, while any cv2.imshow call will fail, trying to draw a new tkinter window will work just fine. +What resources are being shared among the two libraries? What does root.destroy() cause in the X environment to prevent any cv2 window from being drawn? +Debian Jessie - Python 3.4 - OpenCV 3.2.0","When you destroy the root window, it destroys all child windows as well. If cv2 uses a tkinter window or a child window of the root window, it will fail if you destroy the root window.",0.0,False,1,5910 +2019-01-22 23:09:52.430,How do I use Pyinstaller to make a Mac file on Windows?,"I am on Windows and I am trying to figure out how to use Pyinstaller to make a file (on Windows) for a Mac. +I have no trouble with Windows, I am just not sure how I would make a file for another OS on it. +What I tried in cmd was: pyinstaller -F myfile.py and I am not sure what to change to make a Mac compatible file.",Not possible without using a Virtual Machine,0.0,False,1,5911 +2019-01-23 03:02:55.387,Parsing list of URLs with regex patterns,"I have a large text file of URLs (>1 million URLs). The URLs represent product pages across several different domains. +I'm trying to parse out the SKU and product name from each URL, such as: + +www.amazon.com/totes-Mens-Mike-Duck-Boot/dp/B01HQR3ODE/ + + +totes-Mens-Mike-Duck-Boot +B01HQR3ODE + +www.bestbuy.com/site/apple-airpods-white/5577872.p?skuId=5577872 + + +apple-airpods-white +5577872 + + +I already have the individual regex patterns figured out for parsing out the two components of the URL (product name and SKU) for all of the domains in my list. This is nearly 100 different patterns. +While I've figured out how to test this one URL/pattern at a time, I'm having trouble figuring out how to architect a script which will read in my entire list, then go through and parse each line based on the relevant regex pattern. Any suggestions how to best tackle this? +If my input is one column (URL), my desired output is 4 columns (URL, domain, product_name, SKU).","While it is possible to roll this all into one massive regex, that might not be the easiest approach. Instead, I would use a two-pass strategy. Make a dict of domain names to the regex pattern that works for that domain. In the first pass, detect the domain for the line using a single regex that works for all URLs. Then use the discovered domain to look up the appropriate regex in your dict to extract the fields for that domain.",0.2012947653214861,False,1,5912 +2019-01-24 09:30:09.097,Python Azure function processing blob storage,"I am trying to make a pipeline using Data Factory in MS Azure of processing data in blob storage and then running a python processing code/algorithm on the data and then sending it to another source. +My question here is, how can I do the same in Azure function apps? Or is there a better way to do it? +Thanks in advance. +Shyam",I created a Flask API and called my python code through that. 
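+For instance, a minimal sketch of that idea (the /process route and the trivial process_data stub are hypothetical names, not the actual project's code):
+from flask import Flask, request, jsonify
+app = Flask(__name__)
+def process_data(payload):
+    # stand-in for the real processing algorithm
+    return {'rows': len(payload)}
+@app.route('/process', methods=['POST'])
+def process():
+    payload = request.get_json()  # data sent by the caller, e.g. read from blob storage
+    return jsonify(process_data(payload))
+if __name__ == '__main__':
+    app.run()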
And then put it in Azure as a web app and called the blob.,0.0,False,1,5913 +2019-01-24 11:46:02.647,Django Admin Interface - Privileges On Development Server,"I have an old project (Django 1.6.5, Python 2.7) that has been running live for several years. I have to make some changes and have set up a working development environment with all the right django and python requirements (packages, versions, etc.) +Everything is running fine, except when I am trying to make changes inside the admin panel. I can log on fine and looking at the database (sqlite3) I see my user has superuser privileges. However django says ""You have no permissions to change anything"" and thus doesn't even display any of the models registered for the admin interface. +I am using the same database that is running on the live server. There I have no issues at all (Live server also running in development mode with DEBUG=True has no issues) -> I can only see the history (My Change Log) - Nothing else +I have also created a new superuser - but same problem here. +I'd appreciate any pointers (Maybe how to debug this?)","Finally, I found the issue: +admin.autodiscover() +was commented out in the project's urls.py for some reason. (I may have done that trying to get the project to work in a more recent version of django) - So admin.site.register was never called and the app_dict was never filled. The index.html template of django.contrib.admin then returns + +You don't have permission to edit anything. + +or its equivalent translation (which I find confusing, given that the permissions are correct; only no models were added to the admin dictionary). +I hope this may help anyone running into a similar problem",0.0,False,1,5914 +2019-01-24 19:31:18.407,How to handle EULA pop-up window that appears only on first login?,"I am new to Selenium. The web interface of our product pops up a EULA agreement which the user has to scroll down and accept before proceeding. This happens ONLY on initial login using that browser for that user. +I looked at the Selenium API but I am unable to figure out which one to use and how to use it. +Would much appreciate any suggestions in this regard. +I have played around with the IDE for Chrome but even over there I don't see anything that I can use for this. I am aware there is an 'if' command but I don't know how to use it to do something like: +if EULA-pops-up: + Scroll down and click 'accept' +proceed with rest of test.","You may disable the EULA if that is an option for you; I am sure there is a way to do it in the registry as well: +In C:\Program Files (x86)\Google\Chrome\Application there should be a file called master_preferences. +Open the file and set: +require_eula to false",0.0,False,1,5915 +2019-01-25 09:21:41.077,Predicting values using trained MNB Classifier,"I am trying to train a model for sentiment analysis and below is my trained Multinomial Naive Bayes Classifier returning an accuracy of 84%. +I have been unable to figure out how to use the trained model to predict the sentiment of a sentence. For example, I now want to use the trained model to predict the sentiment of the phrase ""I hate you"". +I am new to this area and any help is highly appreciated.","I don't know the dataset or what the semantics of the individual dictionaries are, but you are training your model on a dataset which has the following form: +[[{""word"":True, ""word2"": False}, 'neg'], [{""word"":True, ""word2"": False}, 'pos']] + +That means your input is in the form of a dictionary, and your output in the form of a 'neg' label. 
If you want to predict you need to input a dictionary in the form: + +{""I"": True, ""Hate"": False, ""you"": True}. + +Then: + +MNB_classifier.classify({""love"": True}) +>> 'neg' +or +MNB_classifier.classify_many([{""love"": True}]) +>> ['neg']",1.2,True,1,5916 +2019-01-25 11:29:23.027,Deliver python external libraries with script,"I want to use my script that uses the pandas library on another linux machine where there is no internet access or pip installed. +Is there a way to deliver the script with all dependencies? +Thanks",Or set the needed dependencies in the script manually by appending sys.modules and pack all the needed files together.,0.0,False,1,5917 +2019-01-26 14:14:21.693,importing an entire folder of .py files into google colab,"I have a folder of .py files (a package made by me) which I have uploaded into my google drive. +I have mounted my google drive in colab but I still can not import the folder in my notebook as I do on my pc. +I know how to upload a single .py file into google colab and import it into my code, but I have no idea how to upload a folder of .py files and import it in a notebook, and this is what I need to do. +This is the code I used to mount drive: + +from google.colab import drive +drive.mount('/content/drive') +!ls 'drive/My Drive'","I found how to do it. +After uploading all modules and packages into the directory which my notebook file is in, I changed colab's directory from ""/content"" to this directory and then I simply imported the modules and packages (folder of .py files) into my code",1.2,True,1,5918 +2019-01-27 06:38:41.497,How to redirect -progress option output of ffmpeg to stderr?,"I'm writing my own wrapper for ffmpeg on Python 3.7.2 now and want to use its ""-progress"" option to read current progress since it's highly machine-readable. The problem is the ""-progress"" option of ffmpeg accepts as its parameter file names and urls only. But I don't want to create additional files, nor to set up a whole web-server for this purpose. +I've googled a lot about it, but all the ""progress bars for ffmpeg"" projects rely on the generic stderr output of ffmpeg only. Other answers here on Stackoverflow and on Superuser are satisfied with just ""-v quiet -stats"", since ""progress"" is not a very convenient parameter name to google for exactly these cases. +The best solution would be to force ffmpeg to write its ""-progress"" output to a separate pipe, since there is some useful data in stderr as well regarding the file being encoded and I don't want to throw it away with ""-v quiet"". Though if there is a way to redirect ""-progress"" output to stderr, it would be cool as well! Any pipe would be ok actually, I just can't figure out how to make ffmpeg write its ""-progress"" not to a file in Windows. I tried ""ffmpeg -progress stderr ..."", but it just creates a file with this name.","-progress pipe:1 will write out to stdout, pipe:2 to stderr. If you aren't streaming from ffmpeg, use stdout.",1.2,True,1,5919 +2019-01-28 14:38:40.990,How can I check how often all list elements from a list B occur in a list A?,"I have a python list A and a python list B with words as list elements. I need to check how often the list elements from list B are contained in list A. Is there a python method or how can I implement this efficiently? +The python intersection method only tells me that a list element from list B occurs in list A, but not how often.","You could convert list B to a set, so that checking if the element is in B is faster. 
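+A minimal sketch of the idea, assuming A and B are plain lists of words (the two steps are described in prose just below):
+set_b = set(B)
+counts = {}
+for word in A:
+    if word in set_b:
+        counts[word] = counts.get(word, 0) + 1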
+That is, create a dictionary to count the number of times an element occurs in A if the element is also in the set of B. +As mentioned in the comments, collections.Counter does the ""heavy lifting"" for you",0.0,False,1,5920 +2019-01-29 07:42:00.640,Can't install packages via pip or npm,"I'm trying to install some packages globally on my Mac. But I'm not able to install them via npm or pip, because I'll always get the message that the package does not exist. For Python, I solved this by always using a virtualenv. But now I'm trying to install the @vue/cli via npm, but I'm not able to access it. The commands are working fine, but I'm just not able to access it. I think it has something to do with my $PATH, but I don't know how to fix that. +If I look in my Finder, I can find the @vue folder in /users/.../node_modules/. Does someone know how I can access this folder with the vue command in Terminal?","If it's a PATH problem: +1) Open up Terminal. +2) Run the following command: +sudo nano /etc/paths +3) Enter your password, when prompted. +4) Check if the correct paths exist in the file or not. +5) Fix, if needed. +6) Hit Control-X to quit. +7) Enter “Y” to save the modified buffer. +Everything should work fine now. If it doesn't, try re-installing NPM/PIP.",1.2,True,1,5921 +2019-01-31 10:19:40.180,"How to get disk space total, used and free using Python 2.7 without PSUtil","Is there a way I can get the following disk statistics in Python without using PSUtil? + +Total disk space +Used disk space +Free disk space + +All the examples I have found seem to use PSUtil which I am unable to use for this application. +My device is a Raspberry PI with a single SD card. I would like to get the total size of the storage, how much has been used and how much is remaining. +Please note I am using Python 2.7.",You can do this with the os.statvfs function.,0.2012947653214861,False,1,5922 +2019-02-01 14:09:13.800,How can a same entity function as a parameter as well as an object?,"In the below operation, we are using a as an object as well as an argument. +a = ""Hello, World!"" + +print(a.lower()) -> a as an object +print(len(a)) -> a as a parameter + +May I know how exactly each operation differs in the way it is accessing a?","Everything in python (everything that can go on the rhs of an assignment) is an object, so what you can pass as an argument to a function IS an object, always. Actually, those are totally orthogonal concepts: you don't ""use"" something ""as an object"" - it IS an object - but you can indeed ""use it"" (pass it) as an argument to a function / method / whatever callable. + +May I know how exactly each operation differs in the way it is accessing a? + +Not by much actually (except for the fact they do different things with a)... +a.lower() is only syntactic sugar for str.lower(a) (obj.method() is syntactic sugar for type(obj).method(obj)), so in both cases you are ""using a as an argument"".",0.3869120172231254,False,1,5923 +2019-02-02 02:41:43.413,Loading and using a trained TensorFlow model in Python,"I trained a model in TensorFlow using the tf.estimator API, more specifically using tf.estimator.train_and_evaluate. I have the output directory of the training. How do I load my model from this and then use it? +I have tried using the tf.train.Saver class by loading the most recent ckpt file and restoring the session. However, then to call sess.run() I need to know what the name of the output node of the graph is so I can pass this to the fetches argument. 
What is the name/how can I access this output node? Is there a better way to load and use the trained model? +Note that I have already trained and saved the model in a ckpt file, so please do not suggest that I use the simple_save function.","(Answering my own question) I realized that the easiest way to do this was to use the tf.estimator API. By initializing an estimator that warm starts from the model directory, it's possible to just call estimator.predict and pass the correct args (predict_fn) and get the predictions immediately. It's not required to deal with the graph variables in any way.",0.0,False,1,5924 +2019-02-02 08:14:24.520,Best way to map words with multiple spellings to a list of key words?,"I have a pile of ngrams of variable spelling, and I want to map each ngram to its best match word out of a list of known desired outputs. +For example, ['mob', 'MOB', 'mobi', 'MOBIL', 'Mobile'] maps to a desired output of 'mobile'. +Each input from ['desk', 'Desk+Tab', 'Tab+Desk', 'Desktop', 'dsk'] maps to a desired output of 'desktop' +I have about 30 of these 'output' words, and a pile of about a few million ngrams (much fewer unique). +My current best idea was to get all unique ngrams, copy and paste that into Excel and manually build a mapping table, but that took too long and isn't extensible. +The second idea was something with fuzzy (fuzzy-wuzzy) matching but it didn't match well. +I'm not experienced in Natural Language terminology or libraries at all so I can't find an answer to how this might be done better, faster and more extensibly when the number of unique ngrams increases or 'output' words change. +Any advice?","The classical approach would be to build a ""Feature Matrix"" for each ngram. Each word maps to an Output which is a categorical value between 0 and 29 (one for each class). +Features can for example be the cosine similarity given by fuzzy wuzzy but typically you need many more. Then you train a classification model based on the created features. This model can typically be anything, a neural network, a boosted tree, etc.",0.1352210990936997,False,1,5925 +2019-02-04 21:09:00.383,Use VRAM (graphics card memory) in pygame for images,"I'm programming a 2D game with Python and Pygame and now I want to use my internal graphics memory to load images to. +I have an Intel HD graphics card (2GB VRAM) and a Nvidia GeForce (4GB VRAM). +I want to use one of them to load images from the hard drive to it (to use the images from there). +I thought it might be a good idea as I don't (almost) need the VRAM otherwise. +Can you tell me if and how it is possible? I do not need GPU-Acceleration.","You have to create your window with the FULLSCREEN, DOUBLEBUF and HWSURFACE flags. +Then you can create and use a hardware surface by creating it with the HWSURFACE flag. +You'll also have to use pygame.display.flip() instead of pygame.display.update(). +But even pygame itself discourages using hardware surfaces, since they have a bunch of disadvantages, like +- no mouse cursor +- only working in fullscreen (at least that's what pygame's documentation says) +- you can't easily manipulate the surfaces +- they may not work on all platforms +(and I never got transparency to work with them). +And it's not even clear if you really get a notable performance boost. +Maybe they'll work better in a future pygame release when pygame switches to SDL 2 and uses SDL_TEXTURE instead of SDL_HWSURFACE, who knows....",1.2,True,1,5926 +2019-02-05 02:42:03.343,Installed Anaconda to macOS that has Python2.7 and 3.7. 
Pandas only importing to 2.7; how can I import to 3.7?,"New to coding; I just downloaded the full Anaconda package for Python 3.7 onto my Mac. However, I can't successfully import Pandas into my program on SublimeText when running my Python3.7 build. It DOES work though, when I change the build to Python 2.7. Any idea how I can get it to properly import when running 3.7 on SublimeText? I'd just like to be able to execute the code within Sublime. +Thanks!","Uninstall python 2.7. Unless you use it, it's better to uninstall it.",0.0,False,1,5927 +2019-02-05 12:40:24.703,How to check learning feasibility on a binary classification problem with Hoeffding's inequality/VC dimension with Python?,"I have a simple binary classification problem, and I want to assess the learning feasibility using Hoeffding's Inequality and also, if possible, the VC dimension. +I understand the theory but, I am still stuck on how to implement it in Python. +I understand that In-sample Error (Ein) is the training Error. Out of sample Error (Eout) is the error on the test subsample I guess. +But how do I plot the difference between these two errors with the Hoeffding bound?","Well here is how I handled it: I generate multiple train/test samples, run the algorithm on them, calculate Ein as the train set error and Eout estimated by the test set error, and calculate how many times their differences exceed the value of epsilon (for a range of epsilons). And then I plot the curve of these rates of exceeding epsilon and the curve of the right side of the Hoeffding/VC inequality, so I see if the differences curve is always under the Hoeffding/VC bound curve; this informs me about the learning feasibility.",1.2,True,1,5928 +2019-02-06 20:20:54.933,python keeps saying that 'imput' is undefined. how do I fix this?,"Please help me with this. I'd really appreciate it. I have tried a lot of things but nothing is working, Please suggest any ideas you have. +This is what it keeps saying: + name = imput('hello') +NameError: name 'imput' is not defined","You misspelled input as imput. imput() is not a function that python recognizes - thus, it assumes it's the name of some variable, searches for wherever that variable was declared, and finds nothing. So it says ""this is undefined"" and raises an error.",1.2,True,1,5929 +2019-02-07 02:36:18.047,Understanding each component of a web application architecture,"Here is a scenario for a system where I am trying to understand what is what: +I'm Joe, a novice programmer and I'm broke. I've got a Flask app and one physical machine. Since I'm broke, I cannot afford another machine for each piece of my system, thus the web server, application and database all live on my one machine. +I've never deployed an app before, but I know that a server can refer to a machine or software. From here on, let's call the physical machine the Rack. I've loaded an instance of MongoDB on my machine and I know that is the Database Server. In order to handle API requests, I need something on the rack that will handle HTTP/S requests, so I install and run an instance of NGINX on it and I know that this is the Web Server. However, my web server doesn't know how to run the app, so I do some research and learn about WSGI and come to find out I need another component. So I install and run an instance of Gunicorn and I know that this is the WSGI Server. 
+At this point I have a rack that is home to a web server to handle API calls (really just acts as a reverse proxy and pushes requests to the WSGI server), a WSGI server that serves up dynamic content from my app and a database server that stores client information used by the app. +I think I've got my head on straight, then my friend asks ""Where is your Application Server?"" +Is there an application server in this configuration? Do I need one?","Any basic server architecture has three layers. On one end is the web server, which fulfills requests from clients. The other end is the database server, where the data resides. +In between these two is the application server. It consists of the business logic required to interact with the web server to receive the request, and then with the database server to perform operations. +In your configuration, the WSGI server/Flask app is the application server. +Most application servers can double up as web servers.",0.0,False,1,5930 +2019-02-07 04:21:01.713,How keras model H5 works in theory,After training the trained model will be saved in H5 format. But I didn't know how that H5 file can be used as a classifier for classifying new data. How does an H5 model work in theory when classifying new data?,"When you save your model as an h5-file, you save the model structure, all its parameters and further information like the state of your optimizer and so on. It is just an efficient way to save huge amounts of information. You could use json or xml file formats to do this as well. +You can't classify anything using only this file (it is not executable). You have to rebuild the graph as a tensorflow graph from this file. To do so you simply use the load_model() function from keras, which returns a keras.models.Model object. Then you can use this object to classify new data, with the keras predict() function.",0.2012947653214861,False,1,5931 +2019-02-07 19:36:54.707,Using pyautogui with multiple monitors,"I'm trying to use the pyautogui module for python to automate mouse clicks and movements. However, it doesn't seem to be able to recognise any monitor other than my main one, which means I'm not able to input any actions on any of my other screens, and that is a huge problem for the project I am working on. +I've searched google for 2 hours but I can't find any straight answers on whether or not it's actually possible to work around. If anyone could either tell me that it is or isn't possible, tell me how to do it if it is, or suggest an equally effective alternative (for python) I would be extremely grateful.",Not sure if this is clear but I subtracted an extended monitor's horizontal resolution from 0 because my 2nd monitor is on the left of my primary display. That allowed me to avoid the out of bounds warning. My answer probably isn't the clearest but I figured I would chime in to let folks know it actually can work.,0.0,False,1,5932 +2019-02-07 21:14:35.190,How to encrypt(?) a document to prove it was made at a certain time?,"So, a bit of a strange question, but let's say that I have a document (jupyter notebook) and I want to be able to prove to someone that it was made before a certain date, or that it was created on a certain date - does anyone have any ideas as to how I'd achieve that? +It would need to be a solution that couldn't be technically re-engineered after the fact (faking the creation date). +Keen to hear your thoughts :) !","email it to yourself or a trusted party – dandavis yesterday Good solution. 
+Thanks!",0.0,False,1,5933 +2019-02-08 03:38:25.450,How to reset Colab after the following CUDA error 'Cuda assert fails: device-side assert triggered'?,"I'm running my Jupyter Notebook using Pytorch on Google Colab. After I received the 'Cuda assert fails: device-side assert triggered' I am unable to run any other code that uses my pytorch module. Does anyone know how to reset my code so that my Pytorch functions that were working before can still run? +I've already tried implementing CUDA_LAUNCH_BLOCKING=1but my code still doesn't work as the Assert is still triggered!","You need to reset the Colab notebook. To run existing Pytorch modules that used to work before, you have to do the following: + +Go to 'Runtime' in the tool bar +Click 'Restart and Run all' + +This will reset your CUDA assert and flush out the module so that you can have another shot at avoiding the error!",1.2,True,1,5934 +2019-02-08 07:38:41.967,How change hostpython for use python3 on MacOS for compile Python+Kivy project for Xcode,"I use toolchain from Kivy for compile Python + Kivy project on MacOS, but by default, toolchain use python2 recipes but I need change to python3. +I´m googling but I don't find how I can do this. +Any idea? +Thanks","your kivy installation is likely fine already. Your kivy-ios installation is not. Completely remove your kivy-ios folder on your computer, then do git clone git://github.com/kivy/kivy-ios to reinstall kivy-ios. Then try using toolchain.py to build python3 instead of python 2 +This solution work for me. Thanks very much Erik.",1.2,True,2,5935 +2019-02-08 07:38:41.967,How change hostpython for use python3 on MacOS for compile Python+Kivy project for Xcode,"I use toolchain from Kivy for compile Python + Kivy project on MacOS, but by default, toolchain use python2 recipes but I need change to python3. +I´m googling but I don't find how I can do this. +Any idea? +Thanks","For example, recipe ""ios"" and ""pyobjc"" dependency is changed from depends = [""python""] to depends = [""python3""]. (__init__.py in each packages in receipe folder in kivy-ios package) +These recipes are loaded from your request implicitly or explicitly +This description of the problem recipes is equal to require hostpython2/python2. then conflict with python3. +The dependency of each recipe can be traced from output of kivy-ios. ""hostpython"" or ""python"" in output(console) were equaled to hostpython2 or python2.(now ver.)",0.0,False,2,5935 +2019-02-09 15:50:20.647,How to reach streaming learning in Neural network?,"As title, I know there're some model supporting streaming learning like classification model. And the model has function partial_fit() +Now I'm studying regression model like SVR and RF regressor...etc in scikit. +But most of regression models doesn't support partial_fit . +So I want to reach the same effect in neural network. If in tensorflow, how to do like that? Is there any keyword?","There is no some special function for it in TensorFlow. You make a single training pass over a new chunk of data. And then another training pass over another new chunk of data, etc till you reach the end of the data stream (which, hopefully, will never happen).",0.0,False,1,5936 +2019-02-10 09:38:54.947,How to pickle or save a WxPython FontData Object,"I've been coding a text editor, and it has the function to change the default font displayed in the wx.stc.SyledTextCtrl. +I would like to be able to save the font as a user preference, and I have so far been unable to save it. +The exact object type is . 
+Would anyone know how to pickle/save this?","Probably due to its nature, you cannot pickle a wx.Font. +Your remaining option is to store its constituent parts. +Personally, I store facename, point size, weight, slant, underline, text colour and background colour. +How you store them is your own decision. +I use 2 different options depending on the code. + +Store the entries in an sqlite3 database, which allows for multiple +indexed entries. +Store the entries in an .ini file using +configobj + +sqlite3 is available in the standard python library; configobj is available via pip.",1.2,True,1,5937 +2019-02-10 09:51:41.193,how to decode gzip string in JS,"I have one Django app and in the view of that I am using the gzip_str(str) method to compress data and send it back to the browser. Now I want to get the original string back in the browser. How can I decode the string in JS? +P.S. I have found a few questions here related to the javascript decoding of a gzip string but I could not figure out how to use those. Please tell me how I can decode and get the original string.","Serve the string with an appropriate Content-Encoding, then the browser will decode it for you.",0.0,False,1,5938 +2019-02-10 15:03:18.307,How to remove unwanted python packages from the Base environment in Anaconda,"I am using Anaconda. I would like to know how to remove or uninstall unwanted packages from the base environment. I am using another environment for my coding purpose. +I tried to update my environment by using a yml file (Not base environment). Unexpectedly some packages were installed by the yml into the base environment. So now it has 200 python packages which the other environment also has. I want to clear the unwanted packages in the base environment, as I am not using any packages in the base environment. Also, my memory is full because of this. +Please give me a solution to remove unwanted packages in the base environment in anaconda. +It is very hard to remove each package one by one, therefore, I am looking for a better solution.","Please use the below code: +conda uninstall -n base <package_name>",0.0,False,1,5939 +2019-02-11 00:05:55.277,Pythonic way to split project into modules?,"Say, there is a module a which, among all other stuff, exposes some submodule a.b. +AFAICS, it is desired to maintain modules in such a fashion that one types import a, import a.b and then invokes something b-specific in the following way: a.b.b_specific_function() or a.a_specific_function(). +The question I'd like to ask is how to achieve such an effect? +There is directory a and there is source-code file a.py inside of it. It seems to be the logical choice, though it would look like import a.a then, rather than import a. The only way I see is to put a.py's code into the __init__.py in the a directory, though it is definitely wrong... +So how do I keep my namespaces clean?",You can put the code into __init__.py. There is nothing wrong with this for a small subpackage. If the code grows large it is also common to have a submodule with a repeated name like a/a.py and then inside __init__.py import it using from .a import *.,1.2,True,1,5940 +2019-02-11 11:28:57.127,Fastest way in numpy to sum over upper triangular elements with the least memory,"I need to perform a summation of the kind i Configure IDLE => Settings => Highlights there is a highlight setting for builtin names (default purple), including a few non-functions like Ellipsis. There is another setting for the names in def (function) and class statements (default blue). You can make def (and class) names be purple also. 
+This will not make function names purple when used because the colorizer does not know what the name will be bound to when the code is run.",1.2,True,1,5949 +2019-02-17 13:30:30.583,Count number of Triggers in a given Span of Time,"I've been working for a while with some cheap PIR modules and a raspberry pi 3. My aim is to use 4 of these guys to understand if a room is empty, and turn off some lights in that case. +Now, these lovely sensors aren't really precise. They false trigger from time to time, and they don't trigger right after their status has changed, and this makes things much harder. +I thought I could solve the problem by measuring a sort of ""density of triggers"", meaning how many triggers occurred during the last 60 seconds or something. +My question is how could I effectively implement this solution? I thought of building a sort of container and filling it with elements with a timer or something, but I'm not really sure this would do the trick. +Thank you!",How are you powering the PIR sensors? They should be powered with 5V. I had a similar problem with false triggers when I powered the PIR sensor with only 3.3V.,0.0,False,1,5950 +2019-02-18 02:33:10.543,"While debugging in pycharm, how to debug only through a certain iteration of the for loop?","I have a for loop in Python in the Pycharm IDE. I have 20 iterations of the for loop. However, the bug seems to be coming from the dataset looped through during the 18th iteration. Is it possible to skip the first 17 values of the for loop, and jump solely to debugging the 18th iteration? +Currently, I have been going through all 17 iterations to reach the 18th. The logic encompassed in the for loop is quite intricate and long. Hence, every debug cycle through each iteration takes very long. +Is there some way to skip to the desired iteration in Pycharm without in-depth debugging of the previous iterations?",You can set a breakpoint with a condition (i == 17 [right click on the breakpoint to set it]) at the start of the loop.,-0.1352210990936997,False,1,5951 +2019-02-18 17:11:25.750,How to evaluate the path to a python script to be executed within Jupyter Notebook,"Note: I am not simply asking how to execute a Python script within Jupyter, but how to evaluate a python variable which would then result in the full path of the Python script I want to execute. +In my particular scenario, some previous cell in my notebook generates a path based on some condition. +Example of two possible cases: + +script_path = /project_A/load.py +script_path = /project_B/load.py + +Then some time later, I have a cell where I just want to execute the script. Usually, I would just do: +%run -i /project_A/load.py +but I want to keep the cell's code generic by doing something like: +%run -i script_path +where script_path is a Python variable whose value is based on the conditions that are evaluated earlier in my Jupyter notebook. +The above would not work because Jupyter would then complain that it cannot find script_path.py. +Any clues how I can have a Python variable passed to the %run magic?","One hacky way would be to change the directory via %cd path +and then run the script with %run -i file.py +E: I know that this is not exactly what you were asking but maybe it helps with your problem.",0.0,False,1,5952 +2019-02-19 09:11:19.870,How to use pretrained word2vec vectors in doc2vec model?,"I am trying to implement doc2vec, but I am not sure how the input for the model should look like if I have pretrained word2vec vectors. 
+The problem is, that I am not sure how to theoretically use pretrained word2vec vectors for doc2vec. I imagine, that I could prefill the hidden layer with the vectors and the rest of the hidden layer fill with random numbers +Another idea is to use the vector as input for word instead of a one-hot-encoding but I am not sure if the output vectors for docs would make sense. +Thank you for your answer!","You might think that Doc2Vec (aka the 'Paragraph Vector' algorithm of Mikolov/Le) requires word-vectors as a 1st step. That's a common belief, and perhaps somewhat intuitive, by analogy to how humans learn a new language: understand the smaller units before the larger, then compose the meaning of the larger from the smaller. +But that's a common misconception, and Doc2Vec doesn't do that. +One mode, pure PV-DBOW (dm=0 in gensim), doesn't use conventional per-word input vectors at all. And, this mode is often one of the fastest-training and best-performing options. +The other mode, PV-DM (dm=1 in gensim, the default) does make use of neighboring word-vectors, in combination with doc-vectors in a manner analgous to word2vec's CBOW mode – but any word-vectors it needs will be trained-up simultaneously with doc-vectors. They are not trained 1st in a separate step, so there's not a easy splice-in point where you could provide word-vectors from elsewhere. +(You can mix skip-gram word-training into the PV-DBOW, with dbow_words=1 in gensim, but that will train word-vectors from scratch in an interleaved, shared-model process.) +To the extent you could pre-seed a model with word-vectors from elsewhere, it wouldn't necessarily improve results: it could easily send their quality sideways or worse. It might in some lucky well-managed cases speed model convergence, or be a way to enforce vector-space-compatibility with an earlier vector-set, but not without extra gotchas and caveats that aren't a part of the original algorithms, or well-described practices.",1.2,True,1,5953 +2019-02-21 02:24:51.223,How to convert every other character in a string to ascii in Python?,"I know how to convert characters to ascii and stuff, and I'm making my first encryption algorithm just as a little fun project, nothing serious. I was wondering if there was a way to convert every other character in a string to ascii, I know this is similar to some other questions but I don't think it's a duplicate. Also P.S. I'm fairly new to Python :)",Use ord() function to get ascii value of a character. You can then do a chr() of that value to get the character.,0.0,False,1,5954 +2019-02-21 05:36:13.653,Run python script by PHP from another server,"I am making APIs. +I'm using CentOS for web server, and another windows server 2016 for API server. +I'm trying to make things work between web server and window server. +My logic is like following flow. +1) Fill the data form and click button from web server +2) Send data to windows server +3) Python script runs and makes more data +4) More made data must send back to web server +5) Web server gets more made datas +6) BAMM! Datas append on browser! +I had made python scripts. +but I can't decide how to make datas go between two servers.. +Should I use ajax Curl in web server? +I was planning to send a POST type request by Curl from web server to Windows server. +But I don't know how to receipt those datas in windows server. +Please help! 
Thank you in advance.","First option: (Recommended) +You can create the python side as an API endpoint and from the PHP server, you need to call the python API. +Second option: +You can create the python side just like a normal webpage and whenever you call that page from PHP server you pass the params along with HTTP request, and after receiving data in python you print the data in JSON format.",1.2,True,1,5955 +2019-02-21 11:00:17.487,Kivy Android App - Switching screens with a swipe,"Every example I've found thus-far for development with Kivy in regards to switching screens is always done using a button, Although the user experience doesn't feel very ""native"" or ""Smooth"" for the kind of app I would like to develop. +I was hoping to incorperate swiping the screen to change the active screen. +I can sort of imagine how to do this by tracking the users on_touch_down() and on_touch_up() cords (spos) and if the difference is great enough, switch over to the next screen in a list of screens, although I can't envision how this could be implemented within the kv language +perhaps some examples could help me wrap my head around this better? +P.S. +I want to keep as much UI code within the kv language file as possible to prevent my project from producing a speghetti-code sort of feel to it. I'm also rather new to Kivy development altogether so I appologize if this question has an official answer somewhere and I just missed it.","You might want to use a Carousel instead of ScreenManager, but if you want that logic while using the ScreenManager, you'll certainly have to write some python code to manage that in a subclass of it, then use it in kv as a normal ScreenManager. Using previous and next properties to get the right screen to switch to depending on the action. This kind of logic is better done in python, and that doesn't prevent using the widgets in kv after.",1.2,True,1,5956 +2019-02-21 14:37:29.177,is it possible to code in python inside android studio?,"is it possible to code in python inside android studio? +how can I do it. +I have an android app that I am try to develop. and I want to code some part in python. +Thanks for the help +how can I do it. +I have an android app that I am try to develop. and I want to code some part in python. +Thanks for the help","If you mean coding part of your Android application in python (and another part for example in Java) it's not possible for now. However, you can write Python script and include it in your project, then write in your application part that will invoke it somehow. Also, you can use Android Studio as a text editor for Python scripts. To develop apps for Android in Python you have to use a proper library for it.",1.2,True,1,5957 +2019-02-22 09:08:55.793,"How to create .cpython-37 file, within __pycache__","I'm working on a project with a few scripts in the same directory, a pychache folder has been created within that directory, it contains compiled versions of two of my scripts. This has happened by accident I do not know how I did it. One thing I do know is I have imported functions between the two scripts that have been compiled. +I would like a third compiled python script for a separate file however I do not want to import any modules(if this is even the case). Does anyone know how I can manually create a .cpython-37 file? 
Any help is appreciated.","There is really no reason to worry about __pycache__ or *.pyc files - these are created and managed by the Python interpreter when it needs them and you cannot / should not worry about manually creating them. They contain a cached version of the compiled Python bytecode. Creating them manually makes no sense (and I am not aware of a way to do that), and you should probably let the interpreter decide when it makes sense to cache the bytecode and when it doesn't. +In Python 3.x, __pycache__ directories are created for modules when they are imported by a different module. AFAIK Python will not create a __pycache__ entry when a script is ran directly (e.g. a ""main"" script), only when it is imported as a module.",1.2,True,1,5958 +2019-02-22 10:05:07.620,Install python packages in windows server 2016 which has no internet connection,"I need to install python packages in a windows server 2016 sandbox for running a developed python model in production.This doesn't have internet connection. +My laptop is windows 2010 and the model is now running in my machine and need to push this to the server. +My question is how can i install all the required packages in my server which has no internet connection. +Thanks +Mithun","A simply way is to install the same python version on another machine having internet access, and use normally pip on that machine. This will download a bunch of files and installs them cleanly under Lib\site_packages of your Python installation. +You can they copy that folder to the server Python installation. If you want to be able to later add packages, you should keep both installations in sync: do not add or remove any package on the laptop without syncing with the server.",0.0,False,1,5959 +2019-02-22 18:47:07.843,How to write unit tests for text parser?,"For background, I am somewhat of a self-taught Python developer with only some formal training with a few CS courses in school. +In my job right now, I am working on a Python program that will automatically parse information from a very large text file (thousands of lines) that's a output result of a simulation software. I would like to be doing test driven development (TDD) but I am having a hard time understanding how to write proper unit tests. +My trouble is, the output of some of my functions (units) are massive data structures that are parsed versions of the text file. I could go through and create those outputs manually and then test but it would take a lot of time. The whole point of a parser is to save time and create structured outputs. Only testing I've been doing so far is trial and error manually which is also cumbersome. +So my question is, are there more intuitive ways to create tests for parsers? +Thank you in advance for any help!","Usually parsers are tested using a regression testing system. You create sample input sets and verify that the output is correct. Then you put the input and output in libraries. Each time you modify the code, you run the regression test system over the library to see if anything changes.",0.6730655149877884,False,1,5960 +2019-02-22 20:17:16.640,Specific reasons to favor pip vs. conda when installing Python packages,"I use miniconda as my default python installation. What is the current (2019) wisdom regarding when to install something with conda vs. pip? +My usual behavior is to install everything with pip, and only using conda if a package is not available through pip or the pip version doesn't work correctly. 
+Are there advantages to always favoring conda install? Are there issues associated with mixing the two installers? What factors should I be considering? + +OBJECTIVITY: This is not an opinion-based question! My question is: when I have the option to install a python package with pip or conda, how do I make an informed decision? Not ""tell me which is better"", but ""Why would I use one over the other, and will oscillating back & forth cause problems / inefficiencies?""","This is what I do: + +Activate your conda virtual env +Use pip to install into your virtual env +If you face any compatibility issues, use conda + +I recently ran into this when numpy / matplotlib freaked out and I used the conda build to resolve the issue.",0.3275988531455109,False,1,5961 +2019-02-24 14:21:54.997,how can I use python 3.6 if I have python 3.7?,"I'm trying to make a discord bot, and I read that I need to have an older version of Python so my code will work. I've tried using ""import discord"" on IDLE but an error message keeps on coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?","Just install it in a different folder (e.g. if the current one is in C:\Users\noob\AppData\Local\Programs\Python\Python37, install 3.6 to C:\Users\noob\AppData\Local\Programs\Python\Python36). +Now, when you want to run a script, right click the file and under ""edit with IDLE"" there will be multiple versions to choose from. Works on my machine :)",0.0,False,2,5962 +2019-02-24 14:21:54.997,how can I use python 3.6 if I have python 3.7?,"I'm trying to make a discord bot, and I read that I need to have an older version of Python so my code will work. I've tried using ""import discord"" on IDLE but an error message keeps on coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?","Install it in a different folder than your old Python 3.6, then update the path +Use Virtualenv and/or Pyenv +Use Docker + +Hope it helps!",0.0,False,2,5962 +2019-02-25 15:00:24.023,"Is a Pyramid ""model"" also a Pyramid ""resource""?","I'm currently in the process of learning how to use the Python Pyramid web framework, and have found the documentation to be quite excellent. +I have, however, hit a stumbling block when it comes to distinguishing the idea of a ""model"" (i.e. a class defined under SQLAlchemy's declarative system) from the idea of a ""resource"" (i.e. a means of defining access control lists on views for use with Pyramid's auth system). +I understand the above statements seem to show that I already understand the difference, but I'm having trouble understanding whether I should be making models resources (by adding the __acl__ attribute directly in the model class) or creating a separate resource class (which has the proper __parent__ and __name__ attributes) which represents the access to a view which uses the model. +Any guidance is appreciated.","I'm having trouble understanding whether I should be making models resources (by adding the __acl__ attribute directly in the model class) or creating a separate resource class + +The answer depends on what level of coupling you want to have. For a simple app, I would recommend making models resources just for simplicity's sake. But for a complex app with a high level of cohesion and a low level of coupling, it would be better to have models separated from resources.",0.2012947653214861,False,1,5963 +2019-02-25 22:42:31.903,Python Gtk3 - Custom Statusbar w/ Progressbar,"Currently I am working to learn how to use Gtk3 with Python 3.6.
So far I have been able to use a combination of resources to piece together a project I am working on, some old 2.0 references, some 3.0 shallow reference guides, and using the python3 interpreters help function. +However I am stuck at how I could customise the statusbar to display a progressbar. Would I have to modify the contents of the statusbar to add it to the end(so it shows up at the right side), or is it better to build my own statusbar? +Also how could I modify the progressbars color? Nothing in the materials list a method/property for it.","GtkStatusbar is a subclass of GtkBox. You can use any GtkBox method including pack_start and pack_end or even add, which is a method of GtkContainer. +Thus you can simply add you progressbar to statusbar.",1.2,True,1,5964 +2019-02-26 04:59:25.937,Can a consumer read records from a partition that stores data of particular key value?,Instead of creating many topics I'm creating a partition for each consumer and store data using a key. So is there a way to make a consumer in a consumer group read from partition that stores data of a specific key. If so can you suggest how it can done using kafka-python (or any other library).,"Instead of using the subscription and the related consumer group logic, you can use the ""assign"" logic (it's provided by the Kafka consumer Java client for example). +While with subscription to a topic and being part of a consumer group, the partitions are automatically assigned to consumers and re-balanced when a new consumer joins or leaves, it's different using assign. +With assign, the consumer asks to be assigned to a specific partition. It's not part of any consumer group. It's also mean that you are in charge of handling rebalancing if a consumer dies: for example, if consumer 1 get assigned partition 1 but at some point it crashes, the partition 1 won't be reassigned automatically to another consumer. It's up to you writing and handling the logic for restarting the consumer (or another one) for getting messages from partition 1.",0.0,False,1,5965 +2019-02-26 08:57:02.207,how to increase fps for raspberry pi for object detection,"I'm having low fps for real-time object detection on my raspberry pi +I trained the yolo-darkflow object detection on my own data set using my laptop windows 10 .. when I tested the model for real-time detection on my laptop with webcam it worked fine with high fps +However when trying to test it on my raspberry pi, which runs on Raspbian OS, it gives very low fps rate that is about 0.3 , but when I only try to use the webcam without the yolo it works fine with fast frames.. also when I use Tensorflow API for object detection with webcam on pi it also works fine with high fps +can someone suggest me something please? is the reason related to the yolo models or opencv or phthon? how can I make the fps rate higher and faster for object detection with webcam?",The raspberry pi not have the GPU procesors and because of that is very hard for it to do image recognition at a high fps .,0.0,False,2,5966 +2019-02-26 08:57:02.207,how to increase fps for raspberry pi for object detection,"I'm having low fps for real-time object detection on my raspberry pi +I trained the yolo-darkflow object detection on my own data set using my laptop windows 10 .. 
when I tested the model for real-time detection on my laptop with webcam it worked fine with high fps +However when trying to test it on my raspberry pi, which runs on Raspbian OS, it gives very low fps rate that is about 0.3 , but when I only try to use the webcam without the yolo it works fine with fast frames.. also when I use Tensorflow API for object detection with webcam on pi it also works fine with high fps +can someone suggest me something please? is the reason related to the yolo models or opencv or phthon? how can I make the fps rate higher and faster for object detection with webcam?","My detector on raspberry pi without any accelerator can reach 5 FPS. +I used SSD mobilenet, and quantize it after training. +Tensorflow Lite supplies a object detection demo can reach about 8 FPS on raspberry pi 4.",0.0,False,2,5966 +2019-02-26 10:41:52.910,"Python3: FileNotFoundError: [Errno 2] No such file or directory: 'train.txt', even with complete path","I'm currently working with Python3 on Jupyter Notebook. I try to load a text file which is in the exact same directory as my python notebook but it still doesn't find it. My line of code is: +text_data = prepare_text('train.txt') +and the error is a typical +FileNotFoundError: [Errno 2] No such file or directory: 'train.txt' +I've already tried to enter the full path to my text file but then I still get the same error. +Does anyone know how to solve this?","I found the answer. Windows put a secont .txt at the end of the file name, so I should have used train.txt.txt instead.",0.2012947653214861,False,1,5967 +2019-02-26 16:48:18.673,Write own stemmer for stemming,"I have a dataset of 27 files, each containing opcodes. I want to use stemming to map all versions of similar opcodes into the same opcode. For example: push, pusha, pushb, etc would all be mapped to push. +My dictionary contains 27 keys and each key has a list of opcodes as a value. Since the values contain opcodes and not normal english words, I cannot use the regular stemmer module. I need to write my own stemmer code. Also I cannot hard-code a custom dictionary that maps different versions of the opcodes to the root opcode because I have a huge dataset. +I think regex expression would be a good idea but I do not know how to use it. Can anyone help me with this or any other idea to write my own stemmer code?","I would recommend looking at the levenshtein distance metric - it measures the distance between two words in terms of character insertions, deletions, and replacements (so push and pusha would be distance 1 apart if you do the ~most normal thing of weighing insertions = deletions = replacements = 1 each). Based on the example you wrote, you could try just setting up categories that are all distance 1 from each other. However, I don't know if all of your equivalent opcodes will be so similar - if they're not leven might not work.",0.0,False,1,5968 +2019-02-26 18:30:41.820,Elementree Fromstring and iterparse in Python 3.x,"I am able to parse from file using this method: +for event, elem in ET.iterparse(file_path, events=(""start"", ""end"")): +But, how can I do the same with fromstring function? Instead of from file, xml content is stored in a variable now. But, I still want to have the events as before.","From the documentation for the iterparse method: + +...Parses an XML section into an element tree incrementally, and + reports what’s going on to the user. source is a filename or file + object containing XML data... 
+ +I've never used the etree python module, but ""or file object"" says to me that this method accepts an open file-like object as well as a file name. It's an easy thing to construct a file-like object around a string to pass as input to a method like this. +Take a look at the StringIO module.",0.0,False,1,5969 +2019-02-26 21:57:40.743,Why should I use tf.data?,"I'm learning tensorflow, and the tf.data API confuses me. It is apparently better when dealing with large datasets, but when using the dataset, it has to be converted back into a tensor. But why not just use a tensor in the first place? Why and when should we use tf.data? +Why isn't it possible to have tf.data return the entire dataset, instead of processing it through a for loop? When just minimizing a function of the dataset (using something like tf.losses.mean_squared_error), I usually input the data through a tensor or a numpy array, and I don't know how to input data through a for loop. How would I do this?","The tf.data module has specific tools which help in building a input pipeline for your ML model. A input pipeline takes in the raw data, processes it and then feeds it to the model. + + +When should I use tf.data module? + +The tf.data module is useful when you have a large dataset in the form of a file such as .csv or .tfrecord. tf.data.Dataset can perform shuffling and batching of samples efficiently. Useful for large datasets as well as small datasets. It could combine train and test datasets. + +How can I create batches and iterate through them for training? + +I think you can efficiently do this with NumPy and np.reshape method. Pandas can read data files for you. Then, you just need a for ... in ... loop to get each batch amd pass it to your model. + +How can I feed NumPy data to a TensorFlow model? + +There are two options to use tf.placeholder() or tf.data.Dataset. + +The tf.data.Dataset is a much easier implementation. I recommend to use it. Also, has some good set of methods. +The tf.placeholder creates a placeholder tensor which feeds the data to a TensorFlow graph. This process would consume more time feeding in the data.",1.2,True,1,5970 +2019-02-27 00:06:57.810,Pipenv: Multiple Environments,"Right now I'm using virtualenv and just switching over to Pipenv. Today in virtualenv I load in different environment variables and settings depending on whether I'm in development, production, or testingby setting DJANGO_SETTINGS_MODULE to myproject.settings.development, myproject.settings.production, and myproject.settings.testing. +I'm aware that I can set an .env file, but how can I have multiple versions of that .env file?","You should create different .env files with different prefixes depending on the environment, such as production.env or testing.env. With pipenv, you can use the PIPENV_DONT_LOAD_ENV=1 environment variable to prevent pipenv shell from automatically exporting the .env file and combine this with export $(cat .env | xargs). +export $(cat production.env | xargs) && PIPENV_DONT_LOAD_ENV=1 pipenv shell would configure your environment variables for production and then start a shell in the virtual environment.",1.2,True,1,5971 +2019-02-27 05:59:35.137,How to architect a GUI application with UART comms which stays responsive to the user,"I'm writing an application in PyQt5 which will be used for calibration and test of a product. The important details: + +The product under test uses an old-school UART/serial communication link at 9600 baud. 
+...and the test / calibration operation involves communicating with another device which has a UART/serial communication link at 300 baud(!) +In both cases, the communication protocol is ASCII text with messages terminated by a newline \r\n. + +During the test/calibration cycle the GUI needs to communicate with the devices, take readings, and log those readings to various boxes in the screen. The trouble is, with the slow UART communications (and the long time-outs if there is a comms drop-out) how do I keep the GUI responsive? +The Minimally Acceptable solution (already working) is to create a GUI which communicates over the serial port, but the user interface becomes decidedly sluggish and herky-jerky while the GUI is waiting for calls to serial.read() to either complete or time out. +The Desired solution is a GUI which has a nice smooth responsive feel to it, even while it is transmitting and receiving serial data. +The Stretch Goal solution is a GUI which will log every single character of the serial communications to a text display used for debugging, while still providing some nice ""message-level"" abstraction for the actual logic of the application. +My present ""minimally acceptable"" implementation uses a state machine where I run a series of short functions, typically including the serial.write() and serial.read() commands, with pauses to allow the GUI to update. But the state machine makes the GUI logic somewhat tricky to follow; the code would be much easier to understand if the program flow for communicating to the device was written in a simple linear fashion. +I'm really hesitant to sprinkle a bunch of processEvents() calls throughout the code. And even those don't help when waiting for serial.read(). So the correct solution probably involves threading, signals, and slots, but I'm guessing that ""threading"" has the same two Golden Rules as ""optimization"": Rule 1: Don't do it. Rule 2 (experts only): Don't do it yet. +Are there any existing architectures or design patterns to use as a starting point for this type of application?","Okay for the past few days I've been digging, and figured out how to do this. Since there haven't been any responses, and I do think this question could apply to others, I'll go ahead and post my solution. Briefly: + +Yes, the best way to solve this is with with PyQt Threads, and using Signals and Slots to communicate between the threads. +For basic function (the ""Desired"" solution above) just follow the existing basic design pattern for PyQt multithreaded GUI applications: + + +A GUI thread whose only job is to display data and relay user inputs / commands, and, +A worker thread that does everything else (in this case, including the serial comms). + +One stumbling point along the way: I'd have loved to write the worker thread as one linear flow of code, but unfortunately that's not possible because the worker thread needs to get info from the GUI at times. + + +The only way to get data back and forth between the two threads is via Signals and Slots, and the Slots (i.e. the receiving end) must be a callable, so there was no way for me to implement some type of getdata() operation in the middle of a function. Instead, the worker thread had to be constructed as a bunch of individual functions, each one of which gets kicked off after it receives the appropriate Signal from the GUI. 
+ +Getting the serial data monitoring function (the ""Stretch Goal"" above) was actually pretty easy -- just have the low-level serial transmit and receive routines already in my code emit Signals for that data, and the GUI thread receives and logs those Signals. + +All in all it ended up being a pretty straightforward application of existing principles, but I'm writing it down so hopefully the next guy doesn't have to go down so many blind alleys like I did along the way.",0.0,False,1,5972 +2019-02-27 13:33:17.083,how to register users of different kinds using different tables in django?,"I'm new to django, I want to register users using different tables for different users like students, teaching staff, non teaching staff, 3 tables. +How can i do it instead of using default auth_users table for registration","In Django authentication, there is Group model available which have many to many relationship with User model. You can add students, teaching staff and non teaching staff to Group model for separating users by their type.",0.0,False,2,5973 +2019-02-27 13:33:17.083,how to register users of different kinds using different tables in django?,"I'm new to django, I want to register users using different tables for different users like students, teaching staff, non teaching staff, 3 tables. +How can i do it instead of using default auth_users table for registration","cf Sam's answer for the proper solutions from a technical POV. From a design POV, ""student"", ""teaching staff"" etc are not entities but different roles a user can have. +One curious things with living persons and real-life things in general is that they tend to evolve over time without any respect for our well-defined specifications and classifications - for example it's not uncommon for a student to also have teaching duties at some points, for a teacher to also be studying some other topic, or for a teacher to stop teaching and switch to more administrative tasks. If you design your model with distinct entities instead of one single entitie and distinct roles, it won't properly accomodate those kind of situations (and no, having one account as student and one as teacher is not a proper solution either). +That's why the default user model in Django is based on one single entity (the User model) and features allowing roles definitions (groups and permissions) in such a way that one user can have many roles, whether at the same time or in succession.",0.0,False,2,5973 +2019-02-28 01:29:15.947,How do I know if a file has finished copying?,"I've been given a simple file-conversion task: whenever an MP4 file is in a certain directory, I do some magic to it and move it to a different directory. Nice and straightforward, and easy to automate. +However, if a user is copying some huge file into the directory, I worry that my script might catch it mid-copy, and only have half of the file to work with. +Is there a way, using Python 3 on Windows, to check whether a file is done copying (in other words, no process is currently writing to it)? +EDIT: To clarify, I have no idea how the files are getting there: my script just needs to watch a shared network folder and process files that are put there. 
+They might be copied from a local folder I don't have access to, or placed through SCP, or downloaded from the web; all I know is the destination.","You could try comparing the size of the file over time: watch for new files in the folder, capture the name of a new file, and check whether its size still increases within x amount of time. If you have a script already, you could show the code...",0.0,False,1,5974 +2019-02-28 03:04:27.143,Viewing Graph from saved .pbtxt file on Tensorboard,"I just have a graph.pbtxt file. I want to view the graph in tensorboard. But I am not aware of how to do that. Do I have to write any python script or can I do it from the terminal itself? Kindly help me to know the steps involved.","Open tensorboard and use the ""Upload"" button on the left to upload the pbtxt file; it will directly open the graph in tensorboard.",0.9866142981514304,False,1,5975 +2019-02-28 16:24:33.333,Intersection of interpol1d objects,"I have 2 cumulative distributions that I want to find the intersection of. To get an underlying function, I used the scipy interpol1d function. What I'm trying to figure out now is how to calculate their intersection. Not sure how I can do it. Tried fsolve, but I can't find how to restrict the range in which to search for a solution (the domain is limited).","Use scipy.optimize.brentq for bracketed root-finding: +brentq(lambda x: interp1d(xx, yy)(x) - interp1d(xxx, yyy)(x), -1, 1)",0.0,False,1,5976 +2019-02-28 18:54:51.120,How to make depth of nii images equal?,"I have some nii images, each having the same height and width but a different depth. I need to make the depth of each image equal; how can I do that? Also, I didn't find any Python code which can help me.","Once you have defined the depth you want for all volumes, let it be D, you can instantiate an image (called a volume when D > 1) of dimensions W x H x D for every volume you have. +Then you can fill every such volume, pixel by pixel, by mapping the pixel position onto the original volume and retrieving the value of the pixel by interpolating the values in neighboring pixels. +For example, a pixel (i_x, i_y, i_z) in the new volume will be mapped to a point (i_x, i_y, i_z') of the old volume. One of the simplest interpolation methods is linear interpolation: the value of (i_x, i_y, i_z) is a weighted average of the values (i_x, i_y, floor(i_z')) and (i_x, i_y, floor(i_z') + 1).",0.0,False,1,5977 +2019-02-28 21:02:20.790,Tensorflow data pipeline: Slow with caching to disk - how to improve evaluation performance?,"I've built a data pipeline. Pseudo code is as follows: + +dataset -> +dataset = augment(dataset) +dataset = dataset.batch(35).prefetch(1) +dataset = set_from_generator(to_feed_dict(dataset)) # expensive op +dataset = Cache('/tmp', dataset) +dataset = dataset.unbatch() +dataset = dataset.shuffle(64).batch(256).prefetch(1) +to_feed_dict(dataset) + +1 to 5 actions are required to generate the pretrained model outputs. I cache them as they do not change throughout epochs (the pretrained model weights are not updated). 5 to 8 actions prepare the dataset for training. +Different batch sizes have to be used, as the pretrained model inputs are of a much bigger dimensionality than the outputs. +The first epoch is slow, as it has to evaluate the pretrained model on every input item to generate templates and save them to the disk. Later epochs are faster, yet they're still quite slow - I suspect the bottleneck is reading the disk cache. +What could be improved in this data pipeline to reduce the issue?
+Thank you!","prefetch(1) means that there will be only one element prefetched, I think you may want to have it as big as the batch size or larger. +After first cache you may try to put it second time but without providing a path, so it would cache some in the memory. +Maybe your HDD is just slow? ;) +Another idea is you could just manually write to compressed TFRecord after steps 1-4 and then read it with another dataset. Compressed file has lower I/O but causes higher CPU usage.",0.0,False,1,5978 +2019-03-01 11:32:59.497,Get data from an .asp file,"My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The url is a .asp file. +She has to put the data in an Excel sheet. The entries are student names, numbers, ID card number, telephone, etc. There are thousands of entries. HR students alone has 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste. +I can handle Python openpyxl reasonably and I have heard of web-scraping, which I believe Python can do. +I don't know what .asp is. +Could you please give me some tips, pointers, about how to get the data with Python? +Can I automate this task? +Is this a case for MySQL? (About which I know nothing.)","Try using the tool called Octoparse. +Disclaimer: I've never used it myself, but only came close to using it. So, from my knowledge of its features, I think it would be useful for your need.",0.2012947653214861,False,2,5979 +2019-03-01 11:32:59.497,Get data from an .asp file,"My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The url is a .asp file. +She has to put the data in an Excel sheet. The entries are student names, numbers, ID card number, telephone, etc. There are thousands of entries. HR students alone has 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste. +I can handle Python openpyxl reasonably and I have heard of web-scraping, which I believe Python can do. +I don't know what .asp is. +Could you please give me some tips, pointers, about how to get the data with Python? +Can I automate this task? +Is this a case for MySQL? (About which I know nothing.)","This is a really broad question and not really in the style of Stack Overflow. To give you some pointers anyway. In the end .asp files, as far as I know, behave like normal websites. Normal websites are interpreted in the browser like HTML, CSS etc. This can be parsed with Python. There are two approaches to this that I have used in the past that work. One is to use a library like requests to get the HTML of a page and then read it using the BeautifulSoup library. This gets more complex if you need to visit authenticated pages. The other option is to use Selenium for python. This module is more a tool to automate browsing itself. You can use this to automate visiting the website and entering login credentials and then read content on the page. There are probably more options which is why this question is too broad. Good luck with your project though! +EDIT: You do not need MySql for this. 
Especially not if the required output is an Excel file, which I would generate as a CSV instead because standard Python works better with CSV files than Excel.",0.2012947653214861,False,2,5979 +2019-03-01 22:45:49.617,Pygame/Python/Terminal/Mac related,"I'm a beginner, I have really hit a brick wall, and would greatly appreciate any advice someone more advanced can offer. +I have been having a number of extremely frustrating issues the past few days, which I have been round and round google trying to solve, tried all sorts of things to no avail. +Problem 1) +I can't import pygame in Idle with the error: +ModuleNotFoundError: No module named 'pygame' - even though it is definitely installed, as in terminal, if I ask pip3 to install pygame it says: +Requirement already satisfied: pygame in /usr/local/lib/python3.7/site-packages (1.9.4) +I think there may be a problem with several conflicting versions of python on my computer, as when i type sys.path in Idle (which by the way displays Python 3.7.2 ) the following are listed: +'/Users/myname/Documents', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload', '/Users/myname/Library/Python/3.7/lib/python/site-packages', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages' +So am I right in thinking pygame is in the python3.7/sitepackages version, and this is why idle won't import it? I don't know I'm just trying to make sense of this. I have absoloutely no clue how to solve this,""re-set the path"" or whatever. I don't even know how to find all of these versions of python as only one appears in my applications folder, the rest are elsewhere? +Problem 2) +Apparently there should be a python 2.7 system version installed on every mac system which is vital to the running of python regardless of the developing environment you use. Yet all of my versions of python seem to be in the library/downloaded versions. Does this mean my system version of python is gone? I have put the computer in recovery mode today and done a reinstall of the macOS mojave system today, so shouldn't any possible lost version of python 2.7 be back on the system now? +Problem 3) +When I go to terminal, frequently every command I type is 'not found'. +I have sometimes found a temporary solution is typing: +export PATH=""/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"" +but the problems always return! +As I say I also did a system reinstall today but that has helped none! +Can anybody please help me with these queries? I am really at the end of my tether and quite lost, forgive my programming ignorance please. Many thanks.","You should actually add the export PATH=""/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"" to your .bash_profile (if you are using bash). Do this by opening your terminal, verifying that it says ""bash"" at the top. If it doesn't, you may have a .zprofile instead. Type ls -al and it will list all the invisible files. If you have .bash_profile listed, use that one. If you have .zprofile, use that. +Type nano .bash_profile to open and edit the profile and add the command to the end of it. This will permanently add the path to your profile after you restart the terminal. +Use ^X to exit nano and type Y to save your changes. 
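+For clarity, the line you add at the end of .bash_profile is the same export line from your question, for example:
+export PATH=""/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin""
+(This is just an example; adjust the paths to your own setup.)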
Then you can check that it works when you try to run the program from IDLE.",0.0,False,1,5980 +2019-03-03 16:50:01.227,Force screen session to use specific version of python,"I am using a screen on my server. When I ask which python inside the screen I see it is using the default /opt/anaconda2/bin/python version which is on my server, but outside the screen when I ask which python I get ~/anaconda2/bin/python. I want to use the same python inside the screen but I don't know how can I set it. Both path are available in $PATH","You could do either one of the following: + +Use a virtual environment (install virtualenv). You can specify +the version of Python you want to use when creating the virtual +environment with -p /opt/anaconda2/bin/python. +Use an alias: +alias python=/opt/anaconda2/bin/python.",0.3869120172231254,False,1,5981 +2019-03-04 17:51:31.130,How can i remove an object in python?,"i'm trying to create a chess simulator. +consider this scenario: +there is a black rook (instance object of Rook class) in square 2B called rook1. +there is a white rook in square 2C called rook2. +when the player moves rook1 to square 2C , the i should remove rook2 object from memory completely. +how can i do it? +P.S. i'v already tried del rook2 , but i don't know why it doesn't work.","Trying to remove objects from memory is the wrong way to go. Python offers no option to do that manually, and it would be the wrong operation to perform anyway. +You need to alter whatever data structure represents your chess board so that it represents a game state where there is a black rook at c2 and no piece at b2, rather than a game state where there is a black rook at b2 and a white rook at c2. In a reasonable Python beginner-project implementation of a chess board, this probably means assigning to cells in a list of lists. No objects need to be manually removed from memory to do this. +Having rook1 and rook2 variables referring to your rooks is unnecessary and probably counterproductive.",0.999329299739067,False,1,5982 +2019-03-04 22:00:24.150,Text classification beyond the keyword dependency and inferring the actual meaning,"I am trying to develop a text classifier that will classify a piece of text as Private or Public. Take medical or health information as an example domain. A typical classifier that I can think of considers keywords as the main distinguisher, right? What about a scenario like bellow? What if both of the pieces of text contains similar keywords but carry a different meaning. +Following piece of text is revealing someone's private (health) situation (the patient has cancer): +I've been to two clinics and my pcp. I've had an ultrasound only to be told it's a resolving cyst or a hematoma, but it's getting larger and starting to make my leg ache. The PCP said it can't be a cyst because it started out way too big and I swear I have NEVER injured my leg, not even a bump. I am now scared and afraid of cancer. I noticed a slightly uncomfortable sensation only when squatting down about 9 months ago. 3 months ago I went to squat down to put away laundry and it kinda hurt. The pain prompted me to examine my leg and that is when I noticed a lump at the bottom of my calf muscle and flexing only made it more noticeable. Eventually after four clinic visits, an ultrasound and one pcp the result seems to be positive and the mass is getting larger. +[Private] (Correct Classification) +Following piece of text is a comment from a doctor which is definitely not revealing is health situation. 
It introduces the weaknesses of a typical classifier model: +Don’t be scared and do not assume anything bad as cancer. I have gone through several cases in my clinic and it seems familiar to me. As you mentioned it might be a cyst or a hematoma and it's getting larger, it must need some additional diagnosis such as biopsy. Having an ache in that area or the size of the lump does not really tells anything bad. You should visit specialized clinics few more times and go under some specific tests such as biopsy, CT scan, pcp and ultrasound before that lump become more larger. +[Private] (Which is the Wrong Classification. It should be [Public]) +The second paragraph was classified as private by all of my current classifiers, for obvious reason. Similar keywords, valid word sequences, the presence of subjects seemed to make the classifier very confused. Even, both of the content contains subjects like I, You (Noun, Pronouns) etc. I thought about from Word2Vec to Doc2Vec, from Inferring meaning to semantic embeddings but can't think about a solution approach that best suits this problem. +Any idea, which way I should handle the classification problem? Thanks in advance. +Progress so Far: +The data, I have collected from a public source where patients/victims usually post their own situation and doctors/well-wishers reply to those. I assumed while crawling is that - posts belongs to my private class and comments belongs to public class. All to gether I started with 5K+5K posts/comments and got around 60% with a naive bayes classifier without any major preprocessing. I will try Neural Network soon. But before feeding into any classifier, I just want to know how I can preprocess better to put reasonable weights to either class for better distinction.","(1) Bayes is indeed a weak classifier - I'd try SVM. If you see improvement than further improvement can be achieved using Neural Network (and perhaps Deep Learning) +(2) Feature engineering - use TFiDF , and try other things (many people suggest Word2Vec, although I personally tried and it did not improve). Also you can remove stop words. +One thing to consider, because you give two anecdotes is to measure objectively the level of agreement between human beings on the task. It is sometime overlooked that two people given the same text can disagree on labels (some might say that a specific document is private although it is public). Just a point to notice - because if e.g. the level of agreement is 65%, then it will be very difficult to build an algorithm that is more accurate.",-0.2655860252697744,False,1,5983 +2019-03-05 03:08:47.917,How do you profile a Python script from Windows command line using PyPy and vmprof?,"I have a Python script that I want to profile using vmprof to figure out what parts of the code are slow. Since PyPy is generally faster, I also want to profile the script while it is using the PyPy JIT. If the script is named myscript.py, how do you structure the command on the command line to do this? +I have already installed vmprof using + +pip install vmprof","I would be suprised if it works, but the command is pypy -m vmprof myscript.py . I would expect it to crash saying vmprof is not supported on windows.",0.0,False,1,5984 +2019-03-06 00:43:24.310,How to update python 3.6 to 3.7 using Mac terminal,"OK +I was afraid to use the terminal, so I installed the python-3.7.2-macosx10.9 package downloaded from python.org +Ran the certificate and shell profile scripts, everything seems fine. 
+Now the ""which python3"" has changed the path from 3.6 to the new 3.7.2 +So everything seems fine, correct? +My question (of 2) is what's going on with the old python3.6 folder still in the applications folder. Can you just delete it safely? Why when you install a new version does it not at least ask you if you want to update or install and keep both versions? +Second question, how would you do this from the terminal? +I see the first step is to sudo to the root. +I've forgotten the rest. +But from the terminal, would this simply add the new version and leave +the older one like the package installer? +It's pretty simple to use the package installer and then delete a folder. +So, thanks in advance. I'm new to python and have not much confidence +using the terminal and all the powerful shell commands. +And yeah I see all the Brew enthusiasts. I DON'T want to use Brew for the moment. +The python snakes nest of pathways is a little confusing, for the moment. +I don't want to get lost with a zillion pathways from Brew because it's +confusing for the moment. +I love Brew, leave me alone.","Each version of the Python installation is independent of each other. So its safe to delete the version you don't want, but be cautious of this because it can lead to broken dependencies :-). +You can run any version by adding the specific version i.e $python3.6 or $python3.7 +The best approach is to use virtual environments for your projects to enhance consistency. see pipenv",0.0,False,1,5985 +2019-03-07 02:42:18.347,How do I figure out what dependencies to install when I copy my Django app from one system to another?,"I'm using Django and Python 3.7. I want to write a script to help me easily migrate my application from my local machien (a Mac High Sierra) to a CentOS Linux instance. I'm using a virtual environment in both places. There are many things that need to be done here, but to keep the question specific, how do I determine on my remote machine (where I'm deploying my project to), what dependencies are lacking? I'm using rsync to copy the files (minus the virtual environment)","On the source system execute pip freeze > requirements.txt, then copy the requiremnts.txt to the target system and then on the target system install all the dependencies with pip install -r requirements.txt. Of course you will need to activate the virtual environments on both systems before execute the pip commands. +If you are using a source code management system like git it is a good idea to keep the requirements.txt up to date in your source code repository.",1.2,True,1,5986 +2019-03-07 10:03:42.277,Does angular server and flask server have both to be running at the same?,"I'm new to both angular and flask framework so plz be patient with me. +I'm trying to build a web app with flask as a backend server and Angular for the frontend (I didn't start it yet), and while gathering infos and looking at tutorials and some documentation (a little bit) I'm wondering: +Does Angular server and flask server need both to be running at the same time or will just flask be enough? Knowing that I want to send data from the server to the frontend to display and collecting data from users and sending it to the backend. +I noticed some guys building the angular app and using the dist files but I don't exactly know how that works. +So can you guys suggest what should I have to do or how to proceed with this? +Thank you ^^","Angular does not need a server. It's a client-side framework so it can be served by any server like Flask. 
+Probably in most tutorials, the backend is served by nodejs, not Flask.",1.2,True,1,5987 +2019-03-08 19:25:09.250,Change color of single word in Tk label widget,"I would like to change the font color of a single word in a Tkinter label widget. +I understand that something similar to what I would like done can be achieved with a Text widget, for example making the word ""YELLOW"" show in yellow: +self.text.tag_config(""tag_yel"", fg=clr_yellow) +self.text.highligh_pattern(""YELLOW"", ""tag_yel"") +But my text is static and all I want is to change the word ""YELLOW"" to show in a yellow font and ""RED"" in a red font, and I cannot seem to figure out how to change the text color without changing it all with label.config(fg=clr). +Any help would be appreciated","You cannot do what you want. A label supports only a single foreground color and a single background color. The solution is to use a text or canvas widget, or to use two separate labels.",1.2,True,1,5988 +2019-03-11 12:10:11.213,Running python directly in terminal,"Is it possible to execute short python expressions in one line in terminal, without passing a file? +e.g. (borrowing from how I would write an awk expression) +python 'print(""hello world"")'","python3 -c ""print('Hello')"" +Use the -c flag as above.",1.2,True,2,5989 +2019-03-11 12:10:11.213,Running python directly in terminal,"Is it possible to execute short python expressions in one line in terminal, without passing a file? +e.g. (borrowing from how I would write an awk expression) +python 'print(""hello world"")'","For completeness, I found you can also feed a here-string to python. +python <<< 'print(""hello world"")'",0.0,False,2,5989 +2019-03-11 13:21:12.590,How to save and load my neural network model after training along with weights in python?,"I have trained a single layer neural network model in python (a simple model without keras and tensorflow). +How can I save it after training along with the weights in python, and how do I load it later?","So you write it down yourself. You need some simple steps: + +In your code for the neural network, store the weights in a variable. It can be simply done by using self.weights. The weights are numpy ndarrays; for example, if the weights are between a layer with 10 neurons and a layer with 100 neurons, they form a 10 * 100 (or 100 * 10) ndarray. +Use numpy.save to save the ndarray. +For the next use of your network, use numpy.load to load the weights. +In the first initialization of your network, use the weights you've loaded. +Don't forget: if your network is trained, the weights should be frozen. This can be done by zeroing the learning rate.",0.1352210990936997,False,1,5990 +2019-03-12 12:23:21.577,tf.gradient acting like tfp.math.diag_jacobian,"I try to calculate noise for input data using the gradient of the loss function from the input-data: +my_grad = tf.gradients(loss, input) +loss is an array of size (n x 1) where n is the number of datasets, input is an array of (n x m) where m is the size of a single dataset. +I need my_grad to be of size (n x m) - so for each dataset the gradient is calculated. But by definition the gradients where i!=j are zero - but tf.gradients allocates a huge amount of memory and runs for pretty much forever...
+A version, which calulates the gradients only where i=j would be great - any Idea how to get there?","I suppose I have found a solution: +my_grad = tf.gradients(tf.reduce_sum(loss), input) +ensures, that the cross dependencies i!=j are ignored - that works really nicely and fast..",0.0,False,1,5991 +2019-03-12 14:50:25.703,Lost my python.exe in Pycharm with Anaconda3,"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in Pycharm. +It was located in C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere! +Yet, all the packages are here (in the site-packages folder), and only the C:\users\my_name\Anaconda3\pythonw.exe is available. +With the latest however, some packages I installed on top of those available in Anaconda3 won't be recognized. +Therefore, how to get back the python.exe file?","The answer repeats the comment to the question. +I had the same issue once after Anaconda update - python.exe was missing. It was Anaconda 3 installed to Program Files folder by MS Visual Studio (Python 3.6 on Windows10 x64). +To solve the problem I manually copied python.exe file from the most fresh python package available (folder pkgs then folder like python-3.6.8-h9f7ef89_7).",1.2,True,3,5992 +2019-03-12 14:50:25.703,Lost my python.exe in Pycharm with Anaconda3,"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in Pycharm. +It was located in C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere! +Yet, all the packages are here (in the site-packages folder), and only the C:\users\my_name\Anaconda3\pythonw.exe is available. +With the latest however, some packages I installed on top of those available in Anaconda3 won't be recognized. +Therefore, how to get back the python.exe file?","My Python.exe was missing today in my existing environment in anaconda, so I clone my environment with anaconda to recreate Python.exe and use it again in Spyder.",0.0,False,3,5992 +2019-03-12 14:50:25.703,Lost my python.exe in Pycharm with Anaconda3,"Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in Pycharm. +It was located in C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere! +Yet, all the packages are here (in the site-packages folder), and only the C:\users\my_name\Anaconda3\pythonw.exe is available. +With the latest however, some packages I installed on top of those available in Anaconda3 won't be recognized. +Therefore, how to get back the python.exe file?","I just had the same issue and found out that Avast removed it because it thought it was a threat. I found it in Avast -> Protection -> Virus Chest. And from there, you have the option to restore it.",0.3869120172231254,False,3,5992 +2019-03-12 18:13:12.880,trouble with appending scores in python,"the code is supposed to give 3 questions with 2 attempts. if the answer is correct the first try, 3 points. second try gives 1 point. if second try is incorrect, the game will end. +however, the scores are not adding up to create a final score after the 3 rounds. 
How do I make it so that it does that?","First, move import random to the top of the script, because you're importing it every time in the loop. Also, the score is calculated only on the last pass of the program, since you empty scoreList[] each time.",0.6730655149877884,False,1,5993 +2019-03-13 05:05:14.420,Accessing Luigi visualizer on AWS,"I’ve been using the Luigi visualizer for pipelining my python code. +Now I’ve started using an aws instance, and want to access the visualizer from my own machine. +Any ideas on how I could do that?","We had the very same problem today on GCP, and solved it with the following steps: +setting firewall rules for incoming TCP connections on the port used by the service (which by default is 8082); +installing an apache2 server on the instance with a site.conf configuration that resolves incoming requests on ip-of-instance:8082. +That's it. Hope this can help.",0.2012947653214861,False,1,5994 +2019-03-13 09:24:24.310,"Async, multithreaded scraping in Python with limited threads","We have to refactor our scraping algorithm. To speed it up, we came to the conclusion to multi-thread the processes (and limit them to a max of 3). Generally speaking, scraping consists of the following aspects: +Scraping (async request, takes approx 2 sec) +Image processing (async per image, approx 500ms per image) +Changing the source item in the DB (async request, approx 2 sec) +What I am aiming to do is create a batch of scraping requests and, while looping through them, create a stack of consequent async operations: process the images, and as soon as the images are processed -> change the source item. +In other words - scraping goes on, but image processing and changing the source items must run in separate, limited async threads. +The only thing I don't know is how to stack the batch and limit the threads. +Has anyone come across the same task, and what approach have you used?","What you're looking for is the consumer-producer pattern. Just create 3 different queues, and when you process an item from one of them, queue new work in another. Then you can run 3 different threads, each of them processing one queue.
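+A minimal sketch of the pattern with Python's standard library (the queue names are illustrative, and process_images is a hypothetical helper):
+import threading, queue
+image_q = queue.Queue(maxsize=3)            # a bounded queue limits concurrent work
+db_q = queue.Queue()
+def image_worker():
+    while True:
+        item = image_q.get()
+        db_q.put(process_images(item))      # hand the result to the DB-update queue
+        image_q.task_done()
+for _ in range(3):
+    threading.Thread(target=image_worker, daemon=True).start()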
",1.2,True,1,5995 +2019-03-13 20:16:42.690,Pymongo inserts _id in original array after insert_many .how to avoid insertion of _id?,"Pymongo inserts _id into the original array after insert_many. How can I avoid the insertion of _id? And why is the original array updated with _id? Please explain with an example, if anybody knows. Thanks in advance.","The pymongo driver explicitly inserts an _id of type ObjectId into the original array, hence the original array gets updated before inserting into mongo. This is the expected behaviour of pymongo for an insert_many query, as per my previous experience. Hope this answers your question.",1.2,True,1,5996 +2019-03-13 21:29:05.987,how can i prevent the user from closing my cmd window in a python script on windows,"Is there any way to prevent the user from closing the cmd window of a python script on windows, or maybe just disable the (X) close button? I have looked for answers already but I couldn't find anything that would help me","I don't think it's possible. What you can do instead is not display the cmd window at all (run the script as a background worker) and make it a hidden process with system rights, so that it can't be shut down until it finishes.",0.0,False,1,5997 +2019-03-14 00:37:45.023,regex python multiline,"How can I search for patterns in text that cover multiple lines and have fixed positions relative to each other? For example, a pattern consisting of 3 letters x directly below each other, which I want to find at any position in the line, not just at the beginning. +Thank you in advance for the answer!","I believe the problem you are asking about is ""Find patterns that appear at the same offset in a series of lines."" +I do not think this describes a regular language, so you would need to draw on Python's extended regex features to have a chance at a regex-based solution. But I do not believe Python supports sufficiently extended features to accomplish this task [1]. +If it is acceptable that they occur at a particular offset (rather than ""any offset, so long as the offset is consistent""), then something like this should work: +/^.{OFFSET}PATTERN.*\n^.{OFFSET}PATTERN.*\n^.{OFFSET}PATTERN/, using the MULTILINE flag so that ^ matches the beginning of a series of lines instead of just the beginning of the entire text. +[1] In particular, you could use a backreference to capture the text preceding the desired pattern on one line, but I do not think you can query the length of the captured content ""inline"". You could search for the same leading text again on the next line, but that does not sound like what you want.",0.0,False,1,5998 +2019-03-14 15:52:38.597,Faster pytorch dataset file,"I have the following problem: I have many files of 3D volumes that I open to extract a bunch of numpy arrays. +I want to get those arrays randomly, i.e. in the worst case I open as many 3D volumes as the number of numpy arrays I want to get, if all those arrays are in separate files. +The IO here isn't great; I open a big file only to get a small numpy array from it. +Any idea how I can store all these arrays so that the IO is better? +I can't pre-read all the arrays and save them all in one file, because then that file would be too big to open in RAM. +I looked up LMDB, but it all seems to be about Caffe. +Any idea how I can achieve this?","One trivial solution can be pre-processing your dataset and saving multiple smaller crops of the original 3D volumes separately. This way you sacrifice some disk space for more efficient IO. +Note that you can make a trade-off with the crop size here: saving bigger crops than you need for input allows you to still do random crop augmentation on the fly. If you save overlapping crops in the pre-processing step, then you can ensure that all possible random crops of the original dataset can still be produced. +Alternatively you may try using a custom data loader that retains the full volumes for a few batches. Be careful, this might create some correlation between batches. Since many machine learning algorithms rely on i.i.d. samples (e.g. Stochastic Gradient Descent), correlated batches can easily cause some serious mess.
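+A rough sketch of the pre-processing idea (the file names and crop sizes are made up):
+import numpy as np
+volume = np.load('volume_000.npy')
+step, size = 32, 64                          # overlapping crops along the first axis
+for i, start in enumerate(range(0, volume.shape[0] - size + 1, step)):
+    np.save(f'crop_000_{i}.npy', volume[start:start + size])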
",0.0,False,1,5999 +2019-03-14 19:33:03.197,How does multiplexing in Django sockets work?,"I am new to this part of web development and was trying to figure out a way of creating a web app with the basic specifications of the example below: +A user1 opens a page with a textbox (something where he can add text or so), and it will be modified as he decides. +If user1 has problems he can invite another user2 to help with the typing. +User2 (when logged in to the Channel/Socket) will be able to modify that field, and the modifications made will be shown to user1 in real time, and vice versa. +Or another example is a room on CodeAcademy: +Imagine that I am learning a new coding language; however, in the middle of it I get stuck and have to ask for help. +So I go ahead and ask another user for help. This user accesses the page through a WebSocket (or something related to that). +The user helps me by changing my code and adding some comments to it in real time, and I will also be able to ask questions through it (real-time communication). +My question is: will I be able to develop such an app using Django Channels 2 and multiplexing? Or is it better to move to NodeJS or something related to that? +Note: I do have more experience working with python/django, so it would be more productive for me right now if I could find a way of working with this combo.","This is definitely possible. There will be lots of possibilities, but I would recommend the following. +Have a page with the code on it. The page has some websocket JS code that can connect to a Channels Consumer. +The JS does 2 simple things. When the code on the screen is updated, send a message to the Consumer with the new text (you can optimize this later). When the socket receives a message, replace the code on the screen with the new code. +In your consumer, add your consumer to a channel group when connecting (the group will contain all of the consumers that are accessing the page) +When a message is received, use group_send to send it to all the other consumers +When your consumer callback function gets called, send a message to your websocket
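+A rough sketch of such a consumer with Channels 2 (the group and event names are made up):
+from channels.generic.websocket import AsyncWebsocketConsumer
+class CodeConsumer(AsyncWebsocketConsumer):
+    async def connect(self):
+        await self.channel_layer.group_add('editors', self.channel_name)
+        await self.accept()
+    async def receive(self, text_data=None, bytes_data=None):
+        # fan the new text out to every consumer in the group
+        await self.channel_layer.group_send('editors', {'type': 'code.update', 'text': text_data})
+    async def code_update(self, event):
+        await self.send(text_data=event['text'])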
",0.3869120172231254,False,1,6000 +2019-03-14 20:28:27.727,Operating system does not meet the minimum requirements of the language server,"I installed Python 3.7.2 and VSCode 1.32.1 on Mac OS X 10.10.2. In VSCode I installed the Python extension and got a message saying: +""Operating system does not meet the minimum requirements of the language server. Reverting to alternative, Jedi"". +When clicking the ""More"" option under the message I got information indicating that I need OS X 10.12, at least. +I tried to install an older version of the extension, did some reading here and asked Google, but I'm having a hard time since I don't really know what vocabulary to use. +My questions are: +Will the extension work despite the error message? +Do I need to solve this, and how do I do that?","The extension will work without the language server, but some things won't work quite as well (e.g. auto-complete and some refactoring options). Basically if you remove the ""python.jediEnabled"" setting -- or set it to false -- and the extension works fine for you then that's the important thing. :)",1.2,True,1,6001 +2019-03-17 20:51:04.933,What is the preferred way to a add a citation suggestion to python packages?,"How should developers indicate how users should cite the package, other than in the documentation? +R packages return the preferred citation using citation(""pkg""). +I can think of pkg.CITATION, pkg.citation and pkg.__citation__. Are there others? If there is no preferred way (which seems to be the case to me, as I did not find anything on python.org), what are the pros and cons of each?","Finally I opted for the dunder option. Only the dunder option (__citation__) makes it clear that this is not a normal variable needed at runtime. +Yes, dunder names should not be overused, because python might claim them at a later time. But if python is going to use __citation__, then it will be for a similar purpose. Also, I deem the relative costs higher with the other options.",1.2,True,1,6002 +2019-03-18 14:05:53.610,How to see the full previous command in Pycharm Python console using a shortcut,"I was wondering how I could see the history in the Pycharm Python console using a shortcut. I can see the history using the up arrow key, but if I want to go further back in history I have to step through each individual line if multiple lines were run at a time. Is it possible that each time I press a button, the full previous command that was run is shown? +I don't want to search in history; I want to go back in history similar to using the up arrow key, but each time I press arrow up I want to see the previous full code that was run.","Go to preferences -> Appearance & Behaviour -> Keymap. You can search for ""Browse Console History"" and add a keyboard shortcut with right click -> Add Keyboard Shortcut.",0.0,False,1,6003 +2019-03-18 17:28:51.877,Python how to to make set of rules for each class in a game,"In C# we have get/set to make rules, but I don't know how to do this in Python. +Example: +Orcs can only equip Axes, other weapons are not eligible +Humans can only equip swords, other weapons are not eligible. +How can I tell Python that an Orc cannot do something like in the example above? +Thanks for answers in advance, hope this made any sense to you guys.","The Python language doesn't have an effective mechanism for restricting access to an instance or method. There is a convention though, to prefix the name of a field/method with an underscore to simulate ""protected"" or ""private"" behavior. +But, all members in a Python class are public by default.",0.0,False,1,6004 +2019-03-18 18:58:53.487,"Regex to get key words, all digits and periods","My input text looks like this: +Put in 3 extenders but by the 4th floor it is weak on signal these don't piggy back of each other. ST -99. 5G DL 624.26 UP 168.20 4g DL 2 + Up .44 +I am having difficulty writing a regex that will match any instances of 4G/5G/4g/5g and give me all the corresponding measurements after the instances of these codes, which are numbers with decimals. +The output should be: +5G 624.26 168.20 4g 2 .44 +Any thoughts on how to achieve this? I am trying to do this analysis in Python.","I would separate it into different capture groups like this: +(?i)(?P<g1>5?4?G)\sDL\s(?P<g2>[^\s]*)\sUP\s(?P<g3>[^\s]*) +(?i) makes the whole regex case insensitive +(?P<g1>5?4?G) is the first group, matching either 4g, 5g, 4G or 5G. +(?P<g2>[^\s]*) and (?P<g3>[^\s]*) are the second and third groups, matching everything that is not a space.
+Then in Python you can do: +match = re.match('(?i)(?P<g1>5?4?G)\sDL\s(?P<g2>[^\s]*)\sUP\s(?P<g3>[^\s]*)', input) +And access each group like so: +match.group('g1') etc.",0.1352210990936997,False,1,6005 +2019-03-19 03:35:02.983,"In Zapier, how do I get the inputs to my Python ""Run Code"" action to be passed in as lists and not joined strings?","In Zapier, I have a ""Run Python"" action triggered by a ""Twitter"" event. One of the fields passed to me by the Twitter event is called ""Entities URLs Display URL"". It's the list of anchor texts of all of the links in the tweet being processed. +Zapier is passing this value into my Python code as a single comma-separated string. I know I can use .split(',') to get a list, but this results in ambiguity if the original strings contained commas. +Is there some way to get Zapier to pass this sequence of strings into my code as a sequence of strings rather than as a single joined-together string?","David here, from the Zapier Platform team. +At this time, all inputs to a code step are coerced into strings due to the way data is passed between zap steps. This is a great request though and I'll make a note of it internally.",0.6730655149877884,False,1,6006 +2019-03-19 07:09:31.563,"Where is the tesseract executable file located on MacOS, and how to define it in Python?","I have written code using pytesseract, and whenever I run it, I get this error: +TesseractNotFoundError: tesseract is not installed or it's not in your path +I have installed tesseract using HomeBrew and also pip installed it.","If installed with Homebrew, it will be located in /usr/local/bin/tesseract by default. To verify this, run which tesseract in the terminal as Dmitrrii Z. mentioned. +If it's there, you can set it up in your python environment by adding the following line to your python script, after importing the library: +pytesseract.pytesseract.tesseract_cmd = r'/usr/local/bin/tesseract'",0.6730655149877884,False,1,6007 +2019-03-19 09:10:22.167,Call function from file that has already imported the current file,"If I have the files frame.py and bindings.py, both with the classes Frame and Bindings respectively inside of them, I import the bindings.py file into frame.py by using from bindings import Bindings, but how do I go about importing the frame.py file into my bindings.py file? If I use import frame or from frame import Frame I get the error ImportError: cannot import name 'Bindings' from 'bindings'. Is there any way around this without restructuring my code?",Instead of using from bindings import Bindings try import bindings.,0.0,False,1,6008 +2019-03-20 10:03:02.943,How to only enter a date that is a weekday in Python,"I'm creating a web application in Python and I only want the user to be able to enter a weekday that is older than today's date. I've had a look at isoweekday() for example, but I don't know how to integrate it into a flask form. The form currently looks like this: +appointment_date = DateField('Appointment Date', format='%Y-%m-%d', validators=[DataRequired()]) +Thanks","If you just want a weekday, you should put a select or a textbox, not a date picker. +If you put a select, you can disable the days before today, so you don't even need validation.
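+If you do keep the DateField, a hedged sketch of a custom WTForms validator for the weekday check could look like this (the validator name is made up):
+from wtforms.validators import ValidationError
+def weekday_only(form, field):
+    if field.data.isoweekday() > 5:          # 6 and 7 are Saturday and Sunday
+        raise ValidationError('Please pick a weekday')
+It would then go in the validators list next to DataRequired().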
",0.0,False,1,6009 +2019-03-20 23:43:33.130,Speed up access to python programs from Golang's exec packaqe,"I need suggestions on how to speed up access to python programs when called from Golang. I really need fast access time (very low latency). +Approach 1: +func main() { +... +... + cmd = exec.Command(""python"", ""test.py"") + o, err = cmd.CombinedOutput() +... +} +If my test.py file is a basic print ""HelloWorld"" program, the execution time is over 50ms. I assume most of the time is for loading the shell and python into memory. +Approach 2: +The above approach can be sped up substantially by having python start an HTTP server and then having the Go code POST an HTTP request and get the response from the HTTP server (python). This speeds up response times to less than 5ms. +I guess the main reason for this is probably that the python interpreter is already loaded and warm in memory. +Are there other approaches I can use, similar to approach 2 (shared memory, etc.), which could speed up the response from my python code? Our application requires extremely low latency, and the 50 ms I am currently seeing from using Golang's exec package is not going to cut it. +Thanks,","Approach 1: Simple HTTP server and client +Approach 2: Local socket or pipe +Approach 3: Shared memory +Approach 4: GRPC server and client +In fact, I prefer the GRPC method using streaming: it will hold the connection open (because of HTTP/2), and it's easy, fast and secure. It's also easy to move the python node to another machine.",0.0,False,1,6010 +2019-03-21 20:01:04.153,Python: Iterate through every pixel in an image for image recognition,"I'm a newbie in image processing and python in general. For an image recognition project, I want to compare every pixel with one another. For that, I need to create a program that iterates through every pixel, takes its value (for example ""[28, 78, 72]"") and creates some kind of values through comparing it to every other pixel. I did manage to access one single number in an array element/pixel (output: 28) through a bunch of for loops, but I just couldn't figure out how to access every number in every pixel, in every row. Does anyone know a good algorithm to solve my problem? I use OpenCV for reading in the image, by the way.","Comparing every pixel with a ""pattern"" can be done with convolution. You should take a look at the Haar cascade algorithm.",0.0,False,1,6011 +2019-03-21 20:38:04.357,numpy.savetxt() rounding values,"I'm using numpy.savetxt() to save an array, but it's rounding my values to the first decimal point, which is a problem. Does anyone have any clue how to change this?","You can set the precision by changing the fmt parameter. For example np.savetxt('tmp.txt', a, fmt='%1.3f') would leave you with an output with a precision of three decimal places.",0.3869120172231254,False,1,6012 +2019-03-22 03:06:43.583,Training SVM in Python with pictures,"I have basic knowledge of SVM, but now I am working with images. I have images in 5 folders; each folder, for example, has images for letters a, b, c, d, e. The folder 'a' has images of handwritten letters for 'a', folder 'b' has images of handwritten letters for 'b', and so on. +Now how can I use the images as my training data in SVM in Python?","As far as I understood, you want to train your SVM to classify these images into the classes named a, b, c, d. For that you can use any good image processing technique to extract features from your images (such as HOG, which is nicely implemented in opencv), and then use these features, together with the labels, as the input to your SVM training (the corresponding label for each image would be the name of its folder, i.e. a, b, c, d). You train your SVM on the features only, and at inference time you simply calculate the HOG feature of the image, feed it to your SVM, and it will give you the desired output.
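+A rough sketch of that pipeline with opencv and scikit-learn (folder handling is omitted; images and labels are assumed to be lists you built from the five folders):
+import cv2
+from sklearn.svm import SVC
+hog = cv2.HOGDescriptor()                    # default 64x128 detection window
+X = [hog.compute(cv2.resize(img, (64, 128))).ravel() for img in images]
+clf = SVC(kernel='linear').fit(X, labels)    # labels like 'a', 'b', 'c', ...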
",0.0,False,1,6013 +2019-03-22 12:32:50.940,How to execute script from container within another container?,"I have a containerized flask app with an external db that logs users into another site using selenium. Everything works perfectly on localhost. I want to deploy this app using containers, and found that a selenium container with google chrome inside could do the job. And my question is: how do I execute scripts/methods from the flask container in the selenium container? I tried to find some helpful info, but I didn't find anything. +Should I make an API call from the selenium container to the flask container? Is that the way, or maybe something different?","As far as I understood, you are trying to take your local implementation, which runs on your pc, and put it into two different docker containers. Then you want to make a call from the selenium container to your container containing the flask script which connects to your database. +In this case, you can think of your containers like two different computers. You can tell docker to create an internal network between these two containers and send the request via API call, like you suggested. But you are not limited to this approach; you can use any technique that works for two computers to exchange commands.",1.2,True,1,6014 +2019-03-22 21:15:34.407,Visual Studio doesn't work with Anaconda environment,"I downloaded VS2019 preview to try how it works with Python. +I use Anaconda, and VS2019 sees the Anaconda virtual environment; the terminal opens and works, but when I try to launch 'import numpy', for example, I receive this: +An internal error has occurred in the Interactive window. Please restart Visual Studio. Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll. The interactive Python process has exited. +Does anyone know how to fix it?","I had the same issue; this worked for me: +Try to add conda-env-root/Library/bin to the path in the run environment.",0.0,False,1,6015 +2019-03-24 17:23:41.657,Automatically filled field in model,"I have a model with a date field and a CharField with the choices New or Done, and I want to show a message for this model's objects in my API views if 2 conditions are met: the date is past and the status is New. But I really don't know how to resolve this. +I was thinking that maybe there is an option to make a field in the model that has choices and set a suitable choice if the conditions are fulfilled, but I didn't find any information on whether something like this is possible, so maybe someone has an idea how to resolve this?","You can override the save method of your model. The overridden method must check the condition and set the message. +Alternatively, you may set a signal receiver on the post_save signal that does the same as (1).
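+A hedged sketch of option (1), to be placed on the model itself (the status value and the message field are placeholders for whatever your model actually uses):
+from django.utils import timezone
+def save(self, *args, **kwargs):
+    if self.status == 'NEW' and self.date < timezone.now().date():
+        self.message = 'date is past'        # hypothetical field holding the message
+    super().save(*args, **kwargs)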
",0.0,False,1,6016 +2019-03-25 03:15:40.920,how to drop multiple (~5000) columns in the pandas dataframe?,"I have a dataframe with 5632 columns, and I only want to keep 500 of them. I have the column names (that I want to keep) in a dataframe as well, with the names as the row index. Is there any way to do this?","Let us assume your DataFrame is named df and you have a list cols of the column indices you want to retain. Then you should use: +df1 = df.iloc[:, cols] +This statement will drop all the columns other than the ones whose indices have been specified in cols. Use df1 as your new DataFrame.",0.0,False,1,6017 +2019-03-26 17:26:01.377,How to configure PuLP to call GLPK solver,"I am using the PuLP library in Python to solve an MILP problem. I have run my problem successfully with the default solver (CBC). Now I would like to use PuLP with another solver (GLPK). How do I set up PuLP with GLPK? +I have done some research online and found information on how to use GLPK (e.g. with lp_prob.solve(pulp.GLPK_CMD())) but haven't found information on how to actually set up PuLP with GLPK (or any other solver, for that matter) so that it finds my GLPK installation. I have already installed GLPK separately (but I didn't add it to my PATH environment variable). +I ran the command pulp.pulpTestAll() +and got: +Solver unavailable +I know that I should be getting a ""passed"" instead of an ""unavailable"" to be able to use it.","After reading the code in more detail and testing out some things, I finally found out how to use GLPK with PuLP, without changing anything in the PuLP package itself. +You need to pass the path as an argument to GLPK_CMD in solve as follows (replace with your glpsol path): +lp_prob.solve(GLPK_CMD(path = 'C:\\Users\\username\\glpk-4.65\\w64\\glpsol.exe')) +You can also pass options that way, e.g. +lp_prob.solve(GLPK_CMD(path = 'C:\\Users\\username\\glpk-4.65\\w64\\glpsol.exe', options = [""--mipgap"", ""0.01"", ""--tmlim"", ""1000""]))",1.2,True,2,6018 +2019-03-26 17:26:01.377,How to configure PuLP to call GLPK solver,"I am using the PuLP library in Python to solve an MILP problem. I have run my problem successfully with the default solver (CBC). Now I would like to use PuLP with another solver (GLPK). How do I set up PuLP with GLPK? +I have done some research online and found information on how to use GLPK (e.g. with lp_prob.solve(pulp.GLPK_CMD())) but haven't found information on how to actually set up PuLP with GLPK (or any other solver, for that matter) so that it finds my GLPK installation. I have already installed GLPK separately (but I didn't add it to my PATH environment variable). +I ran the command pulp.pulpTestAll() +and got: +Solver unavailable +I know that I should be getting a ""passed"" instead of an ""unavailable"" to be able to use it.","I had the same problem, but it was not related to the glpk installation; it was with the solution file creation, and the message is confusing. My problem was that I used numeric names for my variables, such as '0238' or '1342'. I added an 'x' before them, so they looked like 'x0238'.",0.2012947653214861,False,2,6018 +2019-03-26 23:03:52.333,Tower of colored cubes,"Consider a set of n cubes with colored facets (each one with a specific color out of 4 possible ones - red, blue, green and yellow). Form the highest possible tower of k cubes (k ≤ n) properly rotated (12 positions of a cube), so the lateral faces of the tower will have the same color, using an evolutionary algorithm. +What I did so far: +I thought that the following representation would be suitable: an Individual could be an array of n integers, each number having a value between 1 and 12, indicating the current position of the cube (an input file contains n lines; each line shows information about the color of each face of the cube).
+Then, the Population consists of multiple Individuals. +The Crossover method should create a new child (Individual), containing information from its parents (approximately half from each parent). +Now, my biggest issue is related to the Mutate and Fitness methods. +In the Mutate method, with some mutation probability (say 0.01), I should change the position of a random cube to another random position (for example, the third cube can have its position (rotation) changed from 5 to 12). +In the Fitness method, I thought that I could compare, two by two, the cubes from an Individual, to see if they have common faces. If they have a common face, a ""count"" variable will be incremented by the number of common faces, and if all 4 lateral faces are the same for these 2 cubes, the count will increase by a further number of points. After comparing all the adjacent cubes, the count variable is returned. Our goal is to obtain as many adjacent cubes having the same lateral faces as we can, i.e. to maximize the Fitness method. +My question is the following: +How can a rotation be implemented? I mean, if a cube changes its position (rotation) from 3 to 10, how do we know the new arrangement of the faces? Or, if I perform a mutation on a cube, what is the process of rotating this cube if a random rotation number is selected? +I think that I should create a vector of 6 elements (the colors of each face) for each cube, but when the rotation value of a cube is modified, I don't know in what manner the elements of its vector of faces should be rearranged. +Shuffling them is not correct, because by doing this, two opposite faces could become adjacent, meaning that the vector doesn't represent that particular cube anymore (obviously, two opposite faces cannot be adjacent).","First, I'm not sure how you get 12 rotations; I get 24: 4 orientations with each of the 6 faces on the bottom. Use a standard D6 (6-sided die) and see how many different layouts you get. +Apparently, the first thing you need to build is something (a class?) that accurately represents a cube in any of the available orientations. I suggest that you use a simple structure that can return the four faces in order -- say, front-right-back-left -- given a cube and the rotation number. +I think you can effectively represent a cube as three pairs of opposing sides. Once you've represented that opposition, the remaining organization is arbitrary numbering: any valid choice is isomorphic to any other. Each rotation will produce an interleaved sequence of two opposing pairs. For instance, a standard D6 has opposing pairs [(1, 6), (2, 5), (3, 4)]. The first 8 rotations would put 1 and 6 on the hidden faces (top and bottom), giving you the sequence 2354 in each of its 4 rotations and their reverses. +That class is one large subsystem of your problem; the other, the genetic algorithm, you seem to have well in hand. Stack all of your cubes randomly; ""fitness"" is a count of the most prevalent 4-show (sequence of 4 sides) in the stack. At the start, this will generally be 1, as nothing will match. +From there, you seem to have an appropriate handle on mutation. You might give a higher chance of mutating a non-matching cube, or perhaps see if some cube is a half-match: two opposite faces match the ""best fit"" 4-show, so you merely rotate it along that axis, preserving those two faces, and swapping the other pair for the top-bottom pair (note: there are two directions to do that).
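+A rough sketch of one possible encoding of that scheme (the pair ordering and spin numbering are arbitrary choices, not the only valid ones):
+def lateral_faces(pairs, rot):
+    # pairs: three opposing face pairs, e.g. [('red', 'green'), ('blue', 'yellow'), ('red', 'blue')]
+    axis, spin = divmod(rot, 8)              # 3 hidden-pair choices * 8 spins = 24 orientations
+    (a1, a2), (b1, b2) = [p for i, p in enumerate(pairs) if i != axis]
+    ring = [a1, b1, a2, b2]                  # interleaved sequence of the two visible pairs
+    if spin >= 4:
+        ring.reverse()                       # the reversed readings of the same ring
+    k = spin % 4
+    return ring[k:] + ring[:k]               # front-right-back-left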
+Does that get you moving?",0.0,False,1,6019 +2019-03-27 20:20:56.763,Airflow: How to download file from Linux to Windows via smbclient,"I have a DAG that imports data from a source to a server. From there, I am looking to download that file from the server to the Windows network. I would like to keep this part in Airflow for automation purposes. Does anyone know how to do this in Airflow? I am not sure whether to use the os package, the shutil package, or maybe there is a different approach.","I think you're saying you're looking for a way to get files from a cloud server to a windows shared drive or onto a computer in the windows network; these are some options I've seen used: +Use a service like google drive, dropbox, box, or s3 to simulate a synced folder on the cloud machine and a machine in the windows network. +Call a bash command to SCP the files to the windows server or a worker in the network. This could work in the opposite direction too. +Add the files to a git repository and have a worker in the windows network sync the repository to a shared location. This option is only good in very specific cases. It has the benefit that you can track changes and restore old states (if the data is in CSV or another text format), but it's not great for large files or binary files. +Use rsync to transfer the files to a worker in the windows network which has the shared location mounted, and move the files to the synced dir with python or bash. +Mount the network drive to the server and use python or bash to move the files there. +All of these should be possible with Airflow, either by using python (shutil) or a bash script to transfer the files to the right directory for some other process to pick up, or by calling a bash sub-process to perform the direct transfer by SCP or commit the data via git. You will have to find out what's possible with your firewall and network settings. Some of these would require coordinating tasks on the windows side (the git option, for example, would require some kind of cron job or task scheduler to pull the repository to keep the files up to date).
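+A hedged sketch of the SCP option as an Airflow task (the host, paths and the dag object are placeholders):
+from airflow.operators.bash_operator import BashOperator
+transfer = BashOperator(
+    task_id='scp_to_windows',
+    bash_command='scp /data/export.csv user@winhost:/shared/',
+    dag=dag)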
",0.0,False,1,6020 +2019-03-29 18:04:09.080,Python GTK+ 3: Is it possible to make background window invisible?,"Basically, I have this window with a bunch of buttons, but I want the background of the window to be invisible/transparent so the buttons are essentially floating. However, GTK seems to be pretty limited with CSS and I haven't found a way to do it yet. Is this even possible, and if so, how can I do it? Thanks. +Edit: Also, I'm using X11 forwarding.",For transparency Xorg requires a composite manager running on the X11 server. The compmgr program from Xorg is a minimal composite manager.,0.0,False,1,6021 +2019-03-30 18:02:10.470,Matplotlib with Pydroid 3 on Android: how to see graph?,"I'm currently using an Android device (from Samsung) with Pydroid 3. +I tried to view some graphs, but it doesn't work. +When I run the code, it just shows me a blank black screen temporarily and then goes back to the source code editing window +(meaning that I can't even see the terminal screen, which always shows me [Program Finished]). +Well, even the basic sample code which Pydroid gives me doesn't show me the graph :( +I've seen many tutorials which successfully showed graphs, but mine can't do those things. +Unfortunately, I cannot grab any errors. +I am using the same code which worked on Windows, so I don't think the code has a problem. +Of course, matplotlib is installed, and numpy is also installed. +If there are any possible problems, please let me know.","I also had this problem a while back, and managed to fix it by using plt.show() +at the end of the code, with matplotlib.pyplot imported as plt.",0.1016881243684853,False,3,6022 +2019-03-30 18:02:10.470,Matplotlib with Pydroid 3 on Android: how to see graph?,"I'm currently using an Android device (from Samsung) with Pydroid 3. +I tried to view some graphs, but it doesn't work. +When I run the code, it just shows me a blank black screen temporarily and then goes back to the source code editing window +(meaning that I can't even see the terminal screen, which always shows me [Program Finished]). +Well, even the basic sample code which Pydroid gives me doesn't show me the graph :( +I've seen many tutorials which successfully showed graphs, but mine can't do those things. +Unfortunately, I cannot grab any errors. +I am using the same code which worked on Windows, so I don't think the code has a problem. +Of course, matplotlib is installed, and numpy is also installed. +If there are any possible problems, please let me know.","After reinstalling, it worked. +The problem was that I had forced Pydroid to update matplotlib via the Terminal, not the official PIP tab. +The version of matplotlib was too high for pydroid.",1.2,True,3,6022 +2019-03-30 18:02:10.470,Matplotlib with Pydroid 3 on Android: how to see graph?,"I'm currently using an Android device (from Samsung) with Pydroid 3. +I tried to view some graphs, but it doesn't work. +When I run the code, it just shows me a blank black screen temporarily and then goes back to the source code editing window +(meaning that I can't even see the terminal screen, which always shows me [Program Finished]). +Well, even the basic sample code which Pydroid gives me doesn't show me the graph :( +I've seen many tutorials which successfully showed graphs, but mine can't do those things. +Unfortunately, I cannot grab any errors. +I am using the same code which worked on Windows, so I don't think the code has a problem. +Of course, matplotlib is installed, and numpy is also installed. +If there are any possible problems, please let me know.","You just need to add the line +plt.show() +Then it will work. You can also save the file before showing: +plt.savefig(""*imageName*.png"")",0.0,False,3,6022 +2019-03-31 02:36:13.693,"Accidentally used homebrew to change my default python to 3.7, how do I change it back to 2.7?","I was trying to install python 3 because I wanted to work on a project using python 3. Instructions I'd found were not working, so I boldly ran brew install python. Wrong move. Now when I run python -V I get ""Python 3.7.3"", and when I try to enter a virtualenv I get -bash: /Users/elliot/Library/Python/2.7/bin/virtualenv: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory +My ~/.bash_profile reads +export PATH=""/Users/elliot/Library/Python/2.7/bin:/usr/local/opt/python/libexec/bin:/Library/PostgreSQL/10/bin:$PATH"" +but ls /usr/local/Cellar/python/ gets me 3.7.3, so it seems like brew doesn't even know about my old 2.7 version anymore. +I think what I want is to reset my system python to 2.7, and then add python 3 as a separate python running on my system. I've been googling, but haven't found any advice on how to specifically use brew to do this. +Edit: I'd also be happy with keeping Python 3.7, if I knew how to make virtualenv work again.
I remember hearing that upgrading your system python breaks everything, but I'd be super happy to know if that's outdated knowledge and I'm just being a luddite hanging on to 2.7.","So, I got through it by completely uninstalling Python, which I'd been reluctant to do, and then reinstalled Python 2. I had to update my path and open a new shell to get it to see the new python 2 installation, and things fell into place. I'm now using pyenv for my Python 3 project, and it's a dream.",0.0,False,1,6023 +2019-03-31 06:41:35.623,How does one transfer python code written in a windows laptop to a samsung android phone?,"I created numerous python scripts on my pc laptop, and I want to run those scripts on my android phone. How can I do that? How can I move python scripts from my windows pc laptop, and use those python scripts on my samsung android phone? +I have downloaded qpython from the google playstore, but I still don't know how to get my pc python programs onto my phone. I heard some people talk about ""ftp"" but I don't even know what that means. +Thanks","You can use TeamViewer to control your android phone from your PC and copy and paste the code easily. +Or +you can transfer your scripts to your phone memory, into the qpython folder, and open them using qpython for android.",0.0,False,2,6024 +2019-03-31 06:41:35.623,How does one transfer python code written in a windows laptop to a samsung android phone?,"I created numerous python scripts on my pc laptop, and I want to run those scripts on my android phone. How can I do that? How can I move python scripts from my windows pc laptop, and use those python scripts on my samsung android phone? +I have downloaded qpython from the google playstore, but I still don't know how to get my pc python programs onto my phone. I heard some people talk about ""ftp"" but I don't even know what that means. +Thanks","Send them to yourself via email, then download the scripts onto your phone and run them through qpython. +However, you have to realize that not all python modules work on qpython, so your scripts may not work the same when you transfer them.",0.0,False,2,6024 +2019-04-01 16:47:12.417,how to find text before and after given words and output into different text files?,"I have a text file like this: +... + NAME : name-1 + ... + NAME : name-2 + ... + ... + ... + NAME : name-n + ... +I want output text files like this: +name_1.txt : NAME : name-1 ... + name_2.txt : NAME : name-2 ... + ... + name_n.txt : NAME : name-n ... +I have basic knowledge of grep, sed, awk, shell scripting, and python.","With GNU sed: +sed ""s/\(.*\)\(name-.*\)/echo '\1 \2' > \2.txt/;s/-/_/2e"" input-file +This turns the line NAME : name-2 into echo ""NAME : name-2"" > name-2.txt +Then it replaces the second - with _, yielding echo ""NAME : name-2"" > name_2.txt +The e flag has the shell run the command constructed in the pattern buffer. +This outputs blank lines to stdout, but creates a file for each matching line. +This depends on the file having nothing but lines matching this format... but you can expand the gist here to skip other lines with n.",0.0,False,1,6025 +2019-04-02 09:36:07.253,"Unable to parse the rows in ResultSet returned by connection.execute(), Python and SQLAlchemy","I have a task to compare the data of two tables in two different oracle databases. We have access to views in both DBs. Using SQLAlchemy, I am able to fetch rows from the views but unable to parse them.
+In one DB the type of the ID column is Raw. +In the DB where the column type is ""Raw"", below is the row I am getting from the resultset: +(b'\x0b\x975z\x9d\xdaF\x0e\x96>[Ig\xe0/', 1, datetime.datetime(2011, 6, 7, 12, 11, 1), None, datetime.datetime(2011, 6, 7, 12, 11, 1), b'\xf2X\x8b\x86\x03\x00K|\x99(\xbc\x81n\xc6\xd3', None, 'I', 'Inactive') +ID column data: b'\x0b\x975z\x9d\xdaF\x0e\x96>[_Ig\xe0/' +Actual data in the ID column in the database: F2588B8603004B7C9928BC816EC65FD3 +This data is not in complete hexadecimal format, as it has some special symbols like >|[_ etc. I want to know how I can parse the data in the ID column and get it as a string.",bytes.hex() solved the problem.,1.2,True,1,6026 +2019-04-02 12:30:37.360,How to install Python packages from python3-apt in PyCharm on Windows?,"I'm on Windows and want to use the Python package apt_pkg in PyCharm. +On Linux I get the package by doing sudo apt-get install python3-apt, but how do I install apt_pkg on Windows? +There is no such package on PyPI.",There is no way to run apt-get in Windows; the package format and the supporting infrastructure is very explicitly Debian-specific.,0.2012947653214861,False,1,6027 +2019-04-03 14:59:12.000,"“Close and Halt” feature does not functioning in jupyter notebook launched under Canopy on macOs High Sierra","When I am done with my work, I try to close my jupyter notebook via 'Close and Halt' under the file menu. However, it somehow is not functioning. +I am running the notebook from Canopy, version: 2.1.9.3717, under macOs High Sierra.","If you are running Jupyter notebook from Canopy, then the Jupyter notebook interface is not controlling the kernel; rather, Canopy's built-in ipython Qtconsole is. You can restart the kernel from the Canopy run menu.",0.3869120172231254,False,1,6028 +2019-04-03 17:51:59.123,Running an external Python script on a Django site,"I have a Python script which communicates with a financial site through an API. I also have a Django site; I would like to create a basic form on my site where I input something and, according to that input, my Python script should perform some operations. +How can I do this? I'm not asking for any code; I just would like to understand how to accomplish this. How can I ""run"" a python script on a Django project? Should I make my Django project communicate with the script through a post request? Or is there a simpler way?","I agree with @Daniel Roseman +If you are looking for your program to be faster, maybe multi-threading would be useful.",0.0,False,2,6029 +2019-04-03 17:51:59.123,Running an external Python script on a Django site,"I have a Python script which communicates with a financial site through an API. I also have a Django site; I would like to create a basic form on my site where I input something and, according to that input, my Python script should perform some operations. +How can I do this? I'm not asking for any code; I just would like to understand how to accomplish this. How can I ""run"" a python script on a Django project? Should I make my Django project communicate with the script through a post request? Or is there a simpler way?","Since you don't want code, and you didn't get detailed on everything required, here's my suggestion: +Make sure your admin.py file has editable fields for the model you're using. +Make an admin action. +Take the selected row with the values you entered, and run that action with the data you entered.
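+A hedged sketch of such an admin action (the model fields and the script entry point are placeholders):
+def run_script(modeladmin, request, queryset):
+    for row in queryset:
+        my_script.run(row.value)             # hypothetical entry point of your script
+run_script.short_description = 'Run the financial script on selected rows'
+# then add actions = [run_script] to the relevant ModelAdmin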
I would be more descriptive, but I'd need more details to do so.",0.3869120172231254,False,2,6029 +2019-04-04 02:39:46.887,Tracking any change in an table on SQL Server With Python,"How are you today? +I'm a newbie in Python. I'm working with SQL Server 2014 and Python 3.7. So, my issue is: when any change occurs in a table in the DB, I want to receive a message (or event, or something like that) on my server (a Web API, if you like that name). +I don't know how to do that with Python. +I have some prior experience: I worked with C# and SQL Server, and in that case I used the ""SQL Dependency"" method in C# to solve it. It's really good! +Is there something like that in Python? Many thanks for any ideas! +Thank you so much.","I do not know many things about SQL. But I guess there are tools for SQL to detect those changes. You could then create an everlasting loop thread using the multithreading package to capture those changes. (Remember to use time.sleep() to block your thread so that it won't occupy the CPU for too long.) Once you capture the change, you can call the function that you want to use. (Actually, you could design a simple event engine to do that.) I am a newbie in Computer Science and I hope my answer is correct and helpful. :)",0.0,False,1,6030 +2019-04-04 07:59:55.183,virtual real time limit (178/120s) reached,"I am using ubuntu version 16 and running the Odoo ERP system version 12.0. +In my application log file I see a message that says ""virtual real time limit (178/120s) reached"". +What exactly does it mean, and what damage can it cause to my application? +Also, how can I increase the virtual real time limit?","Open your config file and just add the parameter below: +--limit-time-real=100000",0.9866142981514304,False,1,6031 +2019-04-04 15:23:10.660,How to handle multiple major versions of dependency,"I'm wondering how to handle multiple major versions of a dependency library. +I have an open source library, Foo, at an early release stage. The library is a wrapper around another open source library, Bar. Bar has just launched a new major version. Foo currently only supports the previous version. As I'm guessing that a lot of people will be very slow to convert from the previous major version of Bar to the new major version, I'm reluctant to switch to the new version myself. +How is this best handled? As I see it I have these options: +Switch to the new major version, potentially denying people on the old version. +Keep going with the old version, potentially denying people on the new version. +Have two different branches, updating both branches for all new features. Not sure how this works with PyPI. Wouldn't I have to release at different version numbers each time? +Separate the repository into two parts. Don't really want to do this. +The ideal solution for me would be to have the same code base, where I could have some sort of C/C++ macro-like thing where if the version is new, use new_bar_function, else use old_bar_function. When installing the library from PyPI, the already installed version of the major version dictates which version is used. If no version is installed, install the newest. +Would much appreciate some pointers.","Have two different branches, updating both branches for all new features. Not sure how this works with PyPI. Wouldn't I have to release at different version numbers each time? +Yes, you could have a 1.x release (that supports the old version) and a 2.x release (that supports the new version) and release both simultaneously. This is a common pattern for packages that want to introduce a breaking change, but still want to continue maintaining the previous release as well.
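+If you do want the single-codebase shim from the question instead, a rough sketch could look like this (bar and the two function names stand in for the hypothetical dependency):
+import bar
+if bar.__version__.startswith('2.'):
+    from bar import new_bar_function as bar_function
+else:
+    from bar import old_bar_function as bar_function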
",0.2012947653214861,False,1,6032 +2019-04-05 16:28:56.133,How do apply Q-learning to an OpenAI-gym environment where multiple actions are taken at each time step?,"I have successfully used Q-learning to solve some classic reinforcement learning environments from OpenAI Gym (i.e. Taxi, CartPole). These environments allow for a single action to be taken at each time step. However I cannot find a way to solve problems where multiple actions are taken simultaneously at each time step. For example in the Roboschool Reacher environment, 2 torque values - one for each axis - must be specified at each time step. The problem is that the Q matrix is built from (state, action) pairs. However, if more than one action is taken simultaneously, it is not straightforward to build the Q matrix. +The book ""Deep Reinforcement Learning Hands-On"" by Maxim Lapan mentions this but does not give a clear answer, see quotation below. +Of course, we're not limited to a single action to perform, and the environment could have multiple actions, such as pushing multiple buttons simultaneously or steering the wheel and pressing two pedals (brake and accelerator). To support such cases, Gym defines a special container class that allows the nesting of several action spaces into one unified action. +Does anybody know how to deal with multiple actions in Q learning? +PS: I'm not talking about the issue ""continuous vs discrete action space"", which can be tackled with DDPG.","You can take one of two approaches - depending on the problem: +Think of the set of actions you need to pass to the environment as independent, and make the network output action values for each one (apply softmax separately) - so if you need to pass two actions, the network will have two heads, one for each axis. +Think of them as dependent and look at the Cartesian product of the sets of actions, and then make the network output a value for each combination - so if you have two actions that you need to pass and 5 options for each, the size of the output layer will be 5*5=25, and you just use softmax on that.",0.6730655149877884,False,1,6033 +2019-04-06 19:50:46.103,How to install python3.6 in parallel with python 2.7 in Ubuntu 18,"I am setting up to start python for data analytics and want to install python 3.6 in Ubuntu 18.0. Shall I run both versions in parallel or overwrite 2.7, and how? I am getting ambiguous methods when searching.",Try pyenv and/or pipenv . Both are excellent tools to maintain local python installations.,0.0,False,1,6034 +2019-04-07 08:00:53.180,how to display the month in from view ? (Odoo11),"Please, how do I display the month in the form? Example: I want to change 07/04/2019 into 07 april, 2019. +Thank you in advance","Try the following steps: +Go to Translations > Languages +Open the record with your current language. +Edit the date format to %d %B, %Y",0.3869120172231254,False,1,6035 +2019-04-07 14:08:01.350,How to fix print((double parentheses)) after 2to3 conversion?,"When migrating my project to Python 3 (2to3-3.7 -w -f print *), I observed that a lot of (but not all) print statements became print((...)), so these statements now print tuples instead of performing the expected behavior. I gather that if I'd used -p, I'd be in a better place right now because from __future__ import print_function is at the top of every affected module.
+I'm thinking about trying to use sed to fix this, but before I break my teeth on that, I thought I'd see if anyone else has dealt with this before. Is there a 2to3 feature to clean this up? +I do use version control (git) and have commits immediately before and after (as well as the .bak files 2to3 creates), but I'm not sure how to isolate the changes I've made from the print situations.",If your code already has print() functions you can use the -x print argument to 2to3 to skip the conversion.,0.6133572603953825,False,1,6036 +2019-04-08 06:39:59.923,"Windowed writes in python, e.g. to NetCDF","In python, how can I write subsets of an array to disk without holding the entire array in memory? +The xarray input/output docs note that xarray does not support incremental writes, only incremental reads, except by streaming through dask.array. (Also that modifying a dataset only affects the in-memory copy, not the connected file.) The dask docs suggest it might be necessary to save the entire array after each manipulation?","This can be done using netCDF4 (the python library of low-level NetCDF bindings). Simply assign to a slice of a dataset variable, and optionally call the dataset .sync() method afterward to ensure no delay before those changes are flushed to the file. +Note this approach also provides the opportunity to progressively grow a dimension of the array (by calling createDimension with size None, making it the first dimension of a variable, and iteratively assigning to incrementally larger indices along that dimension of the variable). +Although random-access window (i.e. subset) writes appear to require the lower-level package, more systematic subset writes (eventually covering the entire array) can be done incrementally with xarray (by specifying a chunk size parameter to trigger use of the dask.array backend), provided that your algorithm is refactored so that the main loop occurs in the dask/xarray store-to-file call. This means you will not have explicit control over the sequence in which chunks are generated and written.
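+A minimal sketch of the netCDF4 approach (the file name, dimension sizes and the compute_rows generator are made up):
+from netCDF4 import Dataset
+ds = Dataset('out.nc', 'w')
+ds.createDimension('t', None)                # unlimited dimension, grows as we assign
+ds.createDimension('x', 100)
+v = ds.createVariable('data', 'f4', ('t', 'x'))
+for t, row in enumerate(compute_rows()):     # hypothetical generator yielding arrays of length 100
+    v[t, :] = row                            # windowed write of one subset
+    ds.sync()                                # flush this chunk to disk
+ds.close()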
You can't (possibly shouldn't) try to keep people from getting inside devices they own and are in their physical possession. If you have your property under patent it shouldn't really matter if people are able to see the code as only you will be legally able to profit from it. +As a general piece of advice, code is really difficult to control access to. Trying to encrypt software or apply software keys to it or something like that is at best a futile attempt and at worst can often cause issues with software performance and usability. The best solution is often to link a piece of software with some kind of custom hardware device which is necessary and only you sell. That might not be possible here since you're using generic hardware but food for thought.",-0.3869120172231254,False,1,6038 +2019-04-08 15:02:18.437,How to classify unlabelled data?,I am new to Machine Learning. I am trying to build a classifier that classifies the text as having a url or not having a url. The data is not labelled. I just have textual data. I don't know how to proceed with it. Any help or examples is appreciated.,"Since it's text, you can use bag of words technique to create vectors. + +You can use cosine similarity to cluster the common type text. +Then use classifier, which would depend on number of clusters. +This way you have a labeled training set. + +If you have two cluster, binary classifier like logistic regression would work. +If you have multiple classes, you need to train model based on multinomial logistic regression +or train multiple logistic models using One vs Rest technique. + +Lastly, you can test your model using k-fold cross validation.",0.2012947653214861,False,1,6039 +2019-04-08 16:54:25.427,Django - how to visualize signals and save overrides?,"As a project grows, so do dependencies and event chains, especially in overridden save() methods and post_save and pre_save signals. +Example: +An overridden A.save creates two related objects to A - B and C. When C is saved, the post_save signal is invoked that does something else, etc... +How can these event chins be made more clear? Is there a way to visualize (generate automatically) such chains/flows? I'm not looking for ERD nor a Class diagram. I need to be sure that doing one thing one place won't affect something on the other side of the project, so simple visualization would be the best. +EDIT +To be clear, I know that it would be almost impossible to check dynamically generated signals. I just want to check all (not dynamically generated) post_save, pre_save, and overridden save methods and visualize them so I can see immediately what is happening and where when I save something.","(Too long to fit into a comment, lacking code to be a complete answer) +I can't mock up a ton of code right now, but another interesting solution, inspired by Mario Orlandi's comment above, would be some sort of script that scans the whole project and searches for any overridden save methods and pre and post save signals, tracking the class/object that creates them. It could be as simple as a series of regex expressions that look for class definitions followed by any overridden save methods inside. +Once you have scanned everything, you could use this collection of references to create a dependency tree (or set of trees) based on the class name and then topologically sort each one. Any connected components would illustrate the dependencies, and you could visualize or search these trees to see the dependencies in a very easy, natural way. 
I am relatively naive in Django, but it seems you could statically track dependencies this way, unless it is common for these methods to be overridden in multiple places at different times.",0.4540544406412981,False,1,6040 +2019-04-09 07:36:02.467,Capturing time between HTML form submit action and printing response,"I have a Python Flask application with an HTML form which accepts a few inputs from the user and uses those in a Python program, which returns the processed values back to the Flask application's return statement. +I wanted to capture the time taken for the whole processing and rendering of output data in the browser, but I am not sure how to do that. At present I have captured the time taken by the Python program to process the input values, but it doesn't account for the complete time between the ""submit"" action and rendering the output data.",Use an ajax request to submit the form. Record the time when the button is clicked and again after getting the response; then calculate the difference.,0.0,False,1,6041 +2019-04-09 09:15:48.837,"How to extract images from PDF or Word, together with the text around images?","I found there are some libraries for extracting images from PDF or Word, like docx2txt and pdfimages. But how can I get the content around the images (for example, there may be a title below the image)? Or get the page number of each image? +Some other tools like PyPDF2 and minecart can extract images page by page. However, I cannot run that code successfully. +Is there a good way to get some information about the images? (from the image obtained from docx2txt or pdfimages, or another way to extract images with their info)",docx2python pulls the images into a folder and leaves -----image1.png---- markers in the extracted text. This might get you close to where you'd like to go.,0.0,False,1,6042 +2019-04-09 18:46:58.267,What is this audio datatype and how do I convert it to wav/l16?,"I am recording audio in a web browser and sending it to a Flask backend. From there, I want to transcribe the audio using Watson Speech to Text. I cannot figure out what data format I'm receiving the audio in, and how to convert it to a format that works for Watson. +I believe Watson expects a bytestring like b'\x0c\xff\x0c\xffd. The data I receive from the browser looks like [ -4 -27 -34 -9 1 -8 -1 2 10 -28], which I can't directly convert to bytes because of the negative values (using bytes() gives me that error). +I'm really at a loss for what kind of conversion I need to be making here. Watson doesn't return any errors for any kind of data I throw at it; it just doesn't respond.","Those values should be fine, but you have to define how you want them stored before getting the bytes representation of them. +You'd simply want to convert those values to signed 2-byte/16-bit integers, then get the bytes representation of those.",1.2,True,1,6043 +2019-04-09 19:37:11.227,how do I implement ssim for loss function in keras?,"I need SSIM as a loss function in my network, but my network has 2 outputs. I need to use SSIM for the first output and cross-entropy for the next. The loss function is a combination of them. However, I need to have a higher SSIM and lower cross-entropy, so I think simply combining them isn't right. Another problem is that I could not find an implementation of SSIM in Keras. +Tensorflow has tf.image.ssim, but it accepts images, and I do not think I can use it in a loss function, right? Could you please tell me what I should do?
I am a beginner in Keras and deep learning, and I do not know how I can use SSIM as a custom loss function in Keras.","Another choice would be +ssim_loss = 1 - tf.reduce_mean(tf.image.ssim(target, output, max_val=self.max_val)) +then +combine_loss = mae (or mse) + ssim_loss +In this way, you are minimizing both of them.",0.0,False,1,6044 +2019-04-11 11:57:15.603,KMeans: Extracting the parameters/rules that fill up the clusters,"I have created a 4-cluster k-means customer segmentation in scikit-learn (Python). The idea is that every month, the business gets an overview of the shifts in size of our customers in each cluster. +My question is how to make these clusters 'durable'. If I rerun my script with updated data, the 'boundaries' of the clusters may slightly shift, but I want to keep the old clusters (even though they fit the data slightly worse). +My guess is that there should be a way to extract the parameters that decide which case goes to its respective cluster, but I haven't found the solution yet. +I would appreciate any help","Got the answer in a different topic: +Just record the cluster means. Then when new data comes in, compare it to each mean and put it in the one with the closest mean.",0.3869120172231254,False,1,6045 +2019-04-11 13:25:10.313,how to count number of days via cron job in odoo 10?,"I am setting up a script in Odoo that counts down the number of days as each day passes. +How can I count the days passing, each day, until the end of the month? +For example: I have set two dates to find the days between them. I need a function which compares the number of days with each passing day. When the remaining days reach 0, it will call a cron job.","Write a scheduled action that runs Python code daily. The first thing that this code should do is check the number of days you talk about, and if it is 0, it should trigger whatever action is needed.",0.0,False,1,6046 +2019-04-12 04:46:43.223,How to add reply(child comments) to comments on feed in getstream.io python,"I am using getstream.io to create feeds. The user can follow feeds and add reactions like likes and comments. If a user adds a comment on a feed and another user wants to reply to the comment, how can I achieve this and also retrieve all replies to the comment?",you can add the child reaction by using reaction_id,0.0,False,1,6047 +2019-04-12 12:06:29.347,how to find the similarity between two documents,"I have tried using the similarity function of spacy to get the best matching sentence in a document. However it fails for bullet points because it considers each bullet as a sentence, and the bullets are incomplete sentences (eg sentence 1 ""password should be min 8 characters long , sentence 2 in form of a bullet "" 8 characters"").
It does not know it is referring to password, and so my similarity comes out very low.","Sounds to me like you need to do more text processing before attempting to use similarity. If you want bullet points to be considered part of a sentence, you need to modify your spaCy pipeline to treat them that way.",0.0,False,2,6048 +2019-04-12 12:06:29.347,how to find the similarity between two documents,"I have tried using the similarity function of spacy to get the best matching sentence in a document. However it fails for bullet points because it considers each bullet as a sentence, and the bullets are incomplete sentences (eg sentence 1 ""password should be min 8 characters long , sentence 2 in form of a bullet "" 8 characters""). +It does not know it is referring to password, and so my similarity comes out very low.","Bullets are considered, but the thing is it doesn't understand what ""8 characters"" is referring to, so I thought of finding the heading of the paragraph and replacing the bullets with it. +I found the headings using python docs, but it doesn't read bullets while reading the document. Is there a way I can read them using python docs? +Is there any way I can find the headings of a paragraph in spacy? +Is there a better approach for it",0.0,False,2,6048 +2019-04-12 13:48:58.717,Trying to Import Infoblox Module in Python,"I am trying to write some code in Python to retrieve some data from Infoblox. To do this I need to import the Infoblox module. +Can anyone tell me how to do this?","Before you can import infoblox you need to install it: + +open a command prompt (press the Windows button, then type cmd) +if you are working in a virtual environment, access it with activate yourenvname (otherwise skip this step) +execute pip install infoblox to install infoblox, then you should be fine +to test it from the command prompt, execute python, and then try executing import infoblox + +The same process works for basically every package.",0.0,False,1,6049 +2019-04-12 21:52:02.810,Why do I keep getting this error when trying to create a virtual environment with Python 3 on MacOS?,"So I'm following a book that's teaching me how to make a Learning Log using Python and the Django web framework. I was asked to go to a terminal and create a directory called ""learning_log"" and change the working directory to ""learning_log"" (did that with no problems). However, when I try to create the virtual environment, I get an error (seen at the bottom of this post). Why am I getting this error and how can I fix this to move forward in the book? +I already tried installing a virtualenv with pip and pip3 (as the book prescribed). I was then instructed to enter the command: +learning_log$ virtualenv ll_env +And I get: +bash: virtualenv: command not found +Since I'm using Python3.6, I tried: +learning_log$ virtualenv ll_env --python=python3 +And I still get: +bash: virtualenv: command not found +Brandons-MacBook-Pro:learning_log brandondusch$ python -m venv ll_env +Error: Command '['/Users/brandondusch/learning_log/ll_env/bin/python', '-Im', 'ensurepip', '--upgrade', '- +-default-pip']' returned non-zero exit status 1.","For Ubuntu: +The simple check is: if virtualenv --version returns something like virtualenv: command not found and which virtualenv prints nothing on the console, then virtualenv is not installed on your system. Please try to install it using pip3 install virtualenv or sudo apt-get install virtualenv, but the latter might install a slightly older version. +EDIT +For Mac: +For Mac, you need to install it using sudo pip install virtualenv after you have installed Python3 on your Mac.",0.0,False,2,6050 +2019-04-12 21:52:02.810,Why do I keep getting this error when trying to create a virtual environment with Python 3 on MacOS?,"So I'm following a book that's teaching me how to make a Learning Log using Python and the Django web framework. I was asked to go to a terminal and create a directory called ""learning_log"" and change the working directory to ""learning_log"" (did that with no problems). However, when I try to create the virtual environment, I get an error (seen at the bottom of this post). Why am I getting this error and how can I fix this to move forward in the book?
+I already tried installing a virtualenv with pip and pip3 (as the book prescribed). I was then instructed to enter the command: +learning_log$ virtualenv ll_env +And I get: +bash: virtualenv: command not found +Since I'm using Python3.6, I tried: +learning_log$ virtualenv ll_env --python=python3 +And I still get: +bash: virtualenv: command not found +Brandons-MacBook-Pro:learning_log brandondusch$ python -m venv ll_env +Error: Command '['/Users/brandondusch/learning_log/ll_env/bin/python', '-Im', 'ensurepip', '--upgrade', '- +-default-pip']' returned non-zero exit status 1.","I had the same error. I restarted my computer and tried it again, but the error was still there. Then I tried python3 -m venv ll_env and it moved forward.",0.0,False,2,6050 +2019-04-13 11:57:31.263,How do I calculate the similarity of a word or couple of words compared to a document using a doc2vec model?,"In gensim I have a trained doc2vec model; if I have a document and either a single word or two-three words, what would be the best way to calculate the similarity of the words to the document? +Do I just do the standard cosine similarity between them as if they were 2 documents? Or is there a better approach for comparing small strings to documents? +On first thought I could get the cosine similarity from each word in the 1-3 word string and every word in the document, taking the averages, but I don't know how effective this would be.","There's a number of possible approaches, and what's best will likely depend on the kind/quality of your training data and ultimate goals. +With any Doc2Vec model, you can infer a vector for a new text that contains known words – even a single-word text – via the infer_vector() method. However, like Doc2Vec in general, this tends to work better with documents of at least dozens, and preferably hundreds, of words. (Tiny 1-3 word documents seem especially likely to get somewhat peculiar/extreme inferred-vectors, especially if the model/training-data was underpowered to begin with.) +Beware that unknown words are ignored by infer_vector(), so if you feed it a 3-word document for which two words are unknown, it's really just inferring based on the one known word. And if you feed it only unknown words, it will return a random, mild initialization vector that's undergone no inference tuning. (All inference/training always starts with such a random vector, and if there are no known words, you just get that back.) +Still, this may be worth trying, and you can directly compare via cosine-similarity the inferred vectors from tiny and giant documents alike. +Many Doc2Vec modes train both doc-vectors and compatible word-vectors. The default PV-DM mode (dm=1) does this, or PV-DBOW (dm=0) if you add the optional interleaved word-vector training (dbow_words=1). (If you use dm=0, dbow_words=0, you'll get fast training, and often quite-good doc-vectors, but the word-vectors won't have been trained at all - so you wouldn't want to look up such a model's word-vectors directly for any purposes.) +With such a Doc2Vec model that includes valid word-vectors, you could also analyze your short 1-3 word docs via their individual words' vectors. You might check each word individually against a full document's vector, or use the average of the short document's words against a full document's vector. +Again, which is best will likely depend on other particulars of your need.
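As a very rough sketch of that last averaging option (names are illustrative, not from the question; assumes a model whose word-vectors were actually trained):
import numpy as np

def short_text_similarity(model, words, doc_tag):
    known = [w for w in words if w in model.wv]  # unknown words are skipped
    if not known:
        return None
    short_vec = np.mean([model.wv[w] for w in known], axis=0)
    doc_vec = model.docvecs[doc_tag]
    return float(np.dot(short_vec, doc_vec) /
                 (np.linalg.norm(short_vec) * np.linalg.norm(doc_vec)))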
For example, if the short doc is a query, and you're listing multiple results, it may be the case that query result variety – via showing some hits that are really close to single words in the query, even when not close to the full query – is as valuable to users as documents close to the full query. +Another measure worth looking at is ""Word Mover's Distance"", which works just with the word-vectors for a text's words, as if they were ""piles of meaning"" for longer texts. It's a bit like the word-against-every-word approach you entertained – but working to match words with their nearest analogues in a comparison text. It can be quite expensive to calculate (especially on longer texts) – but can sometimes give impressive results in correlating alternate texts that use varied words to similar effect.",1.2,True,1,6051 +2019-04-14 15:12:05.933,operations order in Python,"I have just now started learning Python from Learn Python 3 The Hard Way by Zed Shaw. In exercise 3 of the book, there was a problem to get the value of 100 - 25 * 3 % 4. The solution to this problem is already mentioned in the archives, in which the order of preference is given to * and % (from left to right). +I made a problem on my own to get the value of 100 - 25 % 3 + 4. The answer in the output is 103. +I just wrote: print (""the value of"", 100 - 25 % 3 + 4), which gave the output value 103. +If the % is given preference, 25 % 3 will give 3/4. Then how does the answer come out as 103? Do I need to mention any float command or something? +I would like to know how I can use these operations. Is there any pre-defined rule to solve these kinds of problems?","Actually, the % operator gives you the REMAINDER of the operation. +Therefore, 25 % 3 returns 1, because 25 / 3 = 8 and the remainder of this operation is 1. +This way, your operation 100 - 25 % 3 + 4 is the same as 100 - 1 + 4 = 103",1.2,True,2,6052 +2019-04-14 15:12:05.933,operations order in Python,"I have just now started learning Python from Learn Python 3 The Hard Way by Zed Shaw. In exercise 3 of the book, there was a problem to get the value of 100 - 25 * 3 % 4. The solution to this problem is already mentioned in the archives, in which the order of preference is given to * and % (from left to right). +I made a problem on my own to get the value of 100 - 25 % 3 + 4. The answer in the output is 103. +I just wrote: print (""the value of"", 100 - 25 % 3 + 4), which gave the output value 103. +If the % is given preference, 25 % 3 will give 3/4. Then how does the answer come out as 103? Do I need to mention any float command or something? +I would like to know how I can use these operations. Is there any pre-defined rule to solve these kinds of problems?",The % operator is used to find the remainder of a quotient. So 25 % 3 = 1 not 3/4.,0.0,False,2,6052 +2019-04-14 20:15:45.423,Comparing feature extractors (or comparing aligned images),"I'd like to compare ORB, SIFT, BRISK, AKAZE, etc. to find which works best for my specific image set. I'm interested in the final alignment of images. +Is there a standard way to do it? +I'm considering this solution: take each algorithm, extract the features, compute the homography and transform the image. +Now I need to check which transformed image is closer to the target template. +Maybe I can repeat the process with the target template and the transformed image and look for the homography matrix closest to the identity, but I'm not sure how to compute this closeness exactly.
And I'm not sure which algorithm I should use for this check; I suppose a fixed one. +Or I could do some pixel-level comparison between the images using a perceptual difference hash (dHash). But I suspect the resulting Hamming distance may not be very good for images that will be nearly identical. +I could blur them and do a simple subtraction, but that sounds quite weak. +Thanks for any suggestions. +EDIT: I have thousands of images to test. These are real world pictures. Images are of documents of different kinds, some with a lot of graphics, others mostly geometrical. I have about 30 different templates. I suspect different templates work best with different algorithms (I know the template in advance so I could pick the best one). +Right now I use cv2.matchTemplate to find some reference patches in the transformed images and I compare their locations to the reference ones. It works but I'd like to improve over this.","From your question, it seems like the task is not to compare the feature extractors themselves, but rather to find which type of feature extractor leads to the best alignment. +For this, you need two things: + +a way to perform the alignment using the features from different extractors +a way to check the accuracy of the alignment + +The algorithm you suggested is a good approach for doing the alignment. To check the accuracy, you need to know what a good alignment is. +You may start with an alignment you already know. And the easiest way to know the alignment between two images is if you made the inverse operation yourself. For example, starting with one image, you rotate it some amount, you translate/crop/scale or combine all these operations. Knowing how you obtained the image, you can obtain your ideal alignment (the one that undoes your operations). +Then, having the ideal alignment and the alignment generated by your algorithm, you can use one metric to evaluate its accuracy, depending on your definition of ""good alignment"".",1.2,True,1,6053 +2019-04-15 15:05:35.363,How to get access to django database from other python program?,"I have a Django project in which I can display records from a Raspberry Pi device. I had a MySQL database and I have sent records from the Raspberry Pi there. I can display them via my API, but I want to work on these records. I want to change this to the Django database, but I don't know how I can get access to the Django database, which is on a VPS server, from the Raspberry Pi device.","ALERT: THIS CAN LEAD TO SECURITY ISSUES +A Django database is no different from any other database. In this case it is a MySQL database. +The VPS server where the MySQL is must have a public IP, the MySQL must be listening on that IP (if the VPS has a public IP but MySQL is not listening/bound on that IP, it won't work) and the port of the MySQL open (default is 3306); then you can connect to that database from any program with the required configuration params (host, port, user, password,...). +I'm not a sysadmin expert, but having a MySQL on a public IP is a security hole. So the best approach IMO is to expose the operations you want to do via an API with Django.",1.2,True,1,6054 +2019-04-16 11:37:49.557,"What happens after entering ""flask run"" on a terminal under the project directory?","What happens after entering ""flask run"" on a terminal under the project directory? +How does the Python interpreter get the file flask/__main__.py and start running the project's code? +I know how Flask locates the app.
What I want to figure out is how the command-line instruction ""flask run"" gets flask/__main__.py booted up","flask is a Python script. Since you stated you are not a beginner, you should simply open the file (/usr/bin/flask) in your favorite text editor and start from there. There is no magic under the hood.",1.2,True,1,6055 +2019-04-17 08:02:38.757,what's the difference between airflow's 'parallelism' and 'dag_concurrency',"I can't understand the difference between dag_concurrency and parallelism. The documentation and some of the related posts here somehow contradict my findings. +The understanding I had before was that the parallelism parameter allows you to set the MAX number of global (across all DAGs) TaskRuns possible in airflow, and dag_concurrency means the MAX number of TaskRuns possible for a single DAG. +So I set the parallelism to 8 and dag_concurrency to 4 and ran a single DAG. And I found out that it was running 8 TIs at a time, but I was expecting it to run 4 at a time. + +How is that possible? +Also, if it helps, I have set the pool size to 10 or so for these tasks. But that shouldn't have mattered, as ""config"" parameters are given higher priority than the pool's, right?","The other answer is only partially correct: +dag_concurrency does not explicitly control tasks per worker. dag_concurrency is the number of tasks running simultaneously per dag_run. So if your DAG has a place where 10 tasks could be running simultaneously but you want to limit the traffic to the workers, you would set dag_concurrency lower. +The queues and pools settings also have an effect on the number of tasks per worker. +These settings are very important as you start to build large libraries of simultaneously running DAGs. +parallelism is the maximum number of tasks across all the workers and DAGs.",0.9866142981514304,False,1,6056 +2019-04-17 15:40:35.510,"how do i fix ""No module named 'win32api'"" on python2.7","I am trying to import win32api in Python 2.7.9. I did the ""pip install pypiwin32"" and made sure all the files were installed correctly (I have the win32api.pyd under ${PYTHON_HOME}\Lib\site-packages\win32). I also tried copying the files from C:\Python27\Lib\site-packages\pywin32_system32 to C:\Python27\Lib\site-packages\win32. I also tried restarting my PC after each of these steps, but nothing seems to work! I still get the error 'No module named 'win32api''","Well, turns out the answer is upgrading my Python to 3.6. +Python 2.7 seems too old to work with outside imports (I'm just guessing here, because it's not the first time I'm having an import problem). +Hope it helps :)",1.2,True,1,6057 +2019-04-17 15:55:52.440,paste code to Jupyter notebook without symbols,"I tried to paste a few lines of code from online sources with symbols like "">>>"". My question is how to paste without these symbols? +(Line by line works, but it will be very annoying if pasting a big project.) +Cheers","Go to Edit > Find and Replace, then find >>> and replace it with nothing. Enjoy :)",0.0,False,1,6058 +2019-04-17 19:02:36.187,How can python iterate over a set if no order is defined?,"So I notice that we say in Python that sets have no order or arrangement, although of course you can sort the list generated from a set. +So I was wondering how the iteration over a set is defined in Python. Does it just follow the sorted list ordering, or is there some other footgun that might crop up at some point?
+Thanks.","A temporary order is used to iterate over the set, but you can't reliably predict it (practically speaking, as it depends on the insertion and deletion history of the set). If you need a specific order, use a list.",1.2,True,1,6059 +2019-04-18 06:31:12.170,"How to triangulate a point in 3D space, given coordinate points in 2 images and extrinsic values of the camera","I'm trying to write a function that, when given two cameras, their rotation and translation matrices, focal point, and the coordinates of a point for each camera, will be able to triangulate the point into 3D space. Basically, given all the extrinsic/intrinsic values needed. +I'm familiar with the general idea: somehow create two rays and find the closest point that satisfies the least squares problem. However, I don't know exactly how to translate the given information into a series of equations for the coordinate point in 3D.","Assume you have two cameras -- camera 1 and camera 2. +For each camera j = 1, 2 you are given: + +The distance hj between its center Oj, (is ""focal point"" the right term? Basically the point Oj from which the camera is looking at its screen) and the camera's screen. The camera's coordinate system is centered at Oj, the Oj--->x and Oj--->y axes are parallel to the screen, while the Oj--->z axis is perpendicular to the screen. +The 3 x 3 rotation matrix Uj and the 3 x 1 translation vector Tj which transforms the Cartesian 3D coordinates with respect to the system of camera j (see point 1) to the world-coordinates, i.e. the coordinates with respect to a third coordinate system from which all points in the 3D world are described. +On the screen of camera j, which is the plane parallel to the plane Oj-x-y and at a distance hj from the origin Oj, you have the 2D coordinates (let's say the x,y coordinates only) of point pj, where the two points p1 and p2 are in fact the projected images of the same point P, somewhere in 3D, onto the screens of camera 1 and 2 respectively. The projection is obtained by drawing the 3D line between point Oj and point P and defining point pj as the unique intersection point of this line with the screen of camera j. The equation of the screen in camera j's 3D coordinate system is z = hj , so the coordinates of point pj with respect to the 3D coordinate system of camera j look like pj = (xj, yj, hj) and so the 2D screen coordinates are simply pj = (xj, yj) . + +Input: You are given the 2D points p1 = (x1, y1), p2 = (x2, y2) , the two cameras' focal distances h1, h2 , two 3 x 3 rotation matrices U1 and U2, two translation 3 x 1 vector columns T1 and T2 . +Output: The coordinates P = (x0, y0, z0) of point P in the world coordinate system. +One somewhat simple way to do this, avoiding homogeneous coordinates and projection matrices (which is fine too and more or less equivalent), is the following algorithm: + +Form Q1 = [x1; y1; h1] and Q2 = [x2; y2; h2] , where they are interpreted as 3 x 1 vector columns; +Transform P1 = U1*Q1 + T1 and P2 = U2*Q2 + T2 , where * is matrix multiplication, here it is a 3 x 3 matrix multiplied by a 3 x 1 column, giving a 3 x 1 column; +Form the lines X = T1 + t1*(P1 - T1) and X = T2 + t2*(P2 - T2) ; +The two lines from the preceding step 3 either intersect at a common point, which is the point P, or they are skew lines, i.e. they do not intersect but are not parallel (not coplanar).
+If the lines are skew lines, find the unique point X1 on the first line and the unique point X2 on the second line such that the vector X2 - X1 is perpendicular to both lines, i.e. X2 - X1 is perpendicular to both vectors P1 - T1 and P2 - T2. These two points X1 and X2 are the closest points on the two lines. Then point P = (X1 + X2)/2 can be taken as the midpoint of the segment X1 X2. + +In general, the two lines should pass very close to each other, so the two points X1 and X2 should be very close to each other.",0.0,False,1,6060 +2019-04-18 14:54:16.823,Understanding execution_date in Airflow,"I am running an airflow DAG and wanted to understand how the execution date gets set. This is the code I am running: +{{ execution_date.replace(day=1).strftime(""%Y-%m-%d"") }} +This always returns the first day of the month. This is the functionality that I want, but I just want to find a way to understand what is happening.","The reason this always returns the first of the month is that you are using replace to force the day to be the 1st of the month. Simply remove "".replace(day=1)"".",1.2,True,2,6061 +2019-04-18 14:54:16.823,Understanding execution_date in Airflow,"I am running an airflow DAG and wanted to understand how the execution date gets set. This is the code I am running: +{{ execution_date.replace(day=1).strftime(""%Y-%m-%d"") }} +This always returns the first day of the month. This is the functionality that I want, but I just want to find a way to understand what is happening.",execution_date returns a datetime object. You are using the replace method of that object to replace the 'day' with the first. Then you are outputting that to a string with the format method.,0.0,False,2,6061 +2019-04-19 10:22:36.757,"How to convert file .py to .exe, having Python from Anaconda Navigator? (in which command prompt should I write installation codes?)","I created a Python script (format .py) that works. +I would like to convert this file to .exe, to use it on a computer without Python installed. +How can I do this? +I have Python from Anaconda3. +What can I do? +Thank you! +I followed some instructions found here on Stackoverflow. +.I modified the Path in the 'Environment variables' in the Windows settings, edited to the Anaconda folder. +.I managed to install pip in the conda prompt (I guess). +Still, nothing is working. I don't know how to proceed and in general how to do things properly.","I personally use pyinstaller; it's available from pip. +But it will not really compile, it will just bundle. +The difference is that compiling means translating to real machine code, while bundling is creating a big exe file with all your libs and your Python interpreter. +Even if pyinstaller creates a bigger file and is slower than cython (at execution), I prefer it because it works all the time without extra work (except launching it).",1.2,True,1,6062 +2019-04-19 16:25:54.117,How can i install opencv in python3.7 on ubuntu?,"I have a Nvidia Jetson tx2 with the orbitty shield on it. +I got it from a friend who worked on it last year. It came with Ubuntu 16.04. I updated everything on it and I installed the latest Python 3.7 and pip. +I tried checking the version of OpenCV to see what I have, but when I do import cv2 it gives me: +Traceback (most recent call last): + File """", line 1, in +ModuleNotFoundError: No module named 'cv2' +Somehow besides Python 3.7 I have Python 2.7 and Python 3.5 installed. If I try to import cv2 on Python 2.7 and 3.5 it works, but in 3.7 it doesn't.
+Can you tell me how I can install the latest version of OpenCV in Python 3.7?",Does python-3.7 -m pip install opencv-python work? You may have to change the python-3.7 to whatever path/alias you use to open your own Python 3.7.,0.0,False,1,6063 +2019-04-20 21:40:23.537,Negative Feature Importance Value in CatBoost LossFunctionChange,"I am using CatBoost for a ranking task. I am using QueryRMSE as my loss function. I notice that for some features, the feature importance values are negative, and I don't know how to interpret them. +It says in the documentation that the i-th feature importance is calculated as the difference between loss(model with i-th feature excluded) - loss(model). +So a negative feature importance value means that feature makes my loss go up? +What does that suggest then?",Negative feature importance value means that feature makes the loss go up. This means that your model is not getting good use of this feature. This might mean that your model is underfit (not enough iterations and it has not used the feature enough) or that the feature is not good and you can try removing it to improve final quality.,1.2,True,1,6064 +2019-04-23 04:00:09.300,cv2 - multi-user image display,"Using Python and OpenCV, is it possible to display the same image to multiple users? +I am using cv2.imshow, but it only displays the image for the user that runs the code. +Thanks",I was able to display the images on another user/host by setting the DISPLAY environment variable of the X server to match the desired user's DISPLAY.,0.0,False,1,6065 +2019-04-23 06:19:55.700,Move turtle slightly closer to random coordinate on each update,"I'm doing a homework assignment and I want to know how I can move the turtle to a random location a small step at a time. Like, can I use turtle.goto() in slow motion? +Someone said I should use turtle.setheading() and turtle.forward(), but I'm confused about how to use setheading() when the destination is random. +I'm hoping the turtle could move half a radius (which is 3.5) toward that random spot each time I update the program.","Do you mean that you want to move a small step, stop, and repeat? If so, you can 'import time' and add 'time.sleep(0.1)' after each 'forward'",0.0,False,1,6066 +2019-04-23 15:17:02.483,Python append single bit to bytearray,"I have a bytearray containing some bytes; it currently looks like this (converted to ASCII): + +['0b1100001', '0b1100010', '0b1100011', '0b10000000'] + +I need to add a number of 0 bits to this. Is that possible, or would I have to add full bytes? If so, how do I do that?","Where do you need the bits added? Each element of your list, or an additional element that contains all 0's? +The former: +myList[0] = myList[0] * 2 # ASL +The latter: +myList.append(0b000000)",0.1352210990936997,False,1,6067 +2019-04-23 19:50:59.320,How to access _pycache_ directory,"I want to remove a folder, but I can't get into pycache to delete the pyc and pyo$ files. I have done it before, but I don't know how I did it.","If you want to remove your Python file artifacts, such as the .pyc and .pyo cache files, maybe you could try the following: + +Move into your project's root directory +cd +Remove Python file artifacts +find . -name '*.pyc' -exec rm -f {} + +find . -name '*.pyo' -exec rm -f {} + + +Hopefully that helps!",0.0,False,1,6068 +2019-04-26 07:18:59.020,numerical entity extraction from unstructured texts using python,"I want to extract numerical entities like temperature and duration mentioned in unstructured formats of texts using neural models like CRF using Python.
I would like to know how to proceed with the numerical extraction, as most of the examples available on the internet are for extracting specific words or strings. +Input: 'For 5 minutes there, I felt like baking in an oven at 350 degrees F' +Output: temperature: 350 + duration: 5 minutes","So far my research shows that you can treat numbers as words. +This raises an issue: learning 5 will be OK, but 19684 will be too rare to be learned. +One proposal is to convert numbers into words, e.g. ""nineteen thousand six hundred eighty four"", embedding each word. The inconvenience is that you are now learning a (minimum) 6-dimensional vector (one dimension per word). +Based on your usage, you can also embed 0 to 3000 with distinct ids, and say 3001 to 10000 will map to id 3001 in your dictionary, and then add one id in your dictionary for each 10x.",0.0,False,1,6069 +2019-04-26 14:50:51.197,Python Azure webjob passing parameters,"I have a Python WebJob living in Azure and I'm trying to pass parameters to it. +I've found documentation saying that I should be able to post the URL and add:?arguments={'arg1', 'arg2'} after it. +However, when I do that and then try to print(sys.argv) in my code, it's only printing the name of the Python file and none of the arguments I pass to it. +How do I get the arguments to pass to my Python code? I am also using a run.cmd in my Azure directory to trigger my Python code, if that makes a difference. +UPDATE: So I tested it in another script without the run.cmd, and that certainly is the problem. If I just do ?arguments=22 66 it works. So how do I pass parameters when I'm using a run.cmd file?","I figured it out: in the run.cmd file, you need to put ""%*"" after your script name and it will detect any arguments you passed in the URL.",1.2,True,1,6070 +2019-04-27 17:03:24.327,What happens after shutting down the PC via subprocess?,"I am trying to turn my PC off and restart it over LAN. +When getting one of the commands (turn off or restart), I execute one of the following: +subprocess.call([""shutdown"", ""-f"", ""-s"", ""-y""]) # Turn off +subprocess.call([""shutdown"", ""-f"", ""-r"", ""-t"", ""-c"", ""-y""]) # Restart +I'd like to inform the other side whether the process was successfully initiated, and whether the PC is in the desired state. +I know that it is possible to implement a function which will check if the PC is alive (which is a pretty good idea) several seconds after executing the commands, but how can one know how many seconds are needed? And what if the PC is shut down a moment after sending a message stating that it is still alive? +I'm curious to know: what really happens after those commands are executed? Will the script keep running until the task manager kills it? Will it stop running right after the command?","Programs like shutdown merely send a message to init (or whatever modern replacement) and exit immediately; it's up to it what happens next. Typical Unix behavior is to first shut down things like SSH servers (which probably doesn't kill your connection to the machine), then send SIGTERM to all processes, wait a few seconds (5 is typical) for signal handlers to run, and then send SIGKILL to any survivors. Finally, filesystems are unmounted and the hardware halt or reboot happens.
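So one practical ordering (a sketch only; the HTTP callback and names are assumptions, not part of the original setup) is to get the report out before handing control to the OS:
import subprocess
import requests  # assumed notification channel

def shutdown_with_notice(report_url):
    try:
        # get the message out first, while the network stack is still up
        requests.post(report_url, json={'status': 'shutting down'}, timeout=5)
    except requests.RequestException:
        pass  # don't let a failed notification block the shutdown
    subprocess.call(['shutdown', '-f', '-s', '-y'])  # returns almost immediately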
+While there's no guarantee that the first phase takes long enough for you to report successful shutdown execution, it generally will; if it's a concern, you can catch the SIGTERM to buy yourself those few extra seconds to get the message out.",1.2,True,1,6071 +2019-04-27 20:18:09.337,How can I create a vCard qrcode with pyqrcode?,"I am trying to generate a vCard QR code with the pyqrcode library, but I cannot figure out the way to do it. +I have read their documentation 5 times and it doesn't say anything about vCard, only about URL, and on the internet I could only find information about wifi. Does anybody know how I can do it? +I want to make a vCard QR code and afterward display it on a Django web page.","Let's say: +We've got two libraries: + +pyqrcode : QR reader / writer +vobject : vCard serializer / deserializer + +Flow: +a. Generate a QR img from ""some"" web site : +web site sends JSON info => get info from JSON and serialize using vobject to obtain a vCard string => pyqrcode.create(vcard string) +b. Show human readable info from QR img : +pyqrcode reads a QR img ( created from a. ) => deserialize using vobject to obtain a JSON => show info parsing JSON in the web site. +OR... after deserializing using vobject you can write a .vcard file",1.2,True,1,6072 +2019-04-28 15:35:56.843,Cloud SQL/NiFi: Connect to cloud sql database with python and NiFi,"So, I am doing an ETL process in which I use Apache NiFi as an ETL tool along with a PostgreSQL database from Google Cloud SQL to read csv files from GCS. As a part of the process, I need to write a query to transform data read from the csv file and insert it into the table in the Cloud SQL database. So, based on NiFi, I need to write a Python script to execute SQL queries automatically on a daily basis. But the question here is: how can I write a Python script to connect to the Cloud SQL database? What config should be done? I have read something about the Cloud SQL proxy, but can I just use a Cloud SQL instance's internal IP address, put it in some config file, and create some dbconnector out of it? +Thank you +Edit: I can connect to the Cloud SQL database from my VM using psql -h [CLOUD_SQL_PRIVATE_IP_ADDR] -U postgres, but I need to run a Python script for the ETL process, and there's a part of the process that needs to execute SQL. What I am trying to ask is how I can write a Python file that is used for executing the SQL, +e.g. in Python, query = 'select * from table ....' and then run +postgres.run_sql(query) which will execute the query. So how can I create this kind of executor?","I don't understand why you need to write any code in Python. I've done a similar process where I used GetFile (locally) to read a CSV file, parse and transform it, and then used ExecuteSQLRecord to insert the rows into a SQL server (running on a cloud provider). The DBCPConnectionPool needs to reference your cloud provider as per their connection instructions. This means the URL likely references something.google.com and you may need to open firewall rules using your cloud provider administration.",0.0,False,1,6073 +2019-04-29 12:55:44.847,"Keras / NN - Handling NaN, missing input","These days I'm trying to teach myself machine learning, and I'm going through some issues with my dataset. +Some of my rows (I work with csv files that I create with some JS script; I feel more confident doing that in JS) are empty, which is normal as I'm trying to build some guessing model, but the issue is that it results in having NaN values in my training set.
+My NN was not training, so I added a piece of code to remove them from my set, but now I have some issues where my model can't work with inputs of different sizes. +So my question is: how do I handle missing data? (I basically have 2 rows and can only have the value from 1, and can't merge them as it will not give good results.) +I can remove it from my set, which would reduce the accuracy of my model in the end. +PS: if needed I'll post some code when I come back home.","You need to have the same input size during training and inference. If you have a few missing values (a few %), you can always choose to replace the missing values by a 0 or by the average of the column. If you have more missing values (more than 50%) you are probably better off ignoring the column completely. Note that this is theoretical; the best way to make it work is to try different strategies on your data.",1.2,True,1,6074 +2019-04-30 15:36:33.210,Doc2Vec - Finding document similarity in test data,"I am trying to train a doc2vec model using training data, then find the similarity of every document in the test data to a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this. +I am currently using model.docvecs.most_similar(...). However, this function only finds the similarity of every document in the training data to a specific document in the test data. +I have tried manually comparing the inferred vector of a specific document in the test data with the inferred vectors of every other document in the test data using model.docvecs.n_similarity(inferred_vector.tolist(), testvectors[i].tolist()), but this returns KeyError: ""tag '-0.3502606451511383' not seen in training corpus/invalid"" as there are vectors not in the dictionary.","It turns out there is a function called similarity_unseen_docs(...) which can be used to find the similarity of 2 documents in the test data. +However, I will leave the question unsolved for now, as it is not very optimal since I would need to manually compare the specific document with every other document in the test data. Also, it compares the words in the documents instead of the vectors, which could affect accuracy.",0.0,False,2,6075 +2019-04-30 15:36:33.210,Doc2Vec - Finding document similarity in test data,"I am trying to train a doc2vec model using training data, then find the similarity of every document in the test data to a specific document in the test data using the trained doc2vec model. However, I am unable to determine how to do this. +I am currently using model.docvecs.most_similar(...). However, this function only finds the similarity of every document in the training data to a specific document in the test data. +I have tried manually comparing the inferred vector of a specific document in the test data with the inferred vectors of every other document in the test data using model.docvecs.n_similarity(inferred_vector.tolist(), testvectors[i].tolist()), but this returns KeyError: ""tag '-0.3502606451511383' not seen in training corpus/invalid"" as there are vectors not in the dictionary.","The act of training-up a Doc2Vec model leaves it with a record of the doc-vectors learned from the training data, and yes, most_similar() just looks among those vectors. +Generally, doing any operations on new documents that weren't part of training will require the use of infer_vector().
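As a rough sketch (test_docs_tokens and the cosine helper here are illustrative, not from the question):
import numpy as np

# infer a vector for every tokenized test document
test_vecs = [model.infer_vector(tokens) for tokens in test_docs_tokens]

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# similarity of the first test doc to all the others
sims = [cosine(test_vecs[0], v) for v in test_vecs[1:]]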
Note that such inference: + +ignores any unknown words in the new document +may benefit from parameter tuning, especially for short documents +is currently done just one document at a time in a single thread – so, acquiring inferred-vectors for a large batch of N-thousand docs can actually be slower than training a fresh model on the same N-thousand docs +isn't necessarily deterministic, unless you take extra steps, because the underlying algorithms use random initialization and randomized selection processes during training/inference +just gives you the vector, without loading it into any convenient storage-object for performing further most_similar()-like comparisons + +On the other hand, such inference from a ""frozen"" model can be parallelized across processes or machines. +The n_similarity() method you mention isn't really appropriate for your needs: it's expecting lists of lookup-keys ('tags') for existing doc-vectors, not raw vectors like you're supplying. +The similarity_unseen_docs() method you mention in your answer is somewhat appropriate, but just takes a pair of docs, re-calculating their vectors each time – somewhat wasteful if a single new document's doc-vector needs to be compared against many other new documents' doc-vectors. +You may just want to train an all-new model, with both your ""training documents"" and your ""test documents"". Then all the ""test documents"" get their doc-vectors calculated, and stored inside the model, as part of the bulk training. This is an appropriate choice for many possible applications, and indeed could learn interesting relationships based on words that only appear in the ""test docs"" in a totally unsupervised way. And there's not yet any part of your question that gives reasons why it couldn't be considered here. +Alternatively, you'd want to infer_vector() all the new ""test docs"", and put them into a structure like the various KeyedVectors utility classes in gensim - remembering all the vectors in one array, remembering the mapping from doc-key to vector-index, and providing an efficient bulk most_similar() over the set of vectors.",1.2,True,2,6075 +2019-04-30 22:47:27.537,How to run a python program using sourcelair?,"I'm trying to run a Python program in the online IDE SourceLair. I've written a line of code that simply prints hello, but I am embarrassed to say I can't figure out how to RUN the program. +I have the console, web server, and terminal available in the IDE already pulled up. I just don't know how to start the program. I've tried it on Mac OSX and Chrome OS, and neither works. +I don't know if anyone has experience with this IDE, but I can hope. Thanks!!","Can I ask you why you are using SourceLair? +Well, I just figured it out in about 2 mins... it's the same as using any other editor for Python. +All you have to do is run it in the terminal: python (nameoffile).py",0.2012947653214861,False,1,6076 +2019-05-01 22:51:50.737,How to implement Proximal Policy Optimization (PPO) Algorithm for classical control problems?,"I am trying to implement the clipped PPO algorithm for classical control tasks like keeping room temperature, charge of battery, etc. within certain limits. So far I've seen implementations in game environments only. My question is: are game environments and classical control problems different when it comes to the implementation of the clipped PPO algorithm?
If they are, help and tips on how to implement the algorithm for my case are appreciated.","I'm answering your question from a general RL point of view; I don't think the particular algorithm (PPO) makes any difference in this question. +I think there are no fundamental differences; both can be seen as discrete control problems. In a game you observe the state, then choose an action and act according to it, and receive a reward and the observation of the subsequent state. +Now if you take a simple control problem, instead of a game you probably have a simulation (or just a very simple dynamic model) that describes the behavior of your problem. For example the equations of motion for an inverted pendulum (another classical control problem). In some cases you might directly interact with the real system, not a model of it, but this is rare as it can be really slow, and the typical sample complexities of RL algorithms make learning on a real (physical) system less practical. +Essentially you interact with the model of your problem just the same way as you do with a game: you observe a state, take an action and act, and observe the next state. The only difference is that while in games the reward is usually pre-defined (some score or goal state), you probably need to define the reward function for your problem. But again, in many cases you also need to define rewards for games, so this is not a major difference either.",1.2,True,1,6077 +2019-05-03 22:13:05.803,Visualizing a frozen graph_def.pb,"I am wondering how to go about visualization of my frozen graph def. I need it to figure out my TensorFlow network's input and output nodes. I have already tried several methods to no avail, like the summarize graph tool. Does anyone have an answer for some things that I can try? I am open to clarifying questions, thanks in advance.",You can try to use TensorBoard. It is on the Tensorflow website...,0.0,False,1,6078 +2019-05-04 18:47:48.437,Python: how to create database and a GUI together?,"I am new to Python, and to train myself, I would like to use Python to build a database that would store information about wine - bottle, date, rating etc. The idea is that: + +I could use the database to add new wine entries +I could use the database to browse wines I have previously entered +I could run some small analyses + +The design I am thinking of is: + +Design the database with the Python package sqlite3 +Make a GUI built on top of the database with the package Tkinter, so that I can both enter new data and query the database if I want. + +My question is: would you recommend this design and these packages? Is it possible to build a GUI on top of a database? I know StackOverflow is more for specific questions rather than ""project design"" questions, so I would appreciate it if anyone could point me to forums that discuss project design ideas. +Thanks.","If it's just for you, sure there is no problem with that stack. +If I were doing it, I would skip Tkinter and build something using Flask (or Django.) Doing a web page as a GUI yields faster results, is less fiddly, and is more applicable to the job market.",0.0,False,1,6079 +2019-05-05 00:30:24.233,Pandas read_csv method can't get 'œ' character properly while using encoding ISO 8859-15,"I have some trouble reading with pandas a csv file which includes the special character 'œ'. +I've done some research and it appears that this character has been added to the ISO 8859-15 encoding standard.
+I've tried to specify this encoding standard to the pandas read_csv method, but it doesn't properly get this special character (I get a '☐' instead) in the resulting dataframe: +df= pd.read_csv(my_csv_path, "";"", header=None, encoding=""ISO-8859-15"") +Does someone know how I could get the right 'œ' character (or even better the string 'oe') instead of this? +Thanks a lot :)","Anyone have a clue? I've managed the problem by manually rewriting this special character before reading my csv with pandas, but that doesn't answer my question :(",0.0,False,1,6080 +2019-05-06 11:57:11.053,Using OpenCV with PyPy,"I am trying to run a Python script using OpenCV with PyPy, but all the documentation that I found didn't work. +The installation of PyPy went well, but when I try to run the script it says that it can't find OpenCV modules like 'cv2' for example, despite having cloned OpenCV for PyPy directly from a GitHub repository. +I would need to know how to do it exactly.","pip install opencv-python worked well for me on python 2.7, I can import and use cv2.",0.2012947653214861,False,1,6081 +2019-05-07 06:21:51.733,Getting ARERR 149 A user name must be supplied in the control record,"I have a SOAP URL; while running the URL through a browser I get a WSDL response. But when I try to call a method in the response using the required parameter list, it shows ""ARERR [149] A user name must be supplied in the control record"". I tried using PHP as well as Python, but I am getting the same error. +I searched this error and got information like this: ""The name field of the ARControlStruct parameter is empty. Supply the name of an AR System user in this field."" But nowhere did I see how to supply the user name parameter.","I got the solution for this problem. Following are the steps I followed to solve the issue (I have used ""zeep"", a 3rd party module, to solve this): + +Run the following command to understand the WSDL: + +python -mzeep wsdl_url + +Search for the string ""Service:"". Below that we can see our operation name. +For my operation I found the following entry: + +MyOperation(parameters..., _soapheaders={parameters: ns0:AuthenticationInfo}) +which clearly communicates that I have to pass parameters and an auth param using the kwarg ""_soapheaders"" +With that I came to know that I have to pass my authentication element as the _soapheaders argument to the MyOperation function. + +Created Auth Element: + +auth_ele = client.get_element('ns0:AuthenticationInfo') +auth = auth_ele(userName='me', password='mypwd') + +Passed the auth to my Operation: + +client.service.MyOperation('parameters..', _soapheaders=[auth])",0.0,False,1,6082 +2019-05-07 20:45:21.080,How to implement Breadth-First-Search non-recursively for a directed graph on python,"I'm trying to implement a BFS function that will print a list of nodes of a directed graph as visited using Breadth-First-Search traversal. The function has to be implemented non-recursively and it has to traverse through all the nodes in a graph, so if there are multiple trees it will print in the following way: +Tree 1: a, b +Tree 2: d, e, h +Tree 3: ..... +My main difficulty is understanding how to make the BFS function traverse through all the nodes if the graph has several trees, without reprinting previously visited nodes.","BFS is usually done with a queue. When you process a node, you push its children onto the queue. After processing the node, you process the next one in the queue.
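A minimal sketch (assuming the graph is an adjacency-list dict; names are illustrative). Restarting the outer loop from every unvisited node handles multiple trees, and the shared visited set prevents reprinting:
from collections import deque

def bfs_forest(graph):  # graph: dict mapping node -> list of successors
    visited = set()
    tree_num = 0
    for start in graph:
        if start in visited:
            continue  # already printed as part of an earlier tree
        tree_num += 1
        visited.add(start)
        queue = deque([start])
        order = []
        while queue:
            node = queue.popleft()
            order.append(str(node))
            for child in graph.get(node, ()):
                if child not in visited:
                    visited.add(child)
                    queue.append(child)
        print('Tree %d: %s' % (tree_num, ', '.join(order)))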
+This is by nature non-recursive.",0.0,False,1,6083 +2019-05-08 08:51:35.840,"How to kill tensorboard with Tensorflow2 (jupyter, Win)","sorry for the noob question, but how do I kill the Tensorflow PID? +It says: +Reusing TensorBoard on port 6006 (pid 5128), started 4 days, 18:03:12 ago. (Use '!kill 5128' to kill it.) +But I can not find any PID 5128 in the Windows task manager. Using '!kill 5128' within jupyter, the error returns that the command kill cannot be found. Using it in the Windows cmd or conda cmd does not work either. +Thanks for your help.","If you clear the contents of AppData/Local/Temp/.tensorboard-info, and delete your logs, you should be able to have a fresh start",0.9999665971563038,False,1,6084 +2019-05-08 11:29:27.797,how to extract line from a word2vec file?,"I have created a word2vec file and I want to extract only the line at position [0] +this is the word2vec file +`36 16 +Activity 0.013954502 0.009596351 -0.0002082094 -0.029975398 -0.0244055 -0.001624907 0.01995442 0.0050479663 -0.011549354 -0.020344704 -0.0113901375 -0.010574887 0.02007604 -0.008582828 0.030914625 -0.009170294 +DATABASED%GWC%5 0.022193532 0.011890317 -0.018219836 0.02621059 0.0029900416 0.01779779 -0.026217759 0.0070709535 -0.021979155 0.02609082 0.009237218 -0.0065825963 -0.019650755 0.024096865 -0.022521153 0.014374277 +DATABASED%GWC%7 0.021235622 -0.00062567473 -0.0045315344 0.028400827 0.016763352 0.02893731 -0.013499333 -0.0037113864 -0.016281538 0.004078895 0.015604254 -0.029257657 0.026601797 0.013721668 0.016954066 -0.026421601`","glove_model[""Activity""] should get you its vector representation from the loaded model. This is because glove_model is an object of type KeyedVectors and you can use key value to index into it.",1.2,True,1,6085 +2019-05-08 17:12:26.120,Handling many-to-many relationship from existing database using Django ORM,"I'm starting to work with Django, having already done some models, but always with a 'code-first' approach, so Django handled the table creation etc. Right now I'm integrating an already existing database with the ORM and I have encountered some problems. +The database has a lot of many-to-many relationships, so there are quite a few tables linking two other tables. I ran the inspectdb command to let Django prepare some models for me. I revised them; it did a rather good job guessing the fields and relations, but the thing is, I think I don't need those link tables in my models, because Django handles many-to-many relationships with ManyToManyField fields, but I want Django to use those link tables under the hood. +So my question is: should I delete the models for the link tables and add ManyToManyFields to the corresponding models, or should I somehow use these models? +I don't want to mess up the database structure; it's quite heavily populated. +I'm using Postgres 9.5, Django 2.2.",In many cases it doesn't matter. If you would like to keep the code minimal then m2m fields are a good way to go. If you don't control the database structure it might be worth keeping the inspectdb schema in case you have to do it again after schema changes that you don't control. If the m2m link tables can grow properties of their own then you need to keep them as models.,0.0,False,1,6086 +2019-05-08 20:10:18.047,"Is there a way to use the ""read_csv"" method to read the csv files in order they are listed in a directory?","I am plotting plots on one figure using matplotlib from csv files; however, I want the plots in order.
+2019-05-08 20:10:18.047,"Is there a way to use the ""read_csv"" method to read the csv files in order they are listed in a directory?","I am plotting plots on one figure using matplotlib from csv files; however, I want the plots in order. I want to somehow use the read_csv method to read the csv files from a directory in the order they are listed in, so that they are outputted in the same fashion.
+I want the plots listed under each other the same way the csv files are listed in the directory.","You could use os.listdir() to get all the files in the folder and then sort them in a certain way, for example by name (it would be enough to use the Python built-in sorted()). If you want fancier ordering you could retrieve both the name and last modified date and store them in a dictionary, order the keys and retrieve the values. So as @Fausto Morales said, it all only depends on which order you would like them to be sorted in.",1.2,True,1,6087
+2019-05-09 09:52:44.457,How to make a python script run forever online?,"I have a Python script that monitors a website, and I want it to send me a notification when some particular change happens to the website.
+My question is: how can I make that Python script run forever some place else (not my machine, because I want it to send me a notification even when my machine is off)?
+I have thought about RDP, but I wanted to have your opinions also.
+(PS: FREE service if it's possible, otherwise the lowest cost)
+Thank you!","I would suggest you set up an AWS EC2 instance with whatever OS you want.
+For beginners, you can get 750 hours of usage for free where you can run your script.",0.3869120172231254,False,1,6088
+2019-05-12 11:06:47.317,How to write binary file with bit length not a multiple of 8 in Python?,"I'm working on a tool that generates dummy binary files for a project. We have a spec that describes the real binary files, which are created from a stream of values with various bit lengths. I use input and spec files to create a list of values, and the bitstring library's BitArray class to convert the values and join them together.
+The problem is that the values' lengths don't always add up to full bytes, and I need the file to contain the bits as-is. Normally I could use BitArray.tofile(), but that method automatically pads the file with zeroes at the end.
+Is there another way to write the bits to a file?","You need to give padding to the, say, 7-bit value so it matches a whole number of bytes:
+1010101 (7 bits) --> 01010101
+1111 (4 bits) --> 00001111
+The padding of the most significant digits does not affect the data taken from the file.",0.0,False,1,6089
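A small hedged illustration of that padding idea with the bitstring library (file name is arbitrary; a file can only hold whole bytes, so the pad convention must be agreed with whoever reads it back):

    from bitstring import BitArray

    bits = BitArray(bin="1010101")             # 7 bits
    pad = (8 - len(bits) % 8) % 8              # bits needed to reach a byte boundary
    padded = BitArray(length=pad) + bits       # zero-pad the most significant side
    with open("dummy.bin", "wb") as f:
        padded.tofile(f)                       # exactly one byte: 01010101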
+The variables are posted to the server from PC A, and then PC B uses the GET method to get them locally.",1.2,True,1,6090
+2019-05-15 19:48:44.263,Pulling duration stats API in Airflow,"In Airflow, the ""Gantt"" chart offers quite a good view on the performance of the tasks that ran. It offers stats like start/end time, duration, etc.
+Do you guys know a way to programmatically pull these stats via the Airflow API? I would like to use these stats to generate periodic reports on the performance of my tasks and how it changes over time.
+My Airflow version is: 1.9
+Python: 3.6.3
+Running on top of Docker
+Thanks!
+Kelvin
+Airflow online documentation","One easy approach could be to set up a SQLAlchemy connection; Airflow stores all the data in there once the configuration is completed (dag info/stats/failures, task info/stats, etc.).
+Edit airflow.cfg and add:
+sql_alchemy_conn = mysql://------/table_name",1.2,True,1,6091
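Since that answer points at the metadata database, a hedged sketch of pulling per-task durations straight from Airflow's task_instance table; the connection string is a placeholder, and the column names match the 1.x metadata schema as far as I know:

    import sqlalchemy
    import pandas as pd

    # Same URI you put in sql_alchemy_conn; this one is made up.
    engine = sqlalchemy.create_engine("mysql://user:pwd@host/airflow")
    df = pd.read_sql(
        "SELECT dag_id, task_id, execution_date, start_date, end_date, duration "
        "FROM task_instance WHERE state = 'success'",
        engine,
    )
    print(df.groupby(["dag_id", "task_id"])["duration"].describe())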
+2019-05-15 23:51:31.380,How to use a Pyenv virtualenv from within Eclipse?,"I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say, pyenv shell myvenv.
+However I want Eclipse to make use of this virtualenv when building (via ""existing makefile"") from within Eclipse. Currently it runs my Makefile but uses the system python in /usr/bin/python, which is missing all of the packages needed by the build system.
+It isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH, however this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.
+I am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.
+I've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv.","For me, the following steps worked (macOS 10.12, Eclipse Photon, with the PyDev plugin):
+Project -> Properties
+PyDev-Interpreter/Grammar
+Click here to configure an interpreter not listed (under the interpreter combobox)
+Open interpreter preference page
+Browse for python/pypy exe -> my virtualenvdirectory/bin/python
+Then the chosen Python interpreter path should show (for me it was still not pointing to my virtual env, but I typed my path explicitly here and it worked)
+In the bottom libraries section, you should be able to see the site-packages from your virtual env
+Extra tip - In my macOS the virtual env was starting with .pyenv; since it's a hidden directory, I was not able to select it and I did not know how to view hidden directories in the Eclipse file explorer. Therefore I created a softlink (without any . in the name) to the hidden directory (.pyenv) and then I was able to select the softlink",-0.1352210990936997,False,3,6092
+2019-05-15 23:51:31.380,How to use a Pyenv virtualenv from within Eclipse?,"I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say, pyenv shell myvenv.
+However I want Eclipse to make use of this virtualenv when building (via ""existing makefile"") from within Eclipse. Currently it runs my Makefile but uses the system python in /usr/bin/python, which is missing all of the packages needed by the build system.
+It isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH, however this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.
+I am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.
+I've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv.",Typing CMD+SHIFT+. will show you dotfiles & directories that begin with a dot in any Mac finder dialog box...,-0.1352210990936997,False,3,6092
+2019-05-15 23:51:31.380,How to use a Pyenv virtualenv from within Eclipse?,"I am using Eclipse on Linux to develop C applications, and the build system I have makes use of make and python. I have a custom virtualenv installed and managed by pyenv, and it works fine from the command line if I pre-select the virtualenv with, say, pyenv shell myvenv.
+However I want Eclipse to make use of this virtualenv when building (via ""existing makefile"") from within Eclipse. Currently it runs my Makefile but uses the system python in /usr/bin/python, which is missing all of the packages needed by the build system.
+It isn't clear to me how to configure Eclipse to use a custom Python interpreter such as the one in my virtualenv. I have heard talk of setting PYTHONPATH, however this seems to be for finding site-packages rather than the interpreter itself. My virtualenv is based on python 3.7 and my system python is 2.7, so setting this alone probably isn't going to work.
+I am not using PyDev (this is a C project, not a Python project) so there's no explicit support for Python in Eclipse. I'd prefer not to install PyDev if I can help it.
+I've noticed that pyenv adds its plugins, shims and bin directories to PATH when activated. I could explicitly add these to PATH in Eclipse, so that Eclipse uses pyenv to find an interpreter. However I'd prefer to point directly at a specific virtualenv rather than use the pyenv machinery to find the current virtualenv.","I had the same trouble and after some digging, there are two solutions: project-wide and workspace-wide. I prefer the project-wide one as it will be saved in the git repository and the next person doesn't have to pull their hair out.
+For the project-wide one, add /Users/${USER}/.pyenv/shims: to the start of ""Project properties > C/C++ Build > Environment > PATH"".
+I couldn't figure out the other method fully (mostly because I'm happy with the other one) but it should be possible to modify ""Eclipse preferences > C/C++ > Build > Environment"". You should change the radio button and add a PATH variable.",0.0,False,3,6092
+2019-05-16 10:29:25.553,the clustering of mixed data using python,"I am trying to cluster a data set containing mixed data (nominal and ordinal) using k-prototype clustering based on Huang, Z.: Clustering large data sets with mixed numeric and categorical values.
+My question is: how do I find the optimal number of clusters?","There is not one optimal number of clusters. But dozens. Every heuristic will suggest a different ""optimal"" number for another poorly defined notion of what is ""optimal"" that likely has no relevance for the problem that you are trying to solve in the first place.
+Rather than being overly concerned with ""optimality"", rather explore and experiment more. Study what you are actually trying to achieve, and how to get this into mathematical form to be able to compute what is solving your problem, and what is solving someone else's...",0.0,False,1,6093
+2019-05-17 07:14:43.650,"How to predict different data via neural network, which is trained on the data with 36x60 size?","I was training a neural network with images of an eye that are shaped 36x60. So I can only predict the result using a 36x60 image? But in my application I have a video stream; this stream is divided into frames, and for each frame 68 landmark points are predicted. In the eye range, I can select the eye points, and using the 'boundingrect' function from OpenCV, it is very easy to get a cropped image. But this image does not have the shape 36x60. What is the correct way to get 36x60 data that can be used for prediction? Or how can I use a neural network on data of another shape?","Neural networks (insofar as I've encountered) have a fixed input shape, freedom permitted only to batch size. This (probably) goes for every amazing neural network you've ever seen. Don't be too afraid of reshaping your image with off-the-shelf sampling to the network's expected input size. Robust computer-vision networks are generally trained on augmented data; randomly scaled, skewed, and otherwise transformed in order to---among other things---broaden the network's ability to handle this unavoidable scaling situation.
+There are caveats, of course. An input for prediction should be as similar to the dataset it was trained on as possible, which is to say that a model should be applied to the data for which it was designed. For example, consider an object detection network made for satellite applications. If that same network is then applied to drone imagery, the relative size of objects may be substantially larger than the objects for which the network (specifically its anchor-box sizes) was designed.
+Tl;dr: Assuming you're using the right network for the job, don't be afraid to scale your images/frames to fit the network's inputs.",1.2,True,1,6094
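Following the advice above, a hedged sketch of bringing an arbitrary eye crop to the 36x60 shape the network was trained on; crop and model are assumed to exist, cv2.resize takes (width, height), and the extra batch/channel axes are an assumption about the model's input layout:

    import cv2
    import numpy as np

    # crop: the eye region cut out with boundingRect, at whatever size it came in.
    resized = cv2.resize(crop, (60, 36))        # -> 36 rows x 60 columns
    x = resized.astype(np.float32) / 255.0      # use the same scaling as training
    x = x.reshape(1, 36, 60, 1)                 # add batch and channel axes
    prediction = model.predict(x)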
+In the custom addon you have to specify a cron file which will automatically fire some action on a regular basis, and which will send an email notification to the HR manager that some employees' contracts are going to expire.",0.0,False,1,6095
+2019-05-17 14:56:30.123,User word2vec model output in larger kmeans project,"I am attempting a rather large unsupervised learning project and am not sure how to properly utilize word2vec. We're trying to cluster groups of customers based on some stats about them and what actions they take on our website. Someone recommended I use word2vec and treat each action a user takes as a word in a ""sentence"". The reason this step is necessary is because a single customer can create multiple rows in the database (roughly the same stats, but a new row for each action on the website in chronological order). In order to perform kmeans on this data we need to get that down to one row per customer ID. Hence the previous idea to collapse down the actions as words in a sentence ""describing the user's actions"".
+My question is: I've come across countless tutorials and resources online that show you how to use word2vec (combined with kmeans) to cluster words on their own, but none of them show how to use the word2vec output as part of a larger kmeans model. I need to be able to use the word2vec model alongside other values about the customer. How should I go about this? I'm using python for the clustering if you want to be specific with coding examples, but I could also just be missing something super obvious and high level. It seems the word2vec outputs vectors, but kmeans needs straight numbers to work, no? Any guidance is appreciated.","There are two common approaches.
+Taking the average of all words. That is easy, but the resulting vectors tend to be, well, average. They are not similar to the keywords of the document, but rather similar to the most average and least informative words... My experiences with this approach are pretty disappointing, despite this being the most mentioned approach.
+par2vec/doc2vec. You add a ""word"" for each user to all its contexts, in addition to the neighbor words, during training. This way you get a ""predictive"" vector for each paragraph/document/user the same way you get a word in the first word2vec. These are supposedly more informative but require much more effort to train - you can't download a pretrained model because they are computed during training.",0.0,False,1,6096
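A hedged sketch of the second approach with gensim's Doc2Vec, tagging each action sequence with its customer id so every customer ends up with one fixed-length vector that can be fed to k-means alongside other features; the session data is invented:

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # One "sentence" of website actions per customer (made-up data).
    sessions = {"cust_1": ["login", "search", "add_to_cart"],
                "cust_2": ["login", "browse", "logout"]}
    docs = [TaggedDocument(words=actions, tags=[cid])
            for cid, actions in sessions.items()]

    model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=40)
    customer_vec = model.dv["cust_1"]   # model.docvecs in gensim 3.x
    # Concatenate customer_vec with the customer's other stats, then run k-means.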
Once the depth is known, after taking into account the effects of the lens projection, an estimation can be made.
+You do not give much information about your setup, but the transformation can be measured experimentally. You simply take a picture of an object of known dimensions and you determine the physical dimension of one pixel (e.g. if the object is 10x10 cm and in the picture it has 100x100 px, then 10 px is 1 mm). This is strongly dependent on the distance of the object from the camera.
+An approach a bit more automated is to use a certain pattern (e.g. a checkerboard) of known dimensions. It can be automatically detected in the image and the same transformation can be performed.",0.0,False,1,6097
+2019-05-19 00:17:45.020,How to run Airflow dag with more than 100 thousand tasks?,"I have an airflow DAG that has over 100,000 tasks.
+I am able to run only up to 1000 tasks. Beyond that the scheduler hangs, the webserver cannot render tasks and is extremely slow on the UI.
+I have tried increasing the min_file_process_interval and processor_poll_interval config params.
+I have set num_duration to 3600 so that the scheduler restarts every hour.
+Are there any limits I'm hitting on the webserver or scheduler? In general, how do I deal with a large number of tasks in Airflow? Any config settings, etc. would be very helpful.
+Also, should I be using SubDagOperator at this scale or not? Please advise.
+Thanks,","I was able to run more than 165,000 airflow tasks!
+But there's a catch. Not all the tasks were scheduled and rendered in a single Airflow Dag.
+The problems I faced when I tried to schedule more and more tasks are those of the scheduler and webserver.
+The memory and cpu consumption on the scheduler and webserver dramatically increased as more and more tasks were being scheduled (it is obvious and makes sense). It went to a point where the node couldn't handle it anymore (the scheduler was using over 80GB memory for 16,000+ tasks).
+I split the single dag into 2 dags. One is a leader/master. The second one being the worker dag.
+I have an airflow variable that says how many tasks to process at once (for example, num_tasks=10,000). Since I have over 165,000 tasks, the worker dag will process 10k tasks at a time in 17 batches.
+The leader dag, all it does is trigger the same worker dag over and over with different sets of 10k tasks and monitor the worker dag run status. The first trigger operator triggers the worker dag for the first set of 10k tasks and keeps waiting until the worker dag completes. Once it's complete, it triggers the same worker dag with the next batch of 10k tasks and so on.
+This way, the worker dag keeps being reused and never has to schedule more than X num_tasks.
+The bottom line is, figure out the max number of tasks your Airflow setup can handle. And then launch the dags in leader/worker fashion for max_tasks over and over again until all the tasks are done.
+Hope this was helpful.",1.2,True,1,6098
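A hedged sketch of the leader/worker pattern that answer describes, using Airflow's TriggerDagRunOperator; the dag ids and the batching variable are illustrative, and exact import paths and operator arguments vary between Airflow versions:

    from airflow.operators.dagrun_operator import TriggerDagRunOperator
    from airflow.models import Variable

    # Batch size kept in an Airflow variable, as in the answer above.
    num_tasks = int(Variable.get("num_tasks", default_var=10000))

    # Inside the leader DAG: kick off the worker dag for one batch of tasks.
    trigger = TriggerDagRunOperator(
        task_id="trigger_worker_batch",
        trigger_dag_id="worker_dag",
        dag=leader_dag,  # leader_dag is assumed to be defined elsewhere
    )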
Is this possible? If so, how can I implement this?","As another answer described very well, this is not possible; there is no way to do it.
+What you can and often should do is prevent another lower priority thread from entering this critical section first, before the high priority thread. I.e. if a critical section is being held by some thread, this thread needs to exit it first. But by that time there might be multiple threads waiting for this critical section, some low and some high priority. You may want to ensure the higher priority thread gets the critical section first in such a situation.",0.3869120172231254,False,1,6099
+2019-05-19 21:14:23.120,Replace os.system with os.popen for security purposes,"Is it possible to use os.popen() to achieve a result similar to os.system? I know that os.popen() is more secure, but I want to know how to actually run the commands through this function. When using os.system(), things can get very insecure and I want to have a secure way of accessing terminal commands.","Anything that uses the shell to execute commands is insecure for obvious reasons (you don't want someone running rm -rf / in your shell :). Both os.system and os.popen use the shell.
+For security, use the subprocess module with shell=False.
+Either way, both of those functions have been deprecated since Python 2.6.",1.2,True,1,6100
+2019-05-20 11:26:16.497,"""SSL: CERTIFICATE_VERIFY_FAILED"" error in my telegram bot","My Telegram bot code was working fine for weeks and I didn't change anything; today I suddenly got an [SSL: CERTIFICATE_VERIFY_FAILED] error and my bot code is no longer working on my PC.
+I use Ubuntu 18.04 and I'm using the telepot library.
+What is wrong and how do I fix it?
+Edit: I'm using the getMe method and I don't know where the certificate is or how to renew it, and I didn't import requests in my bot code. I'm using the telepot API by importing telepot in my code.","Probably your certificate expired; that is why it worked fine earlier. Just renew it and all should be good. If you're using requests under the hood you can just pass verify=False to the post or get method, but that is unwise.
+The renewal procedure depends on where you get your certificate from. If you're using letsencrypt, for example with certbot, issuing the sudo certbot renew command from the shell will suffice.",0.3869120172231254,False,1,6101
+2019-05-20 16:35:33.720,Use C# DLL in Python,"I have a driver which is written in C#, .NET 4.7.0 and built as a DLL. I don't have the sources for this driver. I want to use this driver in a python application.
+I wrapped some functionality from the driver into a method of another C# project. Then I built it into a DLL. I used RGiesecke.DllExport to make one method available in python. When I call this method from python using ctypes, I get WinError -532462766 Windows Error 0xe0434352.
+If I exclude the driver code and keep only the wrapper code in the exported method, everything runs fine.
+Could you please give me some advice on how to make this work or help me find a better solution? Moving from python to IronPython is no option here.
+Thank you.","PROBLEM CAUSE:
+Python didn't run the wrapper from the directory where it was stored together with the driver. That caused a problem with loading the driver.",1.2,True,1,6102
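Given that root cause, a hedged sketch of loading such a wrapper DLL with ctypes while making sure its own folder is the working directory so dependent DLLs resolve; the path and the exported function name are invented:

    import ctypes
    import os

    dll_dir = r"C:\drivers\wrapper"          # hypothetical folder holding both DLLs
    os.chdir(dll_dir)                        # so the driver DLL next to it is found
    wrapper = ctypes.WinDLL(os.path.join(dll_dir, "Wrapper.dll"))

    wrapper.DoWork.restype = ctypes.c_int    # name exported via DllExport (assumed)
    print(wrapper.DoWork())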
Is this possible in bokeh, and if it is, how can I accomplish that?","Bokeh does not support this; twin axes are always linked to maintain their original relative scale.",1.2,True,1,6103
+2019-05-21 07:05:10.423,Unsupported major.minor version when running a java program from shell script which is executed by a python program,"I have a shell script that runs some java program on a remote server. But this shell script is to be executed by a python script which is on my local machine.
+Here's the flow: the Python script executes the shell script (with paramiko), and the shell script then executes a java class.
+I am getting an error: 'The java class could not be loaded. java.lang.UnsupportedClassVersionError: (Unsupported major.minor version 50.0)' whenever I run the python code.
+Limitations: I cannot make any changes to the shell script.
+I believe this is a java version issue. But I don't know how to explicitly have a python program run in a specific java environment.
+Please suggest how I can get rid of this error.
+The java version of the unix machine (where the shell script executes): 1.6.0
+Java version of my local machine (where the python script executes): 1.7.0","The shell script can stay the same; update java on the remote system to java 1.7 or later. Then it should work.
+Another possibility could be to compile the java application for java 1.6 instead. The java compiler (javac) has the arguments -source and -target, and adding -source 1.6 -target 1.6 when compiling the application should solve this issue, too (but limits the application to java 1.6 features).
+Also be aware: If you use a build system like gradle or maven, then you have a different way to set the source and target version.",0.3869120172231254,False,1,6104
+2019-05-21 07:54:56.323,"Accidentally deleted /usr/bin/python instead of /usr/local/bin/python on OS X/macOS, how to restore?","I had so many Python installations that it was getting frustrating, so I decided to do a full reinstall. I removed the /Library/Frameworks/Python.Frameworks/ folder, and meant to remove the /usr/local/bin/python folder too, but I accidentally removed /usr/bin/python instead. I don't see any difference, everything seems to be working fine for now, but I've read multiple articles online saying that I should never touch /usr/bin/python as OS X uses it and things will break.
+I tried Time Machine but there are no viable recovery options. How can I manually ""restore"" what was deleted? Do I even need to, since everything seems to be working fine for now? I haven't restarted the Mac yet, in fear that things might break.
+I believe the exact command I ran was rm -rf /usr/bin/python*, and I don't have anything python related in my /usr/bin/ folder.
+I'm running on macOS Mojave 10.14.5","Items can't be recovered when you perform rm -rf. However, you can try the following:
+cp /usr/local/bin/python* /usr/bin
+This would copy your user-local python to /usr/bin and most probably will bail you out.
+Don't worry, nothing will happen to your OS. It should work fine :)",0.2012947653214861,False,1,6105
+2019-05-21 09:29:34.240,How to build a resnet with Keras that trains and predicts the subclass from the main class?,"I would like to implement a hierarchical resnet architecture. However, I could not find any solution for this. For example, my data structure is like:
+class A
+Subclass 1
+Subclass 2
+....
+class B
+subclass 6
+........
+So I would like to train and predict the main class and then the subclass of the chosen/predicted main class. Can someone provide a simple example of how to do this with generators?","The easiest way to do so would be to train multiple classifiers and build a hierarchical system by yourself.
+One classifier detecting class A, B etc. After that, make a new prediction for the subclasses.
+If you want only one single classifier:
+What about just dropping the first hierarchy of parent classes? Should also be quite easy. If you really want a model where the hierarchy is learned, take a look at Hierarchical Multi-Label Classification Networks.",0.0,False,1,6106
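A hedged sketch of the single-model alternative in Keras: a generator that yields one target per hierarchy level for a network with two softmax heads; shapes, class counts and the model itself are invented for illustration:

    import numpy as np

    def hierarchical_generator(x, y_main, y_sub, batch_size=32):
        # Yield batches with one label array per output head.
        while True:
            idx = np.random.randint(0, len(x), batch_size)
            yield x[idx], [y_main[idx], y_sub[idx]]

    # Assumed: a functional-API model with two heads, e.g.
    # model = Model(inputs, [main_out, sub_out]) compiled with
    # loss=["categorical_crossentropy", "categorical_crossentropy"].
    # model.fit_generator(hierarchical_generator(x, y_main, y_sub),
    #                     steps_per_epoch=100, epochs=10)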
+2019-05-22 21:18:51.313,AzureDataFactory Incremental Load using Python,"How do I create an Azure Data Factory for incremental load using Python?
+Where should I mention the file load option (Incremental Load: LastModifiedOn) while creating an activity or pipeline?
+We can do that using the UI by selecting the File Load Option. But how do I do the same programmatically using python?
+Does the python API for Data Factory support this or not?","My investigations suggest that the Python SDK has not yet implemented this feature. I used the SDK to connect to my existing instance and fetched two example datasets. I did not find anything that looked like the 'last modified date'. I tried dataset.serialize(), dataset.__dict__, dataset.properties.__dict__. I also tried .__slots__.
+Trying serialize() is significant because there ought to be parity between the JSON generated in the GUI and the JSON generated by the Python. The lack of parity suggests the SDK version lags behind the GUI version.
+UPDATE: The SDKs are being updated.",0.0,False,1,6107
+2019-05-22 22:30:47.140,Can I connect my IBM Cloudant Database as the callback URL for my Twilio IBM STT add-on service?,"I have a Watson voice assistant instance connected using SIP trunk to a Twilio API. I want to enable the IBM Speech-To-Text add-on from the Twilio Marketplace, which will allow me to obtain full transcriptions of phone calls made to the Watson Assistant bot. I want to store these transcriptions in a Cloudant Database I have created in IBM Cloud. Can I use the endpoint of my Cloudant Database as the callback URL for my Twilio add-on so that when the add-on is activated, the transcription will be added as a document in my Cloudant Database?
+It seems that I should be able to somehow call a transcription service through IBM Cloud's STT service, but since my assistant is connected through Twilio, this add-on seems like an easier option. I am new to IBM Cloud and chat-bot development so any information is greatly appreciated.","Twilio developer evangelist here.
+First up, I don't believe that you can enable add-ons for voice services that are served through Twilio SIP trunking.
+Unless I am mistaken and you are making a call through a SIP trunk to a Twilio number that is responding with TwiML. In this case, you can add the STT add-on. I'm not sure it would be the best idea to set the webhook URL to your Cloudant DB URL, as the webhook is not going to deliver the data in the format that Cloudant expects.
+Instead I would build out an application that can provide an endpoint to receive the webhook, transform the data into something Cloudant will understand and then send it on to the DB.
+Does that help at all?",0.0,False,1,6108
+2019-05-23 04:26:35.540,How corrupt checksum over TCP/IP,"I am connecting my slave via TCP/IP; everything looks fine. By using the Wireshark software I can validate that the CRC checksum is always valid ""good"", but I am wondering how I can corrupt the CRC checksum so I can see a checksum ""Invalid"". Any suggestion on how I can get this done, maybe Python code or any other way if possible.
+Thank you all
+Tariq","I think you use a library that computes the CRC. You can form the Modbus packet without it, if you want to simulate a bad CRC condition",0.2012947653214861,False,1,6109
+2019-05-23 20:27:53.837,How to identify Plaid transactions if transaction ID's change,"I noticed that the same transaction had a different transaction ID the second time I pulled it. Why is this the case? Is it because pending transactions have different transaction IDs than those same transactions once posted? Does anyone have recommendations for how I can identify unique transactions if the trx IDs are in fact changing?","Turns out that the transaction ID often does change. When a transaction is posted (stops pending), the original transaction ID becomes the pending transaction ID, and a new transaction ID is assigned.",1.2,True,1,6110
+2019-05-24 20:15:57.300,How to implement neural network pruning?,"I trained a model in keras and I'm thinking of pruning my fully connected network. I'm a little bit lost on how to prune the layers.
+The authors of 'Learning both Weights and Connections for Efficient Neural Networks' say that they add a mask to threshold the weights of a layer. I can try to do the same and fine-tune the trained model. But how does it reduce model size and the # of computations?","If you add a mask, then only a subset of your weights will contribute to the computation, hence your model will be pruned. For instance, autoregressive models use a mask to mask out the weights that refer to future data so that the output at time step t only depends on time steps 0, 1, ..., t-1.
+In your case, since you have a simple fully connected layer, it is better to use dropout. It randomly turns off some neurons at each iteration step so it reduces the computation complexity. However, the main reason dropout was invented is to tackle overfitting: by having some neurons turned off randomly, you reduce neurons' co-dependencies, i.e. you avoid that some neurons rely on others. Moreover, at each iteration, your model will be different (different number of active neurons and different connections between them), hence your final model can be interpreted as an ensemble (collection) of several different models, each specialized (we hope) in the understanding of a specific subset of the input space.",0.6730655149877884,False,1,6111
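A hedged sketch of the masking idea from that answer in Keras: zero out small weights of a trained Dense layer after training. The threshold is arbitrary, layer is assumed to be a trained Dense layer, and real size/compute savings additionally require sparse storage or hardware support:

    import numpy as np

    threshold = 0.05                        # made-up pruning threshold
    weights, biases = layer.get_weights()   # kernel matrix and bias vector
    mask = np.abs(weights) >= threshold     # keep only sufficiently large weights
    layer.set_weights([weights * mask, biases])
    print("kept", round(mask.mean() * 100, 1), "% of the weights")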
+Thank You!","Can't you just go to settings -> services -> control and then the 'remote control via http' settings? I use this to log in to my local ip, e.g. 192.168.1.150:8080 (you can set the port on this page), from my browser and I can do anything from there",0.0,False,1,6112
+2019-05-25 20:48:52.037,How do i split a String by first and last character in python,"I have a list of strings
+my_list = ['1Jordan1', '2Michael2', '3Jesse3'].
+If I need to delete the first and last character of each string, how would I do it in Python?",You would use slicing. I would use [1:-1].,0.0,False,1,6113
+2019-05-26 02:09:50.387,Comparing results of neural net on two subsets of features,"I am running an LSTM model on a multivariate time series data set with 24 features. I have run feature extraction using a few different methods (variance testing, random forest extraction, and Extra Tree Classifier). Different methods have resulted in slightly different subsets of features. I now want to test my LSTM model on all subsets to see which gives the best results.
+My problem is that the test/train RMSE scores for my 3 models are all very similar, and every time I run my model I get slightly different answers. This question is coming from a person who is naive and still learning the intricacies of neural nets, so please help me understand: in a case like this, how do you go about determining which model is best? Can you do seeding for neural nets? Or some type of averaging over a certain number of trials?","Since you have mentioned that using the different feature extraction methods, you are only getting slightly different feature sets, so the results are also similar. Also, since your LSTM model is then also getting almost similar RMSE values, the models are able to generalize well and learn similarly and extract important information from all the datasets.
+The best model depends on your future data, the computation time and load of different methods and how well they will last in production. Setting a seed is not really a good idea in neural nets. The basic idea is that your model should be able to reach the optimal weights no matter how they start. If your models are always getting similar results, in most cases, it is a good thing.",1.2,True,1,6114
+2019-05-27 23:00:17.260,Setting legend entries manually,"I am using openpyxl to create charts. For some reason, I do not want to insert row names when adding data. So, I want to edit the legend entries manually. I am wondering if anyone knows how to do this.
+More specifically:
+class openpyxl.chart.legend.Legend(legendPos='r', legendEntry=(), layout=None, overlay=None, spPr=None, txPr=None, extLst=None). I want to edit the legendEntry field",You cannot do that. You need to set the rows when creating the plots. That will create the titles for your charts,1.2,True,1,6115
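Since legend entries come from the series titles, a hedged openpyxl sketch of naming a series explicitly at creation time instead of pulling titles from spreadsheet rows; the sheet layout here is invented:

    from openpyxl import Workbook
    from openpyxl.chart import BarChart, Reference, Series

    wb = Workbook()
    ws = wb.active
    for row in [[1], [3], [2]]:
        ws.append(row)

    chart = BarChart()
    values = Reference(ws, min_col=1, min_row=1, max_row=3)
    # The title set here becomes the legend entry for this series.
    chart.series.append(Series(values, title="My legend text"))
    ws.add_chart(chart, "C1")
    wb.save("chart.xlsx")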
+In 2017, 77 days.
+In 2018, 113 days.
+In 2019 we are planning 85 days. So how do I adjust that historical data, in a logical/statistical way, so I could use it as input to a statistical predictive model (such as an ARIMA model)?
+Also, I'm planning to do this in Python, so if you have any suggestions about that, I would love to hear them too!
+Thank you!","Based on what I understand after reading your question, I would approach this problem in the following way.
+For each day, find how far out the event is from that day. The max value for this number is 46 in 2016, 77 in 2017 etc. Scale this value by the max day.
+Use the above variable, along with day of the month, day of the week etc. as extraneous variables.
+Additionally, use lag information from ticket sales. You can try a one day lag, one week lag etc.
+You would be able to generate all this data from the sale start until the end.
+Use the generated variables as predictors for each day and use ticket sales as the target variable, and build a machine learning model instead of forecasting.
+Use the machine learning model along with the generated variables to predict future sales.",0.0,False,1,6116
+2019-05-28 05:42:52.493,Why can't I push from PyCharm,"I'm a new GitHub user, and this question may be a trivial newbie problem. So I apologize in advance.
+I'm using PyCharm for a Python project. I've set up a Git repository for the project and uploaded the files manually through the Git website. I also linked the repository to my PyCharm project.
+When I modify a file, PyCharm allows me to ""commit"" it, but when I try to ""push"" it, I get a PyCharm pop-up error message saying ""Push rejected."" No further information is provided. How do I figure out what went wrong -- and how to fix it?
+Thanks.","If you manually uploaded files to GitHub by dropping them, it now likely has a different history than your local files.
+One way you could get around this is to store all of your changes in a different folder, do a git pull in PyCharm, abandoning your changes so you are up to date with origin/master, then commit the files and push as you have been doing.",0.3869120172231254,False,1,6117
+2019-06-01 11:32:58.337,How to choose a split variables for continous features for decision tree,"I am currently implementing the decision tree algorithm. If I have continuous-featured data, how do I decide on a splitting point? I came across a few resources which say to choose the midpoints between every two points, but considering I have 8000 rows of data this would be very time consuming. The output/feature label is categorical data. Is there any approach where I can perform this operation more quickly?","Decision trees work by calculating entropy and information gain to determine the most important feature. Indeed, 8000 rows is not too much for a decision tree. But generally, Random Forest is similar to a decision tree; it works as an ensemble. You can review and try it. Moreover, maybe the slowness is related to something else.",0.0,False,1,6118
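For the midpoint strategy mentioned in the question, a hedged numpy sketch: with sorted unique values, the candidate thresholds are cheap to enumerate, and 8000 rows stays fast (the feature values are random stand-ins):

    import numpy as np

    feature = np.random.rand(8000)               # stand-in continuous feature
    values = np.unique(feature)                  # sorted unique values
    candidates = (values[:-1] + values[1:]) / 2  # midpoints between neighbors
    # Evaluate entropy/information gain only at these candidate thresholds.
    print(len(candidates), "split candidates")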
I've read about using Lambda + Cloudwatch Events. +My question is: how do I approach this to let my users create tens of thousands of these tasks in the cheapest / most scalable way? +Thank you for reading my question! +Peter","There is no straight forward solution to your problem .You have to proceed step by step with some plumbing along the way . +Event management +1- Create a lambda function that creates a cloudwatch schedule. +2 - Create a lambda function that deletes a cloudwatch schedule. +3 - Persist any event created using dynamodb +4 - Create 2 API gateway that will invoke the 2 lambda above. +5 - Create anohter lambda function (used by cloudwatch) that will invoke the API gateway below. +6 - Create API gateway that will invoke the website via http request. +When the user creates an event from the app, there will be a chaining calls as follow : +4 -> 1,3 -> 5-> 6 +Now there are two other parameters to take into consideration : +Lambda concurrency: you can't run simultaneously more than 1000 lambda in same region. +Cloudwatch: You can not create more than 100 rules per region . Rule is where you define the schedule.",0.0,False,1,6119 +2019-06-03 00:39:25.213,Run python script from another computer without installing packages/setting up environment?,"I have a Jupyter notebook script that will be used to teach others how to use python. +Instead of asking each participant to install the required packages, I would like to provide a folder with the environment ready from the start. +How can I do this? +What is the easiest way to teach python without running into technical problems with packages/environments etc.?","The easiest way I have found to package python files is to use pyinstaller which packages your python file into an executable file. +If it's a single file I usually run pyinstaller main.py --onefile +Another option is to have a requirements file +This reduces installing all packages to one command pip install -r requirements.txt",0.2012947653214861,False,2,6120 +2019-06-03 00:39:25.213,Run python script from another computer without installing packages/setting up environment?,"I have a Jupyter notebook script that will be used to teach others how to use python. +Instead of asking each participant to install the required packages, I would like to provide a folder with the environment ready from the start. +How can I do this? +What is the easiest way to teach python without running into technical problems with packages/environments etc.?","You would need to use a program such as py2exe, pyinstaller, or cx_freeze to package each the file, the modules, and a lightweight interpreter. The result will be an executable which does not require the user to have any modules or even python installed to access it; however, because of the built-in interpreter, it can get quite large (which is why Python is not commonly used to make executables).",0.2012947653214861,False,2,6120 +2019-06-04 10:52:09.683,how to use python-gitlab to upload file with newline?,"I'm trying to use python-gitlab projects.files.create to upload a string content to gitlab. +The string contains '\n' which I'd like it to be the real newline char in the gitlab file, but it'd just write '\n' as a string to the file, so after uploading, the file just contains one line. +I'm not sure how and at what point should I fix this, I'd like the file content to be as if I print the string using print() in python. +Thanks for your help. 
+EDIT---
+Sorry, I'm using python 3.7 and the string is actually csv content, so it's basically like:
+',col1,col2\n1,data1,data2\n'
+So when I upload it, I want the gitlab file to be:
+,col1,col2
+1,data1,data2","I figured it out by saving the string to a file and reading it again; this way the \n in the string will be translated to the actual newline char.
+I'm not sure if there's another way of doing this, but this is just for someone that encounters a similar situation.",0.0,False,1,6121
+2019-06-04 17:28:52.047,How do you install Django 2x on pip when python 2x is your default version of python but you use python 3x on Bash,"I need to install Django 2.2.2 on my MacBook Pro (latest generation), and I am a user of python 3x. However, my default version of python is python 2x and I cannot pip install Django version 2x when I am using python 2x. Could anyone explain how to change the default version of python on a MacBook? I have looked at many other questions on this site and none have worked. All help is appreciated, thank you :)",You can simply use pip3 instead of pip to install Python 3 packages.,1.2,True,1,6122
+2019-06-05 17:11:10.447,Example of using hylang with python multiprocessing,"I am looking for an example of using python multiprocessing (i.e. a process-pool/threadpool, job queue etc.) with hylang.","Note that a straightforward translation runs into a problem on macOS (which is not officially supported, but mostly works anyway): Hy sets sys.executable to the Hy interpreter, and multiprocessing relies on that value to start up new processes. You can work around that particular problem by calling (multiprocessing.set_executable hy.sys_executable), but then it will fail to parse the file containing the Hy code itself, which it does again for some reason in the child process. So there doesn't seem to be a good solution for using multiprocessing with Hy running natively on a Mac.
+Which is why we have Docker, I suppose.",0.0,False,1,6123
+2019-06-05 23:27:18.603,How to use python with qr scanner devices?,"I want to create a program that can read and store the data from a qr scanning device, but I don't know how to get the input from the barcode scanner as an image or save it in a variable to read it afterwards with OpenCV","Typically a barcode scanner automatically outputs to the screen, just like a keyboard (except really quickly), and there is an end of line character at the end (like an enter).
+Using a python script, all you need to do is start the script, connect a scanner, scan something, and get the input (STDIN) of the script. If you built a script that was always receiving input and storing or processing it, you could do whatever you please with the data.
+A QR code is read in the same way that a barcode scanner works, immediately outputting the encoded data as text. Just collect this using the STDIN of a python script and you're good to go!",0.0,False,1,6124
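A hedged sketch of the keyboard-wedge approach from that answer: the scanner "types" the decoded text followed by Enter, so a plain input() loop captures each scan as one line:

    # Run this, focus the terminal, and scan; each scan arrives as one line.
    scans = []
    while True:
        code = input("Scan a code (blank line to stop): ")
        if not code:
            break
        scans.append(code)
    print("captured:", scans)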
And relatively, how long do they run, or how accurate can they get compared to these?","The following can be used for classification problems:
+Logistic Regression
+SVM
+RandomForest Classifier
+Neural Networks",0.2012947653214861,False,1,6125
+2019-06-06 07:55:29.040,How to use data obtained from a form in Django form?,"I'm trying to create a form in Django using Django forms.
+I need two types of forms.
+A form that collects data from the user, does some calculations and shows the results to the user without saving the data to the database. I want to show the result to the user once he/she presses the button (calculate) next to it, not on a different page.
+A form that collects data from the user, looks for it in a column in a google sheet, and if it's unique, adds it to the column; otherwise it shows the user a warning that the data is not unique.
+Thanks","You could use AJAX and javascript to achieve this, but I suggest doing this only via javascript. This means you will have to rewrite the math in JS and output it directly in the element.
+Please let me know if you need any help :)
+Jasper",0.0,False,2,6126
+2019-06-06 07:55:29.040,How to use data obtained from a form in Django form?,"I'm trying to create a form in Django using Django forms.
+I need two types of forms.
+A form that collects data from the user, does some calculations and shows the results to the user without saving the data to the database. I want to show the result to the user once he/she presses the button (calculate) next to it, not on a different page.
+A form that collects data from the user, looks for it in a column in a google sheet, and if it's unique, adds it to the column; otherwise it shows the user a warning that the data is not unique.
+Thanks","Start by writing it in a way that the user submits the form (like any normal django form), you process it in your view, do the calculation, and return the same page with the calculated values (render the template). That way you know everything is working as expected, using just Django/python.
+Then once that works, refactor to make your form submit the data using AJAX and your view to just return the calculation results in JSON. Your AJAX success handler can then insert the results in the current page.
+The reason I suggest you do this in 2 steps is that you're a beginner with javascript, so if you directly try to build this with AJAX, and you're not getting the results you expect, it's difficult to understand where things go wrong.",0.0,False,2,6126
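A hedged minimal Django view for the first step that answer recommends: compute on POST and re-render the same page with the result; CalcForm, its fields, and the template name are invented for illustration:

    from django.shortcuts import render

    def calculate(request):
        result = None
        if request.method == "POST":
            form = CalcForm(request.POST)        # hypothetical form class
            if form.is_valid():
                a = form.cleaned_data["a"]
                b = form.cleaned_data["b"]
                result = a + b                   # stand-in calculation
        else:
            form = CalcForm()
        return render(request, "calc.html", {"form": form, "result": result})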
they shouldn't have to know about or interact with each other), how can I throttle writes from the second client such that it maximises write throughput subject to the constraint of not impacting the write throughput of the first client?","The solution I came up with was to make both data producers write to the same queue.
+To meet the requirement that the low-priority bulk data doesn't interfere with the high-priority live data, I made the producer of the low-priority data check the queue length and then add a record to the queue only if the queue length is below a suitable threshold (in my case 5 messages).
+The result is that no live data message can have more than 5 bulk data messages in front of it in the queue. If messages start backing up on the queue then the bulk data producer stops queuing more data until the queue length falls below the threshold.
+I also split the bulk data into many small messages so that they are relatively quick to process by the consumer.
+There are three disadvantages of this approach:
+There is no visibility of how many queued messages are low priority and how many are high priority. However we know that there can't be more than 5 low priority messages.
+The producer of low-priority messages has to poll the queue to get the current length, which generates a small extra load on the queue server.
+The threshold isn't applied strictly because there is a race between the two producers from checking the queue length to queuing a message. It's not serious because the low-priority producer queues only a single message when it loses the race and next time it will know the queue is too long and wait.",1.2,True,1,6127
+2019-06-07 13:05:17.733,Pardot Visit query API - generic query not available,"I am trying to extract/sync data through the Pardot API v4 into a local DB. Most APIs were fine; I just used the query method with a created_after search criterion. But the Visit API does not seem to support either a generic query of all visit data, or a created_after search criterion to retrieve new items.
+As far as I can see I can only query Visits in the context of a Visitor or a Prospect.
+Any ideas why, and how could I implement synchronisation? (sorry, no access to Pardot DB...)
+I have been using the pypardot4 python wrapper for convenience but would be happy to use the API natively if it makes any difference.","I managed to get a response from Pardot support, and they have confirmed that such response filtering is not available on the Visits API. I asked for a feature request, but there is hardly any chance it gets enough up-votes to be considered :(",1.2,True,1,6128
+2019-06-08 19:04:33.077,How can I stop networkx to change the source and the target node?,"I make a Graph (not DiGraph) from a data frame (huge network) with networkx.
+I used this code to create my graph:
+nx.from_pandas_edgelist(R,source='A',target='B',create_using=nx.Graph())
+However, in the output when I check the edge list, my source node and target node have been changed based on the sort, and I don't know how to keep them the way they were in the dataframe (I need the source and target nodes to stay as they were in the dataframe).","If you mean the order has changed, check out nx.OrderedGraph",0.0,False,1,6129
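A hedged sketch of that suggestion; nx.OrderedGraph preserved insertion order in the networkx 2.x releases of that era (it was removed later, and modern Graph keeps insertion order anyway), and the edge data here is made up:

    import networkx as nx
    import pandas as pd

    R = pd.DataFrame({"A": ["x", "y"], "B": ["y", "z"]})  # made-up edge list
    G = nx.from_pandas_edgelist(R, source="A", target="B",
                                create_using=nx.OrderedGraph())
    print(list(G.edges()))  # edges reported in dataframe order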
If yes, how should I create it?
+Thank you","Return a non-zero value -- i.e. call sys.exit(1) when the function returns False, and sys.exit(0) otherwise.",1.2,True,1,6130
+2019-06-11 02:31:46.400,How to load NTU rgbd dataset?,"We are working on early action prediction but we are unable to understand the dataset itself. The NTU RGB+D dataset is 1.3 TB; my laptop hard disk is 931 GB.
+First problem: how do we deal with such a big dataset?
+Second problem: how do we understand the dataset?
+Third problem: how do we load the dataset?
+Thanks for the help","The overall size of the dataset is 1.3 TB, and this size will decrease after processing the data and converting it into numpy arrays or something else.
+But I do not think you will work on the entire dataset; what is the part of the dataset you want to work on?",0.0,False,1,6131
+2019-06-11 08:49:12.827,How do I install Pytorch offline?,"I need to install Pytorch on a computer with no internet connection.
+I've tried finding information about this online but can't find a single piece of documentation.
+Do you know how I can do this? Is it even possible?","An easy way with pip:
+Create an empty folder
+pip download torch using the connected computer. You'll get the pytorch package and all its dependencies.
+Copy the folder to the offline computer. You must be using the same python setup on both computers (this goes for virtual environments as well)
+pip install * on the offline computer, in the copied folder. This installs all the packages in the correct order. You can then use pytorch.
+Note that this works for (almost) any kind of python package.",0.9999954793514042,False,1,6132
+2019-06-11 16:14:57.993,Graphing multiple csv lists into one graph in python,"I have 5 csv files that I am trying to put into one graph in python. In the first column of each csv file, all of the numbers are the same, and I want to treat these as the x values for each csv file in the graph. However, there are two more columns in each csv file (to make 3 columns total), but I just want to graph the second column as the 'y-values' for each csv file on the same graph, and ideally get 5 different lines, one for each file. Does anyone have any ideas on how I could do this?
+I have already uploaded my files to the variable file_list",Read the first file and create a list of lists in which each list is filled by two columns of this file. Then read the other files one by one and append the y column of each to the corresponding index of this list.,0.0,False,1,6133
+2019-06-11 16:32:40.920,Password authentication failed when trying to run django application on server,"I have downloaded PostgreSQL as well as Django and Python, but when I try running the command ""python manage.py runserver"" it gives me an error saying ""Fatal: password authentication failed for user"". I am trying to run it locally but am unable to figure out how to get past this issue.
+I was able to connect to the server in pgAdmin but am still getting the password authentication error message","You need to change the password used to connect to your local database; this can be done by modifying the ""DATABASES"" object in your settings.py file",0.0,False,1,6134
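A hedged sketch of the settings.py change that answer refers to; every value here is a placeholder, and PASSWORD must match the actual password of your PostgreSQL role:

    # settings.py
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "USER": "myuser",
            "PASSWORD": "correct-password-here",
            "HOST": "localhost",
            "PORT": "5432",
        }
    }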
+Ideally, since the round needle/catheter has a width I would like to be able to get the voxels that intersect the actual three dimensional object that is the needle/catheter. (I imagine this is much harder so if I could get an answer to the first problem I would be very happy!) +I am using the latest version of Anaconda (Python 3.7). I have seen some similar problems, but the code is always in C++ and none of it seems to be what I'm looking for. I am fairly certain that I need to use raycasting or a 3D Bresenham algorithm, but I don't know how. +I would appreciate your help!","I ended up solving this problem myself. For anyone who is wondering how, I'll explain it briefly. +First, since all the catheters point in the general direction of the z-axis, I got the thickness of the slices along that axis. Both input points land on a slice. I then got the coordinates of every intersection between the line between the two input points and the z-slices. Next, since I know the radius of the catheter and I can calculate the angle between the two points, I was able to draw ellipse paths on each slice around the points I had previously found (when you cut a cone at and angle, the cross-section is an ellipse). Then I got the coordinates of all the voxels on every slice along the z-axis and checked which voxels where within my ellipse paths. Those voxels are the ones that describe the volume of the catheter. If you would like to see my code please let me know.",0.0,False,1,6135 +2019-06-12 21:34:00.557,Hash a set of three indexes in a key for dictionary?,"I have a matrix of data with three indexes: i, j and k +I want to enter some of the data in this matrix into a dictionary, and be able to find them afterwards in the dictionary. +The data itself can not be the key for the dict. +I would like the i,j,k set of indexes to be the key. +I think I need to ""hash"" (some sort of hash) in one number from which I can get back the i,j,k. I need the result key to be ordered so that: + +key1 for 1,2,3 is greater than +key2 for 2,1,3 is greater than +key3 for 2,3,1 + +Do you know any algorithms to get the keys from this set of indexes? Or is there a better structure in python to do what I want to do? +I can't know before I store the data how much I will get, so I think I cannot just append the data with its indexes.","Only immutable elements can be used as dictionary keys + +This mean you can't use a list (mutable data type) but you can use a tuple as the key of your dictionary: dict_name[(i, j, k)] = data",1.2,True,1,6136 +2019-06-12 23:20:49.637,Chatterbot sqlite store in repl.it,"I'm wondering how sqlite3 works when working in something like repl.it? I've been working on learning chatterbot on my own computer through Jupiter notebook. I'm a pretty amateur coder, and I have never worked with databases or SQL. When working from my own computer, I pretty much get the concept that when setting up a new bot with chatterbot, it creates a sqlite3 file, and then saves conversations to it to improve the chatbot. However, if I create a chatbot the same way only through repl.it and give lots of people the link, is the sqlite3 file saved online somewhere? Is it big enough to save lots of conversations from many people to really improve the bot well?","I am not familiar with repl.it, but for all the answers you have asked the answer is yes. For example, I have made a simple web page that uses the chatterbot library. 
Then I used my own computer as a server using ngrok and gather training data from users.",0.0,False,1,6137 +2019-06-14 03:35:14.117,Command prompt does not recognize changes in PATH. How do I fix this?,"I am attempting to download modules in Python through pip. No matter how many times I edit the PATH to show the pip.exe, it shows the same error: +'pip' is not recognized as an internal or external command, +operable program or batch file. +I have changed the PATH many different times and ways to make pip usable, but these changes go unnoticed by the command prompt terminal. +How should I fix this?",Are you using PyCharm? if yes change the environment to your desired directory and desired interpreter if you do have multiple interpreter available,0.0,False,1,6138 +2019-06-14 21:20:00.560,Using python on android tablet,"Learning python workflow on android tablet +I have been using Qpython3 but find it unsatisfactory +Can anybody tell me how best to learn the python workflow using an android tablet... that is what IDE works best with android and any links to pertinent information. Thank you.","Try pydroid3 instead of Qpython.it have almost all scientific Python libraries like Numpy,scikit-learn,matplotlib,pandas etc.All you have to do is to download the scripting library.You can save your file with extension ' .py ' and then upload it to drive and then to colab +Hope this will help.......",0.2012947653214861,False,1,6139 +2019-06-16 01:10:00.687,Wing IDE The debug server encountered an error in probing locals,"I am running Wing IDE 5 with Python 2.4. Everything was fine until I tried to debug and set a breakpoint. Arriving at the breakpoint I get an error message: +""The debug server encountered an error in probing locals or globals..."" +And the Stack Data display looks like: + locals + globals +I am not, to my knowledge, using a server client relationship or anything special, I am simply debugging a single threaded program running directly under the IDE. Anybody seen this or know how to fix it? +Wing IDE 5.0.9-1","That's a pretty old version of Wing and likely a bug that's been fixed since then, so trying a newer version of Wing may solve it. +However, if you are stuck with Python 2.4 then that's the latest that supports it (except that unofficially Wing 6 may work with Python 2.4 on Windows). +A work-around would be to inspect data from the Debug Probe and/or Watch tools (both available in the Tools menu). +Also, Clear Stored Value Errors in the Debug menu may allow Wing to load the data in a later run if the problem doesn't reoccur.",1.2,True,1,6140 +2019-06-18 12:13:26.103,How to change a OneToOneField into ForeignKey in django model with data in both table?,"I am having a model Employee with a OneToOneField relationship with Django USER model. Now for some reason, I want to change it to the ManyToOne(ForeignKey) relationship with the User model. +Both these tables have data filled. Without losing the data how can I change it? +Can I simply change the relationship and migrate?","makemigrations in this case would only correspond to an sql of Alter field you can see the result of makemigrations, the same sql will be executed when you migrate the model so the data would not be affected",0.0,False,1,6141 +2019-06-18 14:09:10.777,Does escaping work differently in Python shell? (Compared to code in file),"In a Python 3.7 shell I get some unexpected results when escaping strings, see examples below. Got the same results in the Python 2.7 shell. 
+A quick read in the Python docs seems to say that escaping can be done in strings, but doesn't seem to say it can't be used in the shell. (Or I have missed it). +Can someone explain why escaping doesn't seem to work as expected. +Example one: +input: +>>> ""I am 6'2\"" tall"" +output: +'I am 6\'2"" tall' +while >>> print(""I am 6'2\"" tall"") +returns (what I expected): +I am 6'2"" tall +(I also wonder how the backslash, in the unexpected result, ends up behind the 6?) +Another example: +input: +>>> ""\tI'm tabbed in."" +output: +""\tI'm tabbed in."" +When inside print() the tab is replaced with a proper tab. (Can't show it, because stackoverflow seems the remove the tab/spaces in front of the line I use inside a code block).","The interactive shell will give you a representation of the return value of your last command. It gives you that value using the repr() method, which tries to give a valid source code representation of the value; i.e. something you could copy and paste into code as is. +print on the other hand prints the contents of the string to the console, without regards whether it would be valid source code or not.",0.3869120172231254,False,1,6142 +2019-06-18 15:01:36.913,Writing to a file in C while reading from it in python,"I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data. +The problem is that I am not sure how to avoid collision during the read/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it. +Is there a simple way to do this? Thanks.","You could load a C library into Python using cdll.LoadLibrary and call a function to get the status of the C mutex. Then in Python if the C mutex is locking then don't read, and if it is unlocked then it can read.",0.1352210990936997,False,2,6143 +2019-06-18 15:01:36.913,Writing to a file in C while reading from it in python,"I am working with an Altera DE1-SoC board where I am reading data from a sensor using a C program. The data is being read continually, in a while loop and written to a text file. I want to read this data using a python program and display the data. +The problem is that I am not sure how to avoid collision during the read/write from the file as these need to happen simultaneously. I was thinking of creating a mutex, but I am not sure how to implement it so the two different program languages can work with it. +Is there a simple way to do this? Thanks.","Operating system will take care of this as long as you can open that file twice (one for read and one for write). Just remember to flush from C code to make sure your data are actually written to disk, instead of being kept in cache in memory.",0.1352210990936997,False,2,6143 +2019-06-20 06:34:56.463,Building a window application from the bunch of python files,"I have written some bunch of python files and i want to make a window application from that. +The structure looks like this: +Say, a.py,b.py,c.py are there. a.by is the file which i want application to open and it is basically a GUI which has import commands for ""b.py"" and ""c.py"". +I know this might be a very basic problem,but i have just started to packaging and deployment using python.Please tell me how to do that , or if is there any way to do it by py2exe and pyinstaller? 
+I have tried to do it by py2exe and pyinstaller from the info available on internet , but that seems to create the app which is running only ""a.py"" .It is not able to then use ""b"" and ""c "" as well.","I am not sure on how you do this with py2exe. I have used py2app before which is very similar, but it is for Mac applications. For Mac there is a way to view the contents of the application. In here you can add the files you want into the resources folder (where you would put your 'b.py' and 'c.py'). +I hope there is something like this in Windows and hope it helps.",0.0,False,1,6144 +2019-06-20 13:04:42.747,How to display number of epochs in tensorflow object detection api with Faster Rcnn?,"I am using Tensorflow Object detection api. What I understood reading the faster_rcnn_inception_v2_pets.config file is that num_steps mean the total number of steps and not the epochs. But then what is the point of specifying batch_size?? Lets say I have 500 images in my training data and I set batch size = 5 and num_steps = 20k. Does that mean number of epochs are equal to 200 ?? +When I run model_main.py it shows only the global_steps loss. So if these global steps are not the epochs then how should I change the code to display train loss and val loss after each step and also after each epoch.","So you are right with your assumption, that you have 200 epochs. +I had a similar problem with the not showing of loss. +my solution was to go to the model_main.py file and then insert +tf.logging.set_verbosity(tf.logging.INFO) +after the import stuff. +then it shows you the loss after each 100 steps. +you could change the set_verbosity if you want to have it after every epoch ;)",0.3869120172231254,False,1,6145 +2019-06-20 13:36:08.230,"how can i search for facebook users ,using facebook API(V3.3) in python 3","I want to be able to search for any user using facebook API v3.3 in python 3. +I have written a function that can only return my details and that's fine, but now I want to search for any user and I am not succeeding so far, it seems as if in V3.3 I can only search for places and not users + +The following function search and return a place, how can I modify it so that I can able to search for any Facebook users? + +def search_friend(): + graph = facebook.GraphAPI(token) + find_user = graph.search(q='Durban north beach',type='place') + print(json.dumps(find_user, indent=4))","You can not search for users any more, that part of the search functionality has been removed a while ago. +Plus you would not be able to get any user info in the first place, unless the user in question logged in to your app first, and granted it permission to access at least their basic profile info.",0.3869120172231254,False,1,6146 +2019-06-21 07:53:06.587,Record Audio from Peppers Tablet Microphone,"I would like to use the microphone of peppers tablet to implement speech recognition. +I already do speech recognition with the microphones in the head. +But the audio I get from the head microphones is noisy due to the fans in the head and peppers joints movement. +Does anybody know how to capture the audio from peppers tablet? +I am using Pepper 2.5. and would like to solve this with python. +Thanks!","With NAOqi 2.5 on Pepper it is not possible to access the tablet's microphone. +You can either upgrade to 2.9.x and use the Android API for this, or stay in 2.5 and use Python to get the sound from Pepper's microphones.",0.0,False,1,6147 +2019-06-21 10:48:44.013,How to create .mdb file?,"I am new with zarr, HDF5 and LMDB. 
I have converted data from HDF5 to Zarr but i got many files with extension .n (n from 0 to 31). I want to have just one file with .zarr extension. I tried to use LMDB (zarr.LMDBStore function) but i don't understand how to create .mdb file ? Do you have an idea how to do that ? +Thank you !","@kish When trying your solution i got this error: +from comtypes.gen import Access +ImportError: cannot import name 'Access'",0.0,False,1,6148 +2019-06-21 15:20:04.737,How to remove regularisation from pre-trained model?,"I've got a partially trained model in Keras, and before training it any further I'd like to change the parameters for the dropout, l2 regularizer, gaussian noise etc. I have the model saved as a .h5 file, but when I load it, I don't know how to remove these regularizing layers or change their parameters. Any clue as to how I can do this?",Create a model with your required hyper-parameters and load the parameters to the model using load_weight().,0.0,False,1,6149 +2019-06-21 17:20:48.640,How to display contact without company in odoo?,"how to display contact without company in odoo 11 , exemple : if mister X in Company Y, in odoo, display this mister and company : Y, X. But i want only X. thanks",That name comes via name_get method written inside res.partner.py You need to extend that method in your custom module and remove company name as a prefix from the contact name.,0.3869120172231254,False,1,6150 +2019-06-22 20:23:45.847,Python3 script exit with any traceback?,"I have one Python3 script that exits without any traceback from time to time. +Some said in another question that it was caused by calling sys.exit, but I am not pretty sure whether this is the case. +So how can I make Python3 script always exit with traceback, of course except when it is killed with signal 9?","It turns out that the script crashed when calling some function from underlying so, and crashed without any trackback. .",1.2,True,1,6151 +2019-06-24 14:06:10.280,"Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path","I get this error: + +Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path: /home/yosra/Desktop/CERT.RSA + +When I run: $ virtualenv venv +So I put a random CERT.RSA on the Desktop which worked and I created my virtual environment, but then when I run: pip install -r requirements.txt +I got this one: + +Could not install packages due to an EnvironmentError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /KristianOellegaard/django-hvad/archive/2.0.0-beta.tar.gz (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3715)'),)) + +I feel that these 2 errors are linked to each other, but I want to know how can I fix the first one?","We get this all the time for various 'git' actions. We have our own CA + intermediary and we don't customize our software installations enough to accomodate that fact. +Our general fix is update your ca-bundle.crt with the CA cert pems via either concatenation or replacement. +e.g. cat my_cert_chain.pem >> $(python -c ""import certifi; print(certifi.where())"") +This works great if you have an /etc/pki/tls/certs directory, but with python the python -c ""import certifi; print(certifi.where())"" tells you the location of python's ca-bundle.crt file. 
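+For illustration, here is roughly the same concatenation done from Python itself (only a sketch; 'my_cert_chain.pem' stands in for whatever your CA chain file is actually called):
+import certifi
+
+ca_chain = 'my_cert_chain.pem'   # your company's CA chain in PEM format
+bundle = certifi.where()         # path to python's ca-bundle.crt
+with open(ca_chain, 'rb') as src, open(bundle, 'ab') as dst:
+    dst.write(b'\n')             # make sure the appended pem starts on a new line
+    dst.write(src.read())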
+Although the cat approach is not a purist python answer, since we're not adding a new file / path, it solves a lot of other certificate problems with other software once you understand the underlying issue. +I recommended concatenating in this case as I don't know what else the file is used for vis-a-vis pypi.",0.0,False,2,6152 +2019-06-24 14:06:10.280,"Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path","I get this error: + +Could not install packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle, invalid path: /home/yosra/Desktop/CERT.RSA + +When I run: $ virtualenv venv +So I put a random CERT.RSA on the Desktop, which worked, and I created my virtual environment, but then when I run: pip install -r requirements.txt +I got this one: + +Could not install packages due to an EnvironmentError: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /KristianOellegaard/django-hvad/archive/2.0.0-beta.tar.gz (Caused by SSLError(SSLError(0, 'unknown error (_ssl.c:3715)'),)) + +I feel that these 2 errors are linked to each other, but I want to know how I can fix the first one?","I received this error while running the command ""pip install flask"" in PyCharm. +If you look at the error, you will see that it points to ""packages due to an EnvironmentError: Could not find a suitable TLS CA certificate bundle -- Invalid path"". +I solved this by removing the environment variable ""REQUESTS_CA_BUNDLE"", OR you can just change the name of the environment variable ""REQUESTS_CA_BUNDLE"" to some other name. +Restart PyCharm and this should be solved. +Thank you!",0.0814518047658113,False,2,6152 +2019-06-24 20:31:18.433,Running Python Code in .NET Environment without Installing Python,"Is it possible to productionize Python code in a .NET/C# environment without installing Python and without converting the Python code to C#, i.e. just deploy the code as is? +I know installing the Python language would be the reasonable thing to do, but my hesitation is that I just don't want to introduce a new language to my production environment and deal with its testing and maintenance complications, since I don't have enough manpower who know Python to take care of these issues. +I know IronPython is built on the CLR, but I don't know how exactly it can be hosted and maintained inside .NET. Does it enable one to treat Python code as a ""package"" that can be imported into C# code, without actually installing Python as a standalone language? How can IronPython make my life easier in this situation? Can python.net give me more leverage?","IronPython is limited compared to running Python with C-based libraries needing the Python interpreter, not the .NET DLR. I suppose it depends on how you are using the Python code; if you want to use a lot of third-party python libraries, I doubt that IronPython will fit your needs. +What about building a full Python application but running it all from Docker? +That would require your environments to have Docker installed, but you could then also deploy your .NET applications using Docker too, and they would all be isolated and not dirty your 'environment'.
+There are base docker images out there that are specifically for Building Python and .NET Project and also for running.",0.3869120172231254,False,1,6153 +2019-06-25 13:23:01.350,No module named 'numpy' Even When Installed,"I'm using windows with Python 3.7.3, I installed NumPy via command prompt with ""pip install NumPy"", and it installed NumPy 1.16.4 perfectly. However, when I run ""import numpy as np"" in a program, it says ""ModuleNotFoundError: No module named 'numpy'"" +I only have one version of python installed, and I don't know how I can fix this. How do I fix this?","python3 is not supported under NumPy 1.16.4. Try to install a more recent version of NumPy: +pip uninstall numpy +pip install numpy",1.2,True,1,6154 +2019-06-26 02:23:52.283,How I get the error log generates from a flask app installed on CPanel?,"I have a flask application installed on cpanel and it's giving me some error while the application is running. Application makes an ajax request from the server, but server returns the response with a 500 error. I have no idea how I get the information that occurs to throw this error. +There's no information on the cpanel error log and is it possible to create some log file that logs errors when occur in the same application folder or something?",When you log into cPanel go to the Errors menu and it will give a more detailed response to your errors there. You can also try and check: /var/log/apache/error.log or /var/log/daemon.log,0.3869120172231254,False,1,6155 +2019-06-26 06:12:06.163,How can I output to a v4l2 driver using FFMPEG's avformat_write_header?,"I'm trying to use PyAV to output video to a V4l2 loopback device (/dev/video1), but I can't figure out how to do it. It uses the avformat_write_header() from libav* (ffmpeg bindings). +I've been able to get ffmpeg to output to the v4l2 device from the CLI but not from python.","Found the solution. The way to do this is: + +Set the container format to v4l2 +Set the stream format as ""rawvideo"" +Set the framerate (if it's a live stream, set the framerate to 1 fps higher than the stream is so that you don't get an error) +Set pixel format to either RGB24 or YUV420",0.0,False,1,6156 +2019-06-26 09:33:29.453,How to set up data collection for small-scale algorithmic trading software,"This is a question on a conceptual level. +I'm building a piece of small-scale algorithmic trading software, and I am wondering how I should set up the data collection/retrieval within that system. The system should be fully autonomous. +Currently my algorithm that I want to trade live is doing so on a very low frequency, however I would like to be able to trade with higher frequency in the future and therefore I think that it would be a good idea to set up the data collection using a websocket to get real time trades straight away. I can aggregate these later if need be. +My first question is: considering the fact that the data will be real time, can I use a CSV-file for storage in the beginning, or would you recommend something more substantial? +In any case, the data collection would proceed as a daemon in my application. +My second question is: are there any frameworks available to handle real-time incoming data to keep the database constant while the rest of the software is querying it to avoid conflicts? 
+My third and final question is: do you believe it is a wise approach to use a websocket in this case or would it be better to query every time data is needed for the application?","CSV is a nice exchange format, but as it is based on a text file, it is not good for real-time updates. Only my opinion but I cannot imagine a reason to prefere that to database. +In order to handle real time conflicts, you will later need a professional grade database. PostgreSQL has the reputation of being robust, MariaDB is probably a correct choice too. You could use a liter database in development mode like SQLite, but beware of the slight differences: it is easy to write something that will work on one database and will break on another one. On another hand, if portability across databases is important, you should use at least 2 databases: one at development time and a different one at integration time. +A question to ask yourself immediately is whether you want a relational database or a noSQL one. Former ensures ACID (Atomicity, Consistency, Isolation, Durability) transations, the latter offers greater scalability.",1.2,True,1,6157 +2019-06-27 05:15:37.540,"So when I run my python selenium script through jenkins, how should I write the 'driver = webdriver.Chrome()'?","So when I run my python selenium script through Jenkins, how should I write the driver = webdriver.Chrome() +How should I put the chrome webdriver EXE in jenkins? +Where should I put it?","If you have added your repository path in jenkins during job configuration, Jenkins will create a virtual copy of your workspace. So, as long as the webdriver file is somewhere in your project folder structure and as long as you are using relative path to reference it in your code, there shouldn't be any issues with respect to driver in invocation. +You question also depends on several params like: +1. Whether you are using Maven to run the test +2. Whether you are running tests on Jenkins locally or on a remote machine using Selenium Grid Architecture.",1.2,True,1,6158 +2019-06-27 08:57:46.013,Multiple header in Pandas DataFrame to_excel,"I need to export my DataFrame to Excel. Everything is good but I need two ""rows"" of headers in my output file. That mean I need two columns headers. I don't know how to export it and make double headers in DataFrame. My DataFrame is created with dictionary but I need to add extra header above. +I tried few dumb things but nothing gave me a good result. I want to have on first level header for every three columns and on second level header for each column. They must be different. +I expect output with two headers above columns.","Had a similar issue. Solved by persisting cell-by-cell using worksheet.write(i, j, df.iloc[i,j]), with i starting after the header rows.",0.0,False,1,6159 +2019-06-27 09:51:08.853,How can I find the index of a tuple inside a numpy array?,"I have a numpy array as: +groups=np.array([('Species1',), ('Species2', 'Species3')], dtype=object). 
+When I ask np.where(groups == ('Species2', 'Species3')) or even np.where(groups == groups[1]) I get an empty reply: (array([], dtype=int64),) +Why is this, and how can I get the indexes of such an element?","It does not mean searching for the tuple ('Species2', 'Species3') in groups when you use +np.where(groups == ('Species2', 'Species3')); +it means searching for 'Species2' and 'Species3' separately, element-wise, as it would if you had a complete (rectangular) array like this: +groups=np.array([('Species1',''), ('Species2', 'Species3')], dtype=object)",0.1016881243684853,False,1,6160 +2019-06-27 12:15:35.603,Aggregate Ranking using Khatri-Rao product,"I have constructed 2 graphs and calculated the eigenvector centrality of each node. Each node can be considered as an individual project contributor. Consider 2 different rankings of project contributors. They are ranked based on the eigenvector of the node. +Ranking #1: +Rank 1 - A +Rank 2 - B +Rank 3 - C +Ranking #2: +Rank 1 - B +Rank 2 - C +Rank 3 - A +This is a very small example, but in my case I have almost 400 contributors and 4 different rankings. My question is: how can I merge all the rankings to get an aggregate ranking? I can't just simply add the eigenvector centralities and divide by the number of rankings. I was thinking of using the Khatri-Rao product or the Kronecker product to get the result. +Can anyone suggest how I can achieve this? +Thanks in advance.","Rank both graphs separately, so each node gets a rank in both graphs, then do simple matrix addition. Now normalize the rank. This should keep relationships like rank1>rank2>rank3>rank4 true, and relationships like rank1+rank1>rank1+rank2 true. I don't know how taking the Khatri-Rao product of the matrix would help you. That would make you end up with more than 400 nodes. Then you would need to compress them back to 400 nodes in order to have 400 ranked nodes at the end. Who told you to use the Khatri-Rao product?",0.0,False,1,6161 +2019-06-28 16:20:58.207,How to use Javascript in Spyder IDE?,"I want to write code in Javascript in the Spyder IDE, which is meant for Python. I have read that Spyder supports multiple languages but I'm not sure how to use it. I have downloaded Nodejs and added it to the environment variables. I'd like to know how to get Javascript syntax colouring, possibly auto-completion and Help options as well, and I'd also like to know how to conveniently execute the .js file and see the results in a console.","(Spyder maintainer here) Sorry, but for now we only support Python for all the functionality that you are looking for (code completion, help and code execution). +Our next major version (Spyder 4, to be released later in 2019) will have the ability to give code completion and linting for other programming languages, but it'll be more of a power-user feature than something anyone can use.",1.2,True,1,6162 +2019-06-29 20:41:24.083,How to view pyspark temporary tables on Thrift server?,"I'm trying to make a temporary table I create in pyspark available via Thrift. My final goal is to be able to access it from a database client like DBeaver using JDBC. +I'm testing first using beeline. +This is what I'm doing.
+ +Started a cluster with one worker in my own machine using docker and added spark.sql.hive.thriftServer.singleSession true on spark-defaults.conf +Started Pyspark shell (for testing sake) and ran the following code: +from pyspark.sql import Row +l = [('Ankit',25),('Jalfaizy',22),('saurabh',20),('Bala',26)] +rdd = sc.parallelize(l) +people = rdd.map(lambda x: Row(name=x[0], age=int(x[1]))) +people = people.toDF().cache() +peebs = people.createOrReplaceTempView('peebs') +result = sqlContext.sql('select * from peebs') +So far so good, everything works fine. +On a different terminal I initialize spark thrift server: +./sbin/start-thriftserver.sh --hiveconf hive.server2.thrift.port=10001 --conf spark.executor.cores=1 --master spark://172.18.0.2:7077 +The server appears to start normally and I'm able to see both pyspark and thrift server jobs running on my spark cluster master UI. +I then connect to the cluster using beeline +./bin/beeline +beeline> !connect jdbc:hive2://172.18.0.2:10001 +This is what I got + +Connecting to jdbc:hive2://172.18.0.2:10001 + Enter username for jdbc:hive2://172.18.0.2:10001: + Enter password for jdbc:hive2://172.18.0.2:10001: + 2019-06-29 20:14:25 INFO Utils:310 - Supplied authorities: 172.18.0.2:10001 + 2019-06-29 20:14:25 INFO Utils:397 - Resolved authority: 172.18.0.2:10001 + 2019-06-29 20:14:25 INFO HiveConnection:203 - Will try to open client transport with JDBC Uri: jdbc:hive2://172.18.0.2:10001 + Connected to: Spark SQL (version 2.3.3) + Driver: Hive JDBC (version 1.2.1.spark2) + Transaction isolation: TRANSACTION_REPEATABLE_READ + +Seems to be ok. +When I list show tables; I can't see anything. + +Two interesting things I'd like to highlight is: + +When I start pyspark I get these warnings + +WARN ObjectStore:6666 - Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 +WARN ObjectStore:568 - Failed to get database default, returning NoSuchObjectException +WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException + +When I start the thrift server I get these: + +rsync from spark://172.18.0.2:7077 + ssh: Could not resolve hostname spark: Name or service not known + rsync: connection unexpectedly closed (0 bytes received so far) [Receiver] + rsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2] + starting org.apache.spark.sql.hive.thriftserver.HiveThriftServer2, logging to ... + + +I've been through several posts and discussions. I see people saying we can't have temporary tables exposed via thrift unless you start the server from within the same code. If that's true how can I do that in python (pyspark)? +Thanks","createOrReplaceTempView creates an in-memory table. The Spark thrift server needs to be started on the same driver JVM where we created the in-memory table. +In the above example, the driver on which the table is created and the driver running STS(Spark Thrift server) are different. +Two options +1. Create the table using createOrReplaceTempView in the same JVM where the STS is started. +2. Use a backing metastore, and create tables using org.apache.spark.sql.DataFrameWriter#saveAsTable so that tables are accessible independent of the JVM(in fact without any Spark driver. +Regarding the errors: +1. Relates to client and server metastore version. +2. 
Seems like some rsync script trying to decode spark:\\ url +Both doesnt seems to be related to the issue.",0.0,False,1,6163 +2019-06-30 00:14:55.133,How do I store information about a front-end button on the Django server?,"Basically I want to store a buttons service server-side that way it can persist through browser closes and page refresh. +Here's what the user is trying to do + +The user searches in a search bar for a list of products. +When the results show up, they are shown a button that triggers an action for each individual product. They are also shown a master button that can trigger the same action for each product that is listed. +Upon clicking the button, I want to disable it for 30 seconds and have this persist through page refreshes and browser close. + +What I've done +Currently I have this implemented using AJAX calls on the client side, but if the page refreshes it resets the button and they can click it again. So I looked into using javascript's localStorage function, but in my situation it would be better just to store this on the server. +What I think needs to happen + +Create a model in my Django app for a button. Its attributes would be its status and maybe some meta data (last clicked, etc). +Whenever the client requests a list of products, the views will send the list of products and it will be able to query the database for the respective button's status and implement a disabled attribute directly into the template. +If the button is available to be pressed then the client side will make an AJAX POST call to the server and the server will check the buttons status. If it's available it will perform the action, update the buttons status to disabled for 30 seconds, and send this info back to the client in order to reflect it in the DOM. + +A couple questions + +Is it just a matter of creating a model for the buttons and then querying the database like normal? +How do I have Django update the database after 30 seconds to make a button's status go from disabled back to enabled? +When the user presses the button it's going to make it disabled, but it will only be making it disabled in the database. What is the proper way to actually disable the button without a page refresh on the client side? Do I just disable the button in javascript for 30 seconds, and then if they try to refresh the page then the views will see the request for the list of products and it will check the database for each button's status and it will serve the button correctly? + +Thank you very much for the help!!","Is it just a matter of creating a model for the buttons and then + querying the database like normal? + +Model could be something like Button (_id, last_clicked as timestamp, user_id) +While querying you could simply sort by timestamp and LIMIT 1 to get the last click. By not overwriting the original value it would ensure a bit faster write. +If you don't want the buttons to behave similarly for each user you will have to create a mapping of the button with the user who clicked it. Even if your current requirements don't need them, create an extensible solution where mapping the user with this table is quite easy. + +How do I have Django update the database after 30 seconds to make a + button's status go from disabled back to enabled? + +I avoid changing the database without a client request mapped to the change. This ensures the concurrency and access controls. And also has higher predictability for the current state of data. 
Following that, I would suggest not to update the db after the time delta(30 sec). +Instead of that you could simply compare the last_clicked timestamp and calculate the delta either server side before sending the response or in client side. +This decision could be important, consider a scenario when the client has a different time on his system than the server time. + +When the user presses the button it's going to make it disabled, but + it will only be making it disabled in the database. What is the proper + way to actually disable the button without a page refresh on the + client side? Do I just disable the button in javascript for 30 + seconds, and then if they try to refresh the page then the views will + see the request for the list of products and it will check the + database for each button's status and it will serve the button + correctly? + +You'd need to do a POST request to communicate the button press timestamp with the db. You'd also need to ensure that the POST request is successful as an unsuccessful request would not persist the data in case of browser closure. +After doing the above two you could disable the button only from the client side without trying the get the button last_clicked timestamp.",1.2,True,1,6164 +2019-06-30 05:53:43.637,"Using a Discord bot, how do I get a string of an embed message from another bot",I am creating a Discord bot that needs to check all messages to see if a certain string is in an embed message created by any other Discord bot. I know I can use message.content to get a string of the message a user has sent but how can I do something similar with bot embeds in Python?,Use message.embeds instead to get the embed string content,1.2,True,1,6165 +2019-07-01 09:49:35.957,Python magic is not recognizing the correct content,"I have parsed the content of a file to a variable that looks like this; + +b'8,092436.csv,,20f85' + +I would now like to find out what kind of filetype this data is coming from, with; + +print(magic.from_buffer(str(decoded, 'utf-8'), mime=True)) + +This prints; + +application/octet-stream + +Anyone know how I would be able to get a result saying 'csv'?","Use magic on the original file. +You also need to take into account that CSV is really just a text file that uses particular characters to delimit the content. There is no explicit identifier that indicates that the file is a CSV file. Even then the CSV module needs to be configured to use the appropriate delimiters. +The delimiter specification of a CSV file is either defined by your program or needs to be configured (see importing into Excel as an example, you are presented with a number of options to configure the type of CSV to import).",1.2,True,1,6166 +2019-07-01 15:01:58.223,Configure proxy with python,"I am looking to use a public API running on a distant server from within my company. For security reasons, I am supposed to redirect all the traffic via the company's PROXY. Does anyone know how to do this in Python?","Directly in python you can do : +os.environ[""HTTP_PROXY""] = http://proxy.host.com:8080. +Or as it has been mentioned before launching by @hardillb on a terminal : +export HTTP_PROXY=http://proxy.host.com:8080",0.3869120172231254,False,2,6167 +2019-07-01 15:01:58.223,Configure proxy with python,"I am looking to use a public API running on a distant server from within my company. For security reasons, I am supposed to redirect all the traffic via the company's PROXY. 
Does anyone know how to do this in Python?","Set the HTTP_PROXY environment variable before starting your python script +e.g. export HTTP_PROXY=http://proxy.host.com:8080",0.2012947653214861,False,2,6167 +2019-07-03 17:35:26.190,How can I find domain that has been used from a client to reach my server in python socket?,I just wonder how apache server can know the domain you come from you can see that in Vhost configuration,"By a reverse DNS lookup of the IP; socket.gethostbyaddr(). +Results vary; many IPs from consumer ISPs won't resolve to anything interesting, because of NAT and just not maintaining a generally informative reverse zone.",0.0,False,1,6168 +2019-07-03 22:16:01.657,How to write each dataframe partition into different tables,"I am using Databricks to connect to an Eventhub, where each message comming from the EventHub may be very different from another. +In the message, I have a body and an id. +I am looking for performance, so I am avoiding collecting data or doing unecessary processings, also I want to do the saving in parallel by partition. However I am not sure on how to do this in a proper way. +I want to append the body of each ID in a different AND SPECIFIC table in batches, the ID will give me the information I need to save in the right table. So in order to do that I have been trying 2 approachs: + +Partitioning: Repartition(numPartitions, ID) -> ForeachPartition +Grouping: groupBy('ID').apply(myFunction) #@pandas_udf GROUPED_MAP + +The approach 1 doens't look very attracting to me, the repartition process looks kind unecessary and I saw in the docs that even if I set a column as a partition, it may save many ids of that column in a single partition. It only garantees that all data related to that id is in the partition and not splitted +The approach 2 forces me to output from the pandas_udf, a dataframe with the same schema of the input, which is not going to happen since I am transforming the eventhub message from CSV to dataframe in order to save it to the table. I could return the same dataframe that I received, but it sounds weird. +Is there any nice approach I am not seeing?","If your Id has distinct number of values (kind of type/country column) you can use partitionBy to store and thereby saving them to different table will be faster. +Otherwise create a derive column(using withColumn) from you id column by using the logic same as you want to use while deviding data across tables. Then you can use that derive column as a partition column in order to have faster load.",1.2,True,1,6169 +2019-07-04 09:21:11.737,how to get the memebrs of a telegram group greater than 10000,"I am getting only upto 10000 members when using telethon how to get more than 10000 +I tried to run multiple times to check whether it is returning random 10000 members but still most of them are same only few changed that also not crossing two digits +Expected greater than 10000 +but actual is 10000","there is no simple way. you can play with queries like 'a*', 'b*' and so on",0.0,False,1,6170 +2019-07-04 12:44:56.280,Keras preprocessing for 3D semantic segmentation task,"For semantic image segmentation, I understand that you often have a folder with your images and a folder with the corresponding masks. In my case, I have gray-scale images with the dimensions (32, 32, 32). The masks naturally have the same dimensions. The labels are saved as intensity values (value 1 = label 1, value 2 = label 2 etc.). 4 classes in total. Imagine I have found a model that was built with the keras model API. 
How do I know how to prepare my label data for it to be accepted by the model? Does it depend on the loss function? Is it defined in the model (Input parameter). Do I just add another dimension (4, 32, 32, 32) in which the 4 represents the 4 different classes and one-hot code it? +I want to build a 3D convolutional neural network for semantic segmentation but I fail to understand how to feed in the data correctly in keras. The predicted output is supposed to be a 4-channel 3D image, each channel showing the probability values of each pixel to belong to a certain class.","The Input() function defines the shape of the input tensor of a given model. For 3D images, often a 5D Tensor is expected, e.g. (None, 32, 32, 32, 1), where None refers to the batch size. Therefore the training images and labels have to be reshaped. Keras offers the to_categorical function to one-hot encode the label data (which is necessary). The use of generators helps to feed in the data. In this case, I cannot use the ImageDataGenerator from keras as it can only deal with RGB and grayscale images and therefore have to write a custom script.",1.2,True,1,6171 +2019-07-04 23:38:40.930,"How to fix ""UnsatisfiableError: The following specifications were found to be incompatible with each other: - pip -> python=3.6""","So, i trying to install with the command ecmwf api client conda install -c conda-forge ecmwf-api-client then the warning in the title shows up. I don't know how to proceede +(base) C:\Users\caina>conda install -c conda-forge ecmwf-api-client +Collecting package metadata (current_repodata.json): done +Solving environment: failed +Collecting package metadata (repodata.json): done +Solving environment: failed +UnsatisfiableError: The following specifications were found to be incompatible with each other: + +pip -> python=3.6","Install into a new environment instead of the conda base environment. Recent Anaconda and Miniconda installers have Python 3.7 in the base environment, but you're trying to install something that requires Python 3.6.",0.3869120172231254,False,2,6172 +2019-07-04 23:38:40.930,"How to fix ""UnsatisfiableError: The following specifications were found to be incompatible with each other: - pip -> python=3.6""","So, i trying to install with the command ecmwf api client conda install -c conda-forge ecmwf-api-client then the warning in the title shows up. I don't know how to proceede +(base) C:\Users\caina>conda install -c conda-forge ecmwf-api-client +Collecting package metadata (current_repodata.json): done +Solving environment: failed +Collecting package metadata (repodata.json): done +Solving environment: failed +UnsatisfiableError: The following specifications were found to be incompatible with each other: + +pip -> python=3.6","Simply go to Anaconda navigator. +Go to Environments, Select Installed (packages, etc.) and then click the version of Python. Downgrade it to a lower version. In your case Python 3.6",-0.1016881243684853,False,2,6172 +2019-07-06 19:03:17.943,how to run my python code on google cloud without fear of getting disconnected - an absolute beginner?,"I have been trying to use python 3 for text mining on a 650 MB csv file, which my computer was not powerful enough to do. My second solution was to reach out to google cloud. I have set up my VMs and my jupyter notebook on google cloud, and it works perfectly well. The problem, however, is that I am in constant fear of getting disconnected. 
As a matter of fact, my connection with the google server was lost a couple of times and so was my whole work. +My question: Is there a way to have the cloud run my code without fear of getting disconnected? I need to be able to have access to my csv file and also the output file. +I know there is more than one way to do this and have read a lot of material. However, it is too technical for a beginner like me to understand. I would really appreciate a more dummy-friendly version. Thanks! +UPDATE: here is how I get access to my jupyter notebook on google cloud: +1- I run my instance on google cloud +2- I click on SSH +3- in the window that appears, I type the following: +jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser & +I have seen people recommend adding nohup to the beginning of the same command. I have tried it and got this message: +nohup: ignoring input and appending output to 'nohup.out' +And nothing happens.","If I understand your problem correctly, you could just run the program inside a screen instance: +After connecting via ssh, type screen +Run your command +Press ctrl + a, ctrl + d +Now you can disconnect from ssh and your code will continue to run. You can reconnect to the screen via screen -r",1.2,True,1,6173 +2019-07-08 15:13:46.557,Importing sknw on jupyter ModuleNotFoundError,"On my jupyter notebook, running import sknw throws a ModuleNotFoundError. +I have tried pip install sknw and pip3 install sknw and python -m pip install sknw. It appears to have downloaded successfully, and I get ""requirement already satisfied"" if I try to download it again. +Any help on how to get the sknw package to work in jupyter notebook would be very helpful!",Check which environment your pip is installing into; it may not be the environment your jupyter notebook runs in.,1.2,True,1,6174 +2019-07-08 16:03:06.770,What is the best approach to scrape a big website?,"Hello, I am developing a web scraper and I am using it on a particular website; this website has a lot of URLs, maybe more than 1,000,000, and for scraping and getting the information I have the following architecture. +One set to store the visited sites and another set to store the non-visited sites. +For scraping the website I am using multithreading with a limit of 2000 threads. +This architecture has a problem with memory size and can never finish, because the program exceeds the available memory with the URLs. +Before putting a URL in the non-visited set, I first check whether this site is in the visited set; if the site was visited then I never store it in the non-visited set. +For doing this I am using python. I think that maybe a better approach would be storing all sites in a database, but I fear that this could be slow. +I can fix part of the problem by storing the set of visited URLs in a database like SQLite, but the problem is that the set of non-visited URLs is too big and exceeds all memory. +Any idea about how to improve this, with another tool, language, architecture, etc...? +Thanks","To start, I have never crawled pages using Python. My preferred language is C#, but Python should be as good, or better. +Ok, the first thing you detected is quite important. Just operating in memory will NOT work. Implementing a way to work on your hard drive is important. If you just want to work in memory, think about the size of the pages. +In my opinion, you already got the best (or a good) architecture for web scraping/crawling. You need some kind of list which represents the urls you already visited, and another list in which you can store the new urls you found. Just two lists is the simplest way you could go.
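+To make that concrete, a minimal sketch of the two-list idea in Python (fetch and extract_links are made-up placeholders for your own download and parse code):
+import collections
+
+visited = set()                  # urls already crawled
+frontier = collections.deque()   # urls still to crawl
+frontier.append('http://www.example.com/')
+
+while frontier:
+    url = frontier.popleft()
+    if url in visited:
+        continue
+    visited.add(url)
+    for link in extract_links(fetch(url)):   # placeholders, not real functions
+        if link not in visited:
+            frontier.append(link)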
Keep in mind, though, that just two lists means you are not implementing any kind of crawl strategy. If you are not looking for something like that, ok. But think about it, because a strategy could optimize the usage of memory. Therefore you should look into something like a depth-first or breadth-first crawl, or a recursive crawl, representing each branch as its own list, or as a dimension of an array. +Further, what is the problem with storing your non-visited urls in a database too? Each thread only needs one url at a time anyway. If your problem with putting them in a db is that it could take some time sweeping through it, then you should think about using multiple tables, one for each part of the url: +www.example.com/ +www.example.com/contact/ +www.example.com/download/ +www.example.com/content/ +www.example.com/support/ +www.example.com/news/ +So if your url is ""www.example.com/download/sweetcats/"", then you should put it in the table for www.example.com/download/. +When you have a set of urls, you first have to look for the correct table; afterwards you can sweep through that table. +And at the end, I have just one question. Why are you not using a library or a framework which already supports these features? I think there should be something available for python.",1.2,True,2,6175 +2019-07-08 16:03:06.770,What is the best approach to scrape a big website?,"Hello, I am developing a web scraper and I am using it on a particular website; this website has a lot of URLs, maybe more than 1,000,000, and for scraping and getting the information I have the following architecture. +One set to store the visited sites and another set to store the non-visited sites. +For scraping the website I am using multithreading with a limit of 2000 threads. +This architecture has a problem with memory size and can never finish, because the program exceeds the available memory with the URLs. +Before putting a URL in the non-visited set, I first check whether this site is in the visited set; if the site was visited then I never store it in the non-visited set. +For doing this I am using python. I think that maybe a better approach would be storing all sites in a database, but I fear that this could be slow. +I can fix part of the problem by storing the set of visited URLs in a database like SQLite, but the problem is that the set of non-visited URLs is too big and exceeds all memory. +Any idea about how to improve this, with another tool, language, architecture, etc...? +Thanks","2000 threads is too many. Even 1 may be too many. Your scraper will probably be thought of as a DOS (Denial Of Service) attack and your IP address will be blocked. +Even if you are allowed in, 2000 is too many threads. You will bottleneck somewhere, and that chokepoint will probably lead to going slower than you could if you had some sane threading. Suggest trying 10. One way to look at it -- each thread will flip-flop between fetching a URL (network intensive) and processing it (cpu intensive). So, 2 times the number of CPUs is another likely limit. +You need a database under the covers. This will let you stop and restart the process. More importantly, it will let you fix bugs and release a new crawler without necessarily throwing away all the scraped info. +The database will not be the slow part. The main steps: + +Pick a page to go for (and lock it in the database to avoid redundancy).
+Fetch the page (this is perhaps the slowest part) +Parse the page (or this could be the slowest) +Store the results in the database +Repeat until no further pages -- which may be never, since the pages will be changing out from under you. + +(I did this many years ago. I had a tiny 0.5GB machine. I quit after about a million analyzed pages. There were still about a million pages waiting to be scanned. And, yes, I was accused of a DOS attack.)",0.2012947653214861,False,2,6175 +2019-07-08 17:12:59.247,How to set Referrer in driver selenium python?,"I need to scrape a web page, but the problem is when i click on the link on website, it works fine, but when i go through the link manually by typing url in browser, it gives Access Denied error, so may be they are validating referrer on their end, Can you please tell me how can i sort this issue out using selenium in python ? +or any idea that can solve this issue? i am unable to scrape the page because its giving Access Denied error. +PS. i am working with python3 +Waiting for help. +Thanks","I solved myself by using seleniumwire ;) selenium doesn't support headers, but seleniumwire supports, so that solved my issue. +Thanks",0.0,False,1,6176 +2019-07-08 23:22:52.727,"While query data (web scraping) from a website with Python, how to avoid being blocked by the server?","I was trying to using python requests and mechanize to gather information from a website. This process needs me to post some information then get the results from that website. I automate this process using for loop in Python. However, after ~500 queries, I was told that I am blocked due to high query rate. It takes about 1 sec to do each query. I was using some software online where they query multiple data without problems. Could anyone help me how to avoid this issue? Thanks! +No idea how to solve this. +--- I am looping this process (by auto changing case number) and export data to csv.... +After some queries, I was told that my IP was blocked.","Optimum randomized delay time between requests. +Randomized real user-agents for +each request. +Enabling cookies. +Using a working proxy pool and +selecting a random proxy for each request.",0.0,False,1,6177 +2019-07-09 21:53:09.597,What is the purpose of concrete methods in abstract classes in Python?,"I feel like this subject is touched in some other questions but it doesn't get into Python (3.7) specifically, which is the language I'm most familiar with. +I'm starting to get the hang of abstract classes and how to use them as blueprints for subclasses I'm creating. +What I don't understand though, is the purpose of concrete methods in abstract classes. +If I'm never going to instantiate my parent abstract class, why would a concrete method be needed at all, shouldn't I just stick with abstract methods to guide the creation of my subclasses and explicit the expected behavior? +Thanks.","This question is not Python specific, but general object oriented. +There may be cases in which all your sub-classes need a certain method with a common behavior. It would be tedious to implement the same method in all your sub-classes. If you instead implement the method in the parent class, all your sub-classes inherit this method automatically. Even callers may call the method on your sub-class, although it is implemented in the parent class. 
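+As a quick illustration, here is a sketch using Python's abc module (the class names are invented):
+import abc
+
+class Exporter(abc.ABC):
+    @abc.abstractmethod
+    def render(self):
+        ...                      # every sub-class must implement this
+
+    def save(self, path):        # concrete method shared by all sub-classes
+        with open(path, 'w') as f:
+            f.write(self.render())
+
+class JsonExporter(Exporter):
+    def render(self):
+        return '{}'
+
+JsonExporter().save('out.json')  # save() is inherited from the abstract parent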
This is one of the basic mechanics of class inheritance.",0.3869120172231254,False,1,6178 +2019-07-10 09:24:32.763,Building Kivy Android app with Tensorflow,"Recently I wanted to deploy a deep learning model (Tensorflow) on mobile (Android/iOS), and I found that Kivy Python is a good choice for writing cross-platform apps. (I am not familiar with Java Android.) +But I don't know how to integrate the Tensorflow libs when building the .apk file. +The guide for writing a ""buildozer recipe"" is quite complicated for this case. +Is there any solution for this problem without using native Java Android and Tensorflow Lite?","Fortunately I found someone facing the same issue as I am, but unfortunately it turns out that Kivy can't compile the Tensorflow library yet. In other words, it is not supported yet. I don't know when they will update the features.",0.0,False,1,6179 +2019-07-10 11:12:13.937,how do I produce unique random numbers as an array in Python?,"I have an array with size (4,4) that can have values 0 and 1, so I can have 65536 different arrays. I need to produce all these arrays without repetition. I use wt_random=np.random.randint(2, size=(65536,4,4)) but I am worried they are not unique. Could you please tell me whether this code is correct or not, and what I should do to produce all possible arrays? Thank you.","If you need all possible arrays in random order, consider enumerating them in any arbitrary deterministic order and then shuffling them to randomize the order. If you don't want all arrays in memory, you could write a function to generate the array at a given position in the deterministic list, then shuffle the positions. Note that Fisher-Yates may not even need a dense representation of the list to shuffle... if you keep track of where the already shuffled entries end up you should have enough.",0.0,False,1,6180 +2019-07-10 11:43:17.540,How to add a column of seconds to a column of times in python?,"I have a file containing some columns, of which the second column is time, like what I show below. I need to add a column of times which are all in seconds, like this: ""2.13266 2.21784 2.20719 2.02499 2.16543"", to the time column in the first file (below). My question is how to add these two times to each other. And maybe in some cases when I add these times, it goes to the next day; in this case, how do I change the date in the related row? +2014-08-26 19:49:32 0 +2014-08-28 05:43:21 0 +2014-08-30 11:47:54 0 +2014-08-30 03:26:10 0","Probably the easiest way is to read your file into a pandas data-frame and parse each row as a datetime object. Then you create a datetime.timedelta object passing the fractional seconds. +A datetime object + a timedelta handles wrapping around for days quite nicely, so this should work without any additional code. Finally, write back your updated dataframe to a file.",0.0,False,2,6181 +2019-07-10 11:43:17.540,How to add a column of seconds to a column of times in python?,"I have a file containing some columns, of which the second column is time, like what I show below. I need to add a column of times which are all in seconds, like this: ""2.13266 2.21784 2.20719 2.02499 2.16543"", to the time column in the first file (below). My question is how to add these two times to each other. And maybe in some cases when I add these times, it goes to the next day; in this case, how do I change the date in the related row? +2014-08-26 19:49:32 0 +2014-08-28 05:43:21 0 +2014-08-30 11:47:54 0 +2014-08-30 03:26:10 0","Ok.
Finally it is done via this code: +import pandas as pd +d = 2.13266  # seconds to add (keep the fraction; int(d) would drop it) +dd = pd.to_timedelta(d, unit='s') +ts = pd.Timestamp('2014-08-26 19:49:32') +new = ts + dd  # rolls over to the next day automatically when needed",0.0,False,2,6181 +2019-07-10 15:34:08.677,how to average in a specific dimension with numpy.mean?,"I have a matrix called POS which has the form (10,132), and I need to average over the first dimension (the 10 rows) in such a way that my averaged matrix has the form (1,132). +I have tried doing +means = pos.mean(axis = 1) +or +means = np.mean(pos) +but the result in the first case is a matrix of (10,) and in the second it is a single number. +I expect the output to be a matrix of shape (1,132).","The solution is to specify the correct axis and use keepdims=True, which is noted by several commenters (if you add your answer I will delete mine). +This can be done with either pos.mean(axis=0, keepdims=True) or np.mean(pos, axis=0, keepdims=True).",1.2,True,1,6182 +2019-07-10 20:56:07.317,Delete empty directory from Jupyter notebook error,"I am trying to delete an empty directory in Jupyter notebook. +When I select the folder and click Delete, an error message pops up saying: +'A directory must be empty before being deleted.' +There are no files or folders in the directory and it is empty. +Any advice on how to delete it? +Thank you!","Usually, Jupyter itself creates a hidden .ipynb_checkpoints folder within the directory when you inspect it. You can check its existence (or that of any other hidden files/folders) in the directory using ls -a in a terminal whose current working directory is the corresponding folder.",1.2,True,2,6183 +2019-07-10 20:56:07.317,Delete empty directory from Jupyter notebook error,"I am trying to delete an empty directory in Jupyter notebook. +When I select the folder and click Delete, an error message pops up saying: +'A directory must be empty before being deleted.' +There are no files or folders in the directory and it is empty. +Any advice on how to delete it? +Thank you!","Go to your local directory where Jupyter stores the workbench files, e.g. (C:\Users\prasadsarada). +You can see all the folders you have created in Jupyter Notebook. Delete it there.",0.0,False,2,6183 +2019-07-10 21:47:53.280,TCP Socket on Server Side Using Python with select on Windows,"While trying to find an optimization for my server in Python, I stumbled on a concept called select. Trying to find any usable code, no matter where I looked, Windows compatibility on this subject is hard to find. +Any ideas how to program a TCP server with select on Windows? I know about the idea of making the sockets non-blocking to maintain compatibility with it. Any suggestions will be welcomed.","Using select() under Windows is 99% the same as it is under other OS's, with some minor variations. The minor variations (at least the ones I know about) are: + +Under Windows, select() only works for real network sockets. In particular, don't bother trying to select() on stdin under Windows, as it won't work. +Under Windows, if you attempt a non-blocking TCP connection and the TCP connection fails asynchronously, you will get a notification of that failure via the third (""exception"") fd_set only. (Under other OS's you will get notified that the failed-to-connect TCP-socket is ready-for-read/write also) +Under Windows, select() will fail if you don't pass in at least one valid socket to it (so you can't use select([], [], [], timeoutInSeconds) as an alternative to time.sleep() like you can under some other OS's) + +Other than that select() for Windows is like select() for any other OS.
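To make that concrete, a minimal non-blocking echo server sketch that runs unchanged on Windows (host and port are arbitrary choices for the example):
import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setblocking(False)
server.bind(('127.0.0.1', 9000))
server.listen()
sockets = [server]
while True:
    readable, _, _ = select.select(sockets, [], [], 1.0)  # 1 s timeout
    for s in readable:
        if s is server:
            conn, _ = s.accept()  # new client connection
            conn.setblocking(False)
            sockets.append(conn)
        else:
            data = s.recv(4096)
            if data:
                s.sendall(data)  # echo the bytes back
            else:
                sockets.remove(s)  # client closed the connection
                s.close()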
(If your real question is about how to use select() in general, you can find information about that with a web search.)",0.3869120172231254,False,1,6184 +2019-07-11 11:48:58.680,Tensorflow 2.0: Accessing a batch's tensors from a callback,"I'm using Tensorflow 2.0 and trying to write a tf.keras.callbacks.Callback that reads both the inputs and outputs of my model for the batch. +I expected to be able to override on_batch_end and access model.inputs and model.outputs, but they are not EagerTensors with a value that I could access. Is there any way to access the actual tensor values that were involved in a batch? +This has many practical uses, such as outputting these tensors to Tensorboard for debugging, or serializing them for other purposes. I am aware that I could just run the whole model again using model.predict, but that would force me to run every input twice through the network (and I might also have a non-deterministic data generator). Any idea on how to achieve this?","No, there is no way to access the actual values for input and output in a callback. That's just not part of the design goal of callbacks. Callbacks only have access to the model, the args to fit, the epoch number and some metric values. As you found, model.input and model.output only point to the symbolic KerasTensors, not actual values. +To do what you want, you could take the input, stack it (maybe with RaggedTensor) with the output you care about, and then make it an extra output of your model. Then implement your functionality as a custom metric that only reads y_pred. Inside your metric, unstack the y_pred to get the input and output, and then visualize / serialize / etc. +Another way might be to implement a custom Layer that uses py_function to call a function back in python. This will be super slow during serious training but may be enough for use during diagnostics / debugging.",0.3869120172231254,False,1,6185 +2019-07-11 13:34:12.703,Drone control by python,"I am new to drones; can you please explain one thing: +Is it possible to have an RC controller programmed in Python? +As I understand, using a telemetry module and DroneKit, it is possible to control the drone using Python. +But usually drones supporting a telemetry module are custom drones, and as I understand, the telemetry module does not work as well as RC. +So, for a cheaper price, can someone suggest a solution for how to control an RC drone using Python?",You can use Tello drones. These drones can be programmed as per your requirements using Python.,0.0,False,1,6186 +2019-07-11 17:14:00.880,Is there a way of deleting specific text for the user in Python?,"I am making a program in Python and want to clear what the user has entered. This is because I am using the keyboard function to register input as it is given, but there is still text left over after a keypress is registered and I don't want this to happen. +I was wondering if there is a module that exists to remove text that is being entered. +Any help would be greatly appreciated, and just the name of a module is fine; I can figure out how to use it, I just can't find an appropriate module. +EDIT: +Sorry if I did not make myself clear. I don't really want to clear the whole screen, just what the user has typed.
So that they don't have to manually backspace after their input has been taken.",'sys.stdout.write' is the module I was looking for.,0.0,False,1,6187 +2019-07-13 01:06:02.247,Quickest way to insert zeros into numpy array,"I have a numpy array ids = np.array([1,1,1,1,2,2,2,3,4,4]) +and another array of equal length vals = np.array([1,2,3,4,5,6,7,8,9,10]) +Note: the ids array is sorted in ascending order +I would like to insert 4 zeros before the beginning of each new id - i.e. +new array = np.array([0,0,0,0,1,2,3,4,0,0,0,0,5,6,7,0,0,0,0,8,0,0,0,0,9,10]) +The only way I am able to produce this is by iterating through the array, which is very slow - and I am not quite sure how to do this using insert, pad, or expand_dim ...","You can use np.zeros and append it to your existing array, like +newid=np.append(np.zeros((4,), dtype=int),ids) +Good luck!",0.0,False,1,6188 +2019-07-13 23:37:40.520,How can I put my curvilinear coordinate data on a map projection?,"I'm working with NetCDF files from NCAR and I'm trying to plot sea-ice thickness. This variable is on a curvilinear (TLAT,TLON) grid. What is the best way to plot this data on a map projection? Do I need to re-grid it to a regular grid or is there a way to plot it directly? I'm fairly new to Python so any help would be appreciated. Please let me know if you need any more information. Thank you! +I've tried libraries like iris, scipy, and basemap, but I couldn't really get a clear explanation on how to implement them for my case.","I am pretty sure you can already use methods like contour, contourf, pcolormesh from Python's matplotlib without re-gridding the data. The same methods work for Basemap.",1.2,True,1,6189 +2019-07-15 07:29:41.747,how to use natural language generation from a csv file input. Which python module should we use? Can anyone share a sample tutorial?,"Take input as a csv file and generate text/sentences using NLG. I have tried pynlg and Markov chains, but nothing worked. What else can I use?","There are not many Python libraries for NLG! Try out nlglib, a Python wrapper around SimpleNLG. For tutorial purposes, you could read Building Natural Language Generation Systems by E. Reiter.",-0.3869120172231254,False,1,6190 +2019-07-15 10:55:05.307,[How to run code by using cmd from sublime text 3 ],"I am a newbie in Python and have a problem. When I code Python using Sublime Text 3 and run it directly there, it does not find some Python libraries which I already imported. I Googled this problem and found out Sublime Text is just a text editor. +I already have code in a Sublime Text 3 file; how can I run it without this error? +For example: + +'ModuleNotFoundError: No module named 'matplotlib'. + +I think it should be run by cmd but I don't know how.","Depending on what OS you are using, this is easy. On Windows you can press win + r, then type cmd. This will open up a command prompt. Then, type in pip install matplotlib. This will make sure that your module is installed. Then, navigate to the folder in which your code is located. You can do this by typing in cd Documents if you first need to get to your documents, and then cd for each subsequent folder. +Then, try typing in python and hitting enter. If a python shell opens up, then type quit() and then type python filename.py and it will run. +If no python shell opens up, then you need to change your environment variables. Press the windows key and pause break at the same time, then click on Advanced system settings. Then press Environment Variables. Then double click on Path. Then press New.
Then locate the installation folder of your Python install, which may be in C:\Users\YOURUSERNAME\AppData\Local\Programs\Python\Python36. Now put in the path and press OK. You should now be able to run python from your command line.",1.2,True,1,6191 +2019-07-15 11:09:48.323,how can I use gpiozero robot library to change speeds of motors via L298N,"On my Raspberry Pi, I need to run two motors with an L298N. +I can use PWM on the enable pins to change speeds, but I saw that the gpiozero robot library can make things a lot easier. +When using the gpiozero robot library, how can I alter the speeds of those motors by giving a signal to the enable pins?","I have exactly the same situation. You can of course program the motors separately, but it is nice to use the robot class. +Looking into the gpiozero code for this, I find that in our case the left and right tuples have a third parameter, which is the pin for PWM motor speed control. (GPIO pins 12, 13, 18 and 19 have hardware PWM support.) The first two output pins in the tuple are to be signalled as 1, 0 for forward and 0, 1 for back. +So here is my line of code: + Initio = Robot(left=(4, 5, 12), right=(17, 18, 13)) +Hope it works for you! +I have some interesting code on the stocks for controlling the robot's absolute position, so it can explore its environment.",0.2012947653214861,False,1,6192 +2019-07-15 20:45:58.190,Does Kivy have laserjet printer support?,"Is there a way to print a Page/Widget/Label in Kivy (or some other way in Python)? +Unfortunately, I don't know how to ask the question correctly since I am new to software development. +I want to build a price tracking app for my business in which I will have to print some stuff.","Not directly, no, but the printing part isn't really Kivy's responsibility - probably you can find another Python module to handle this. +In terms of what is printed, you can export an image of any part of the Kivy gui and print that.",0.6730655149877884,False,1,6193 +2019-07-16 11:12:04.277,How to access python dictionary from C?,"I have a dictionary in Python and I need to access that dictionary from a C program, or, for example, convert this dictionary into a struct map in C. +I don't have any idea how this could be done. +I will be happy to get some hints regarding that, or if there are any libraries that could help. +Update: +the dictionary is generated from the abstract syntax tree of a C program by using pycparser. +So, I wrote a Python function to generate this dictionary and I can dump it using pickle or save it as a text file. +Now I want to use the keys and their values from a C program and I don't know how to access that dictionary.",You could export the dictionary to JSON and parse the JSON file from C...,0.3869120172231254,False,1,6194 +2019-07-16 12:46:43.023,Flask non-template HTML files included by Jinja,"In my Flask application, I have one HTML file that holds some HTML and some JS that semantically belong together and cannot be used separately in a sensible way. I include this file in 2 of my HTML templates by using Jinja's {%include ... %}. +Now my first approach was to put this file in my templates folder. However, I never call render_template on this file, so it seems unapt to store it in that directory. +Another approach would be to put it into the static folder, since its content is indeed static. But then I don't know how to tell Jinja to look for it in a different directory, since all the files using Jinja are in the templates folder.
+Is there a way to accomplish this with Jinja, or is there a better approach altogether?","You're over-thinking this. If it's included by Jinja, then it's a template file and belongs in the templates directory.",1.2,True,1,6195 +2019-07-16 13:11:16.803,"Keras, Tensorflow are reserving all GPU memory on model build","My GPU is an NVIDIA RTX 2080 TI. +Keras 2.2.4 +Tensorflow-gpu 1.12.0 +CUDA 10.0 +Once I build a model (before compilation), I find that the GPU memory is fully allocated: +[0] GeForce RTX 2080 Ti | 50'C, 15 % | 10759 / 10989 MB | issd/8067(10749M) +What could be the reason, and how can I debug it? +I don't have spare memory to load the data, even if I load it via generators. +I have tried to monitor the GPU memory usage and found out it is full just after building the layers (before compiling the model).","I met a similar problem when I loaded a pre-trained ResNet50. The GPU memory usage just surged to 11GB, while ResNet50 usually only consumes less than 150MB. +The problem in my case was that I also imported PyTorch without actually using it in my code. After commenting it out, everything worked fine. +But I have another PC with the same code that works just fine. So I uninstalled and reinstalled Tensorflow and PyTorch with the correct versions. Then everything works fine even if I import PyTorch.",0.0,False,1,6196 +2019-07-16 13:41:47.907,how is every object related to pyObject when c does not have Inheritance,"I have been going through the source code of Python. It looks like every object is derived from PyObject. But, in C, there is no concept of object-oriented programming. So, how exactly is this implemented without inheritance?","What makes the Object Oriented programming paradigm is the relation between ""classes"" as templates for a data set and the functions that will operate on this data set, and the inheritance mechanism, which is a relation from a class to ancestor classes. +These relations, however, do not depend on a particular language syntax - just on them being present in any way. +So, nothing stops one from doing ""object orientation"" in C, and in fact, organized libraries, even without an OO framework, end up with an organization related to OO. +It happens that the Python object system is entirely defined in pure C, with objects having a __class__ slot that points to its class with a C pointer - only when ""viewed"" from Python is the full representation of the class presented. Classes in their turn have __mro__ and __bases__ slots that point to the different arrangements of superclasses (the pointers this time are to containers that will be seen from Python as sequences). +So, when coding in C using the definitions and API of the Python runtime, one can use OOP just in the same way as coding in Python - and in fact use Python objects that are interoperable with the Python language. (The cython project will even transpile a superset of the Python language to C and provide transparent ways of writing native code with Python syntax) +There are other frameworks available to C that provide different OOP systems, that are equally conformant, for example, glib - which defines ""gobject"" and is the base for all GTK+ and GNOME applications.",0.3869120172231254,False,1,6197 +2019-07-16 15:52:02.673,How can I verify if there is an incoming message to my node with MPI?,"I'm doing a project using Python with MPI. Every node of my project needs to know if there is any incoming message for it before continuing the execution of other tasks. +I'm working on a system where multiple nodes execute some operations.
Some nodes may need the outputs of other nodes and therefore need to know if that output is available. +For illustration purposes, let's consider two nodes, A and B. A needs the output of B to execute its task, but if the output is not available, A needs to do some other tasks and then verify again whether B has sent its output, in a loop. What I want to do is this verification, in A, of the availability of output from B. +I did some research and found something about a method called probe, but I didn't understand it, nor did I find useful documentation about what it does or how to use it. So, I don't know if it solves my problem. +The idea of what I want is very simple: I just need to check if there is data to be received when I use the ""recv"" method of mpi4py. If there is something, the code does some tasks; if there isn't, the code does some other tasks.","(elaborating on Gilles Gouaillardet's comment) +If you know you will eventually receive a message, but want to be able to run some computations while it is being prepared and sent, you want to use non-blocking receives, not probe. +Basically use MPI_Irecv to set up a receive request as soon as possible. If you want to know whether the message is ready yet, use MPI_Test to check the request. +This is much better than using probes, because you ensure that a receive buffer is ready as early as possible and the sender is not blocked waiting for the receiver to see that there is a message and post the receive. +For the specific implementation you will have to consult the manual of the Python MPI wrapper you use. You might also find helpful information in the MPI standard itself.",1.2,True,1,6198 +2019-07-17 14:40:06.277,Getting the Raw Data Out of an Excel Pivot Table in Python,"I have a pivot table in Excel and I want to read the raw data from that table into Python. Is it possible to do this? I do not see anything about it in the documentation or on Stack Overflow. +If the community could be provided some examples on how to read the raw data that drives pivot tables, this could greatly assist in routine analytical tasks. +EDIT: +In this scenario there are no raw data tabs. I want to know how to ping the pivot table, get the raw data, and read it into Python.","First, recreate the raw data from the pivot table. The pivot table has full information to rebuild the raw data. + +Make sure that none of the items in the pivot table fields are hidden -- clear all the filters and Slicers that have been applied. +The pivot table does not need to contain all the fields -- just make sure that there is at least one field in the Values area. +Show the grand totals for rows and columns. If the totals aren't visible, select a cell in the pivot table, and on the Ribbon, under PivotTable Tools, click the Analyze tab. In the Layout group, click Grand Totals, then click On for Rows and Columns. +Double-click the grand total cell at the bottom right of the pivot table. This should create a new sheet with the related records from the original source data. + +Then, you could read the raw data from the source.",0.0,False,1,6199 +2019-07-18 11:24:22.350,how to add custom Keras model in OpenCv in python,"I have created a model for classification of two types of shoes. +Now how can I deploy it in OpenCV (video object detection)?? +Thanks in advance","You would save the model to an H5 file with model.save(""modelname.h5""), then load it in your OpenCV code with load_model(""modelname.h5"").
Then, in a loop, detect the objects you find via model.predict(ImageROI).",0.0,False,1,6200 +2019-07-18 17:58:17.687,How to remove a virtualenv which is created by PyCharm?,"Since I selected Pipenv as my project's interpreter during project creation, PyCharm automatically created the virtualenv. Now, when I try to remove the virtualenv via pipenv --rm, I get the error You are attempting to remove a virtualenv that Pipenv did not create. Aborting. So, how can I properly remove this virtualenv?","The command ""pipenv"" actually comes from within the virtualenv, so it can't remove itself. You should close the project and remove it without the virtualenv activated.",0.9950547536867304,False,1,6201 +2019-07-18 22:58:53.810,How to deal with infrequent data in a time series prediction model,"I am trying to create a basic model for stock price prediction, and some of the features I want to include come from the company's quarterly earnings report (every 3 months). So, for example, if my data features are Date, OpenPrice, ClosePrice, Volume, LastQrtrRevenue, how do I include LastQrtrRevenue if I only have a value for it every 3 months? Do I leave the other days blank (or null), or should I just include a constant LastQrtrRevenue and update it on the day the new figures are released? Please, if anyone has any feedback on dealing with data that is released infrequently but is important to include, please share.... Thank you in advance.","I would be tempted to put the last quarter revenue in a separate table, with a date field representing when that quarter began (or ended, it doesn't really matter). Then you can write queries to work the way that most suits your application. You could certainly reconstitute the view you mention above using that table, as long as you can relate it to the main table. +You would just need to join the main table by company name, while selecting the max() of the last quarter revenue table.",0.3869120172231254,False,1,6202 +2019-07-19 11:26:59.093,Compare list items in pythonic way,"For a list, say l = [1, 2, 3, 4], how do I compare l[0] < l[1] < l[2] < l[3] in a pythonic way?","Another way would be to use the .sort() method, in which case you'd have to return a new list altogether.",0.0,False,1,6203 +2019-07-19 18:13:40.823,How does log in spark stage/tasks help in understanding actual spark transformation it corresponds to,"Often, while debugging failed Spark jobs, we can find the stage and task responsible for the failure, such as a String Index Out of Bounds exception, but it is difficult to understand which transformation is responsible for the failure. The UI shows information such as Exchange/HashAggregate/Aggregate, but finding the actual transformation responsible for this failure becomes really difficult in 500+ lines of code. So how is it possible to debug Spark task failures and trace the transformation responsible for them?","Break your execution down. It's the easiest way to understand where the error might be coming from. Running 500+ lines of code for the first time is never a good idea. You want to have the intermediate results while you are working with it. Another way is to use an IDE and walk through the code. This can help you understand where the error originated from.
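A sketch of that 'break it down' idea in PySpark (the input file and column names are invented for the example): force an action after each transformation so the failing step identifies itself:
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.json('events.json')  # hypothetical input file

step1 = df.filter(F.col('status') == 'ok')
print(step1.count())  # action: if the job fails here, the filter is to blame

step2 = step1.select(F.substring('name', 1, 3).alias('prefix'))
print(step2.count())  # fails here instead? the substring step is to blame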
I prefer PyCharm (Community Edition is free), but VS Code might be a good alternative too.",0.0,False,1,6204 +2019-07-20 16:40:06.663,How to host a Python script on the cloud?,"I wrote a Python script which scrapes a website and sends emails if a certain condition is met. It repeats itself every day in a loop. +I converted the Python file to an EXE and it runs as an application on my computer. But I don't think this is the best solution for my needs, since my computer isn't always on and connected to the internet. +Is there a specific website I can host my Python code on which will allow it to always run? +More generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake? +Thanks in advance!","You can deploy your application using AWS Elastic Beanstalk. It will provide you with the whole Python environment along with server configuration, which can be changed according to your needs. It's a PaaS offering from the AWS cloud.",0.2655860252697744,False,2,6205 +2019-07-20 16:40:06.663,How to host a Python script on the cloud?,"I wrote a Python script which scrapes a website and sends emails if a certain condition is met. It repeats itself every day in a loop. +I converted the Python file to an EXE and it runs as an application on my computer. But I don't think this is the best solution for my needs, since my computer isn't always on and connected to the internet. +Is there a specific website I can host my Python code on which will allow it to always run? +More generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake? +Thanks in advance!","Well, I think one of the best options is pythonanywhere.com. There you can upload your Python script (script.py), run it, and you're done. I did this with my Telegram bot.",0.3869120172231254,False,2,6205 +2019-07-20 19:16:43.307,Using Python Libraries or Codes in My Java Application,"I'm trying to build an OCR desktop application using Java and, to do this, I have to use libraries and functions that were created using the Python programming language, so I want to figure out: how can I use those libraries inside my Java application? +I have already seen Jython, but it is only useful for cases when you want to run Java code in Python; what I want is the other way around (using Python code in Java applications).","I have worked on projects where Python was used for ML (machine learning) tasks and everything else was written in Java. +We separated the execution environments entirely. Instead of mixing Python and Java in some esoteric way, you create independent services (one for Python, one for Java), and then handle inter-process communication via HTTP or messaging or some other mechanism. ""Microservices"", if you will.",1.2,True,1,6206 +2019-07-21 23:11:36.943,Spotfire - Dynamically creating buttons using,"Hello, I am creating a Spotfire dashboard which I would like to be reusable for each year. +Currently my layout is designed as a page with 8 buttons containing the names of stores; if clicked on, Spotfire applies a filter so that only information relating to that store shows.
(These were individually created manually.) +Is there a way to automate this with JS or IronPython, so that for each store a button is automatically created, and the action control for each button applies that store's filter? +I have looked around but cannot find anything relating to dynamically creating buttons. I'm not asking for you to code this for me, but if someone can point me towards some resources or general logic on how this could be done, it would be much appreciated.","Why not just put a textarea on your page? Inside this textarea, you add a filter control that filters data the way you want ;) +With this you don't have the problem of creating elements dynamically, because it's impossible to create Spotfire controls dynamically.",0.1352210990936997,False,3,6207 +2019-07-21 23:11:36.943,Spotfire - Dynamically creating buttons using,"Hello, I am creating a Spotfire dashboard which I would like to be reusable for each year. +Currently my layout is designed as a page with 8 buttons containing the names of stores; if clicked on, Spotfire applies a filter so that only information relating to that store shows. (These were individually created manually.) +Is there a way to automate this with JS or IronPython, so that for each store a button is automatically created, and the action control for each button applies that store's filter? +I have looked around but cannot find anything relating to dynamically creating buttons. I'm not asking for you to code this for me, but if someone can point me towards some resources or general logic on how this could be done, it would be much appreciated.","I think txemsukr is right. This is not possible. To do it with JS or IP, the API would have to exist. Several of the elements you mentioned (action controls), you can't control with the API.",0.1352210990936997,False,3,6207 +2019-07-21 23:11:36.943,Spotfire - Dynamically creating buttons using,"Hello, I am creating a Spotfire dashboard which I would like to be reusable for each year. +Currently my layout is designed as a page with 8 buttons containing the names of stores; if clicked on, Spotfire applies a filter so that only information relating to that store shows. (These were individually created manually.) +Is there a way to automate this with JS or IronPython, so that for each store a button is automatically created, and the action control for each button applies that store's filter? +I have looked around but cannot find anything relating to dynamically creating buttons. I'm not asking for you to code this for me, but if someone can point me towards some resources or general logic on how this could be done, it would be much appreciated.","Instead of buttons, why not use a dropdown populated by the unique values in the ""store names"" column to set a document property, and have your data listing limit the data to [store_name] = ${store_name}?",0.2655860252697744,False,3,6207 +2019-07-22 05:27:22.927,Can I use GCP for training only but predict with my own AI machine?,"My laptop has a problem with training a big dataset, but not with predicting. Can I use Google Cloud Platform for training only, then export and download some sort of weights or model of that machine learning, so I can use it on my own laptop? And if so, how do I do it?","Decide if you want to use Tensorflow or Keras etc. Prepare scripts to train and save the model, and another script to use it for prediction. +It should be simple enough to use GCP for training and download the model to use on your machine.
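A minimal sketch of that train-save-download workflow with Keras (the toy model and random data are placeholders, not part of the answer):
import numpy as np
from tensorflow import keras

# --- on the GCP machine: train and save ---
model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
model.fit(np.random.rand(100, 4), np.random.rand(100, 1), epochs=2, verbose=0)
model.save('model.h5')  # download this single file to your laptop

# --- on your laptop: load and predict ---
restored = keras.models.load_model('model.h5')
print(restored.predict(np.random.rand(3, 4)))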
You can choose to use a high-end machine (lots of memory, cores, GPU) on GCP. Training in distributed mode may be more complex. Then download the model and use it on your local machine. +If you run into issues, post your scripts and ask another question.",0.0,False,1,6208 +2019-07-23 06:15:00.117,Define dynamic environment path variables for different system configuration,"I'm not sure if this is a valid question, but I'm stuck doing this. +I have a Python script which does some operation on my local system: Users/12345/Desktop/Sample/one.py +I would like the same to be run on a remote server whose path is Server/Users/23552/Dir/ASR/Desktop/Sample/one.py +I know how to do this in PHP by defining a path like APP_HOME, but I'm baffled about how to do it in Python. +Can someone please help me?","You can always use the relative path; I guess a relative path should solve your issue.",0.0,False,1,6209 +2019-07-23 06:31:38.687,how to get best fit line when we have data on vertical line?,"I started learning linear regression and I was solving this problem. When I draw a scatter plot between the independent variable and the dependent variable, I get vertical lines. I have 0.5M sample data points. X-axis data is given within a range of, let's say, 0-20. In this case I am getting multiple target values for the same x-axis point, hence it draws a vertical line. +My question is: is there any way I can transform the data in such a way that it doesn't form vertical lines and I can get my model working? There are 5-6 such independent variables that draw the same pattern. +Thanks in advance.","Instead of fitting y as a function of x, in this case you should fit x as a function of y.",0.0,False,1,6210 +2019-07-23 23:56:46.253,How to best (most efficiently) read the first sheet in Excel file into Pandas Dataframe?,"Loading the Excel file using read_excel takes quite long. Each Excel file has several sheets. The first sheet is pretty small and is the sheet I'm interested in, but the other sheets are quite large and have graphs in them. Generally this wouldn't be a problem if it was one file, but I need to do this for potentially thousands of files and pick and combine the necessary data together to analyze. If somebody knows a way to efficiently load in the file directly, or somehow quickly make a copy of the Excel data as text, that would be helpful!","The method read_excel() reads the data into a Pandas DataFrame, where the first parameter is the filename and the second parameter is the sheet. +df = pd.read_excel('File.xlsx', sheet_name='Sheet1')",-0.2012947653214861,False,1,6211 +2019-07-24 04:43:32.027,How can I combine 2 pivot ( Sale and Pos Order ) into 1 pivot view on my new module?,"I have a problem because I'm a new guy in Odoo 11. My task is to combine 2 pivots (Sales and Pos Order) into 1 pivot view of a new module that I create. How can I do this, step by step? Because I'm just a new guy.
Please help me, thanks in advance","You can use select queries for both models, and there is no need for a common field or relation; you can just use UNION ALL. For example: +SELECT po.id, pol.product_id  -- example columns; pick matching ones from both sides +FROM pos_order po +LEFT JOIN pos_order_line pol ON (pol.order_id = po.id) +UNION ALL +SELECT so.id, sol.product_id +FROM sale_order so +LEFT JOIN sale_order_line sol ON (sol.order_id = so.id) +Hope this will help you in this regard, and don't forget to define the fields you want to show on the pivot view.",0.0,False,1,6212 +2019-07-24 09:38:40.803,What is the difference between the _tkinter and tkinter modules?,"I am also trying to understand how to use Tkinter, so could you please explain the basics?","What is the difference between the _tkinter and tkinter modules? + +_tkinter is a C-based module that exposes an embedded tcl/tk interpreter. When you import it, and only it, you get access to this interpreter, but you do not get access to any of the tkinter classes. This module is not designed to be imported by python scripts. +tkinter provides python-based classes that use the embedded tcl/tk interpreter. This is the module that defines Tk, Button, Text, etc.",0.3869120172231254,False,1,6213 +2019-07-24 13:11:20.187,How to send a query or stored procedure execution request to a specific location/region of cosmosdb?,"I'm trying to multi-thread some tasks using Cosmos DB to optimize ETL time, and I can't find how, using the Python API (but I could do something in REST if required), to do the following: if I have a stored procedure to call twice for two partition keys, I could send it to two different regions (namely 'West Europe' and 'Central France'). +I defined those as PreferredLocations in the connection policy, but I don't know how to include in a query the instruction to route it to a specific location.","The only place you could specify that would be the options objects of the requests. However, there is nothing related to the regions. +What you can do is initialize multiple clients that have a different order in the preferred locations and then spread the load that way in different regions. +However, unless your apps are deployed on those different regions and the latency is less, there is no point in doing so, since Cosmos DB will be able to cope with all the requests in a single region as long as you have the RUs needed.",1.2,True,1,6214 +2019-07-25 19:40:43.613,Is it possible to start using Django's migration system after years of not using it?,"A project I recently joined, for various reasons, decided not to use Django's migration system and uses our own system (which is similar enough to Django's that we could possibly automate translations). +Primary Question +Is it possible to start using Django's migration system now? +More Granular Question(s) +Ideally, we'd like to find some way of saying ""all our tables and models are in sync (i.e. there is no need to create and apply any migrations); Django does not need to produce any migrations for any existing model, only for changes we make."" + + +Is it possible to do this? + +Is it simply a case of ""create the django migration table, generate migrations (necessary?), and manually update the migration table to say that they've all been run""? + +Where can I find more information on how to go about doing this? Are there any examples of people doing this in the past? + + +Regarding SO Question Rules +I didn't stop to think for very long about whether or not this is an ""acceptable"" question to ask on SO.
I assume that it isn't, due to the question not having a clear, objective set of criteria for a correct answer. However, I think that this problem is surely common enough that it could provide an extremely valuable resource for anyone in my shoes in the future. Please consider this before voting to remove.","I think you should probably be able to do manage.py makemigrations (you might need to use each app name the first time), which will create the migration files. You should then be able to do manage.py migrate --fake, which will mimic the migration run without actually impacting your tables. +From then on (for future changes), you would run makemigrations and migrate as normal.",0.3869120172231254,False,1,6215 +2019-07-25 19:50:15.017,Peewee incrementing an integer field without the use of primary key during migration,"I have a table I need to add columns to. One of them is a column that dictates business logic. So think of it as a ""priority"" column, and it has to be unique and an integer field. It cannot be the primary key, but it is unique for business logic purposes. +I've searched the docs but I can't find a way to add the column, add default values (say starting from 1), and auto-increment them without setting this as a primary key. +Thus, creating the field like +example_column = IntegerField(null=False, db_column='PriorityQueue',default=1) +will fail because of the unique constraint. I should also mention this is happening when I'm migrating the table (existing data will all receive a value of '1'). +So, is it possible to do the above somehow and get the column to auto-increment?","Depends on your database, but postgres uses sequences to handle this kind of thing. Peewee fields accept a sequence name as an initialization parameter, so you could pass it in that manner.",0.0,False,2,6216 +2019-07-25 19:50:15.017,Peewee incrementing an integer field without the use of primary key during migration,"I have a table I need to add columns to. One of them is a column that dictates business logic. So think of it as a ""priority"" column, and it has to be unique and an integer field. It cannot be the primary key, but it is unique for business logic purposes. +I've searched the docs but I can't find a way to add the column, add default values (say starting from 1), and auto-increment them without setting this as a primary key. +Thus, creating the field like +example_column = IntegerField(null=False, db_column='PriorityQueue',default=1) +will fail because of the unique constraint. I should also mention this is happening when I'm migrating the table (existing data will all receive a value of '1'). +So, is it possible to do the above somehow and get the column to auto-increment?","It should definitely be possible, especially outside of peewee. You can definitely make a counter that starts at 1 and increments up to the stop value at the interval of your choice with range(). You can then write each incremented value to the desired field in each row as you iterate through.",1.2,True,2,6216 +2019-07-26 08:32:45.950,How to check if a url is valid in Scrapy?,"I have a list of URLs and many of them are invalid. When I use scrapy to crawl, the engine automatically filters those URLs with a 404 status code, but some URLs' status codes aren't 404 and so they will be crawled. When I open one, it says something like there's nothing here, or the domain has been changed, etc. Can someone let me know how to filter these types of invalid URLs?","In your callback (e.g.
parse), implement checks that detect those cases of 200 responses that are not valid, and exit the callback right away (return) when you detect one of those responses.",0.0,False,1,6217 +2019-07-27 18:25:21.400,Unable to view django error pages on Google Cloud web app,"Settings.py DEBUG=True +But the Django web application shows Server Error 500. +I need to see the error pages to debug what is wrong on the production server. +The web application works fine on the development server offline. +The Google logs do not show detailed errors; they only show the HTTP code of the request.","Thank you all for replying to my question. The project had a prod.py (production settings file, DEBUG=False) and a dev.py (development settings file). When python manage.py is called, it directly calls dev.py (DEBUG=True). However, when I push to Google App Engine, main.py is used to specify how to run the application. main.py calls wsgi.py, which calls prod.py (DEBUG=False). This is why the Django error pages were not showing. I really appreciate you all. VictorTorres, Mahirq9 and ParthS007",0.0,False,1,6218 +2019-07-28 11:14:17.907,How to turn the image in the correct orientation?,"I have a paper on which there are scans of documents. I use tesseract to recognize the text, but sometimes the images are in the wrong orientation. Then I cut these documents from the sheet and work with each one individually, but I need to turn them to the correct position. How do I do that?","If all scans are in the same orientation on the paper, then you can always try rotating them in reverse if tesseract is having a problem reading them. If individual scans can be in arbitrary orientations, then you will have to use the same method on individual scans instead.",0.2012947653214861,False,2,6219 +2019-07-28 11:14:17.907,How to turn the image in the correct orientation?,"I have a paper on which there are scans of documents. I use tesseract to recognize the text, but sometimes the images are in the wrong orientation. Then I cut these documents from the sheet and work with each one individually, but I need to turn them to the correct position. How do I do that?","I'm not sure if there are simple ways, but you can rotate the document when you do not find adequate characters in it; if you see letters, then the document is in the correct orientation. +As I understand it, you use a parser, so the check can be very simple: if there are less than 5 keys, then the document is turned upside down incorrectly.",1.2,True,2,6219 +2019-07-29 14:50:34.170,Tensorflow Serving number of requests in queue,"I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? I tried using Prometheus, but there is no such option.","What's more, you can assign the number of threads with --rest_api_num_threads, or leave it empty and it will be automatically configured by TF Serving.",0.0,False,2,6220
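Since TF Serving exposes no queue-length metric, one client-side workaround (an assumption on my part, not from either answer) is to estimate load by timing REST calls; rising latency and 'unavailable' errors indicate saturation. The model name and payload below are placeholders:
import time
import requests

URL = 'http://localhost:8501/v1/models/my_model:predict'  # hypothetical model name
payload = {'instances': [[1.0, 2.0, 3.0]]}  # shape must match your model's input

t0 = time.time()
r = requests.post(URL, json=payload)
print(r.status_code, round((time.time() - t0) * 1000, 1), 'ms')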
+The only thing that tf serving would do is allocating a threads pool, when the server is initialized. +when a request coming , the tf serving will use a unused thread to deal with the request , if there are no free threads, the tf serving will return a unavailable error.and the client shoule retry again later. +you can find the these information in the comments of tensorflow_serving/batching/streaming_batch_schedulor.h",1.2,True,2,6220 +2019-07-31 10:20:06.377,speed up pandas search for a certain value not in the whole df,"I have a large pandas DataFrame consisting of some 100k rows and ~100 columns with different dtypes and arbitrary content. +I need to assert that it does not contain a certain value, let's say -1. +Using assert( not (any(test1.isin([-1]).sum()>0))) results in processing time of some seconds. +Any idea how to speed it up?","Just to make a full answer out of my comment: +With -1 not in test1.values you can check if -1 is in your DataFrame. +Regarding the performance, this still needs to check every single value, which is in your case +10^5*10^2 = 10^7. +You only save with this the performance cost for summation and an additional comparison of these results.",1.2,True,1,6221 +2019-07-31 13:05:46.203,Is it possible to write a Python web scraper that plays an mp3 whenever an element's text changes?,"Trying to figure out how to make python play mp3s whenever a tag's text changes on an Online Fantasy Draft Board (ClickyDraft). +I know how to scrape elements from a website with python & beautiful soup, and how to play mp3s. But how do you think can I have it detect when a certain element changes so it can play the appropriate mp3? +I was thinking of having the program scrape the site every 0.5seconds to detect the changes, +but I read that that could cause problems? Is there any way of doing this?","The only way is too scrape the site on a regular basis. 0.5s is too fast. I don't know how time sensitive this project is. But scraping every 1/5/10 minute is good enough. If you need it quicker, just get a proxy (plenty of free ones out there) and you can scrape the site more often. +Just try respecting the site, Don't consume too much of the sites ressources by requesting every 0.5 seconds",1.2,True,1,6222 +2019-07-31 15:27:06.880,How to point Django app to new DB without dropping the previous DB?,"I am working on Django app on branch A with appdb database in settings file. Now I need to work on another branch(B) which has some new DB changes(eg. new columns, etc). The easiest for me is to point branch B to a different DB by changing the settings.py and then apply the migrations. I did the migrations but I am getting error like 1146, Table 'appdb_b.django_site' doesn't exist. So how can I use a different DB for my branchB code without dropping database appdb?","The existing migration files have information that causes the migrate command to believe that the tables should exist and so it complains about them not existing. +You need to MOVE the migration files out of the migrations directory (everything except init.py) and then do a makemigrations and then migrate.",0.3869120172231254,False,1,6223 +2019-08-01 09:16:33.917,Stop music from playing in headless browser,"I was learning how to play music using selenium so I wrote a program which would be used as a module to play music. Unfortunately I exited the python shell without exiting the headless browser and now the song is continuously playing. 
Could someone tell me how I can find the current headless browser and exit it?",You need to include code in your script to stop the music before closing the session of your headless browser.,0.2012947653214861,False,2,6224 +2019-08-01 09:16:33.917,Stop music from playing in headless browser,"I was learning how to play music using selenium, so I wrote a program which would be used as a module to play music. Unfortunately I exited the python shell without exiting the headless browser, and now the song is continuously playing. +Could someone tell me how I can find the current headless browser and exit it?","If you are on a Linux box, you can easily find the process ID with the ps aux | grep chrome command and kill it. If you are on Windows, kill the process via Task Manager.",0.2012947653214861,False,2,6224 +2019-08-01 13:21:36.583,How to install a new python module on VSCode?,"I'm trying to install new Python modules on my computer and I know how to install them through the terminal, but I wish to know if there is a way to install a new module directly through VSCode (like it is possible in PyCharm)? +I already installed them through the terminal; it isn't a problem, but I want to install them without being obliged to open the terminal when I'm working in VSCode.","Unfortunately, for now the only possible way is the terminal.",0.2655860252697744,False,1,6225 +2019-08-01 19:24:54.457,feeding annotations as ground truth along with the images to the model,"I am working on an object detection model. I have annotated images whose values are stored in a data frame with columns (filename,x,y,w,h, class). I have my images inside the /drive/mydrive/images/ directory. I have saved the data frame into a CSV file in the same directory. So, now I have annotations in a CSV file and images in the images/ directory. +I want to feed this CSV file as the ground truth along with the images, so that when the bounding boxes are recognized by the model, it learns the contents of the bounding boxes. +How do I feed this CSV file with the images to the model, so that I can train my model to detect and later on use the same to predict bounding boxes of similar images? +I have no idea how to proceed. +I do not get an error. I just want to know how to feed the images with bounding boxes so that the network can learn those bounding boxes.","We need to feed the bounding boxes to the loss function. We need to design a custom loss function, preprocess the bounding boxes, and feed them back during backpropagation.",0.0,False,1,6226 +2019-08-02 14:20:48.933,Detecting which words are the same between two pieces of text,"I need some Python advice to implement an algorithm. +What I need is to detect which words from text 1 are in text 2: + +Text 1: ""Mary had a dog. The dog's name was Ethan. He used to run down + the meadow, enjoying the flower's scent."" +Text 2: ""Mary had a cat. The cat's name was Coco. He used to run down + the street, enjoying the blue sky."" + +I'm thinking I could use some pandas datatype to check repetitions, but I'm not sure. +Any ideas on how to implement this would be very helpful. Thank you very much in advance.","Since you do not show any work of your own, I'll just give an overall algorithm. +First, split each text into its words. This can be done in several ways. You could remove any punctuation then split on spaces. You need to decide if an apostrophe as in dog's is part of the word--you probably want to leave apostrophes in. But remove periods, commas, and so forth. +Second, place the words for each text into a set.
Third, use the built-in set operations to find which words are in both sets. +This will answer your actual question. If you want a different question that involves the counts or positions of the words, you should make that clear.",0.0,False,2,6227 +2019-08-02 14:20:48.933,Detecting which words are the same between two pieces of text,"I need some Python advice to implement an algorithm. +What I need is to detect which words from text 1 are in text 2: + +Text 1: ""Mary had a dog. The dog's name was Ethan. He used to run down + the meadow, enjoying the flower's scent."" +Text 2: ""Mary had a cat. The cat's name was Coco. He used to run down + the street, enjoying the blue sky."" + +I'm thinking I could use some pandas datatype to check repetitions, but I'm not sure. +Any ideas on how to implement this would be very helpful. Thank you very much in advance.","You can use a dictionary to first store the words from the first text and then simply look them up while iterating over the second text. But this will take space. +So the best way is to use regular expressions.",0.0,False,2,6227 +2019-08-03 16:17:51.303,I can not get pygments highlighting for Python to work in my Sphinx documentation,I've tried adding highlight_language and pygments_style in the config.py and also tried various ways I found online inside the .rst file. Can anyone offer any advice on how to get the syntax highlighting working?,"Sorry, it turns out that program arguments aren't highlighted (that was the test I was using).",0.0,False,1,6228 +2019-08-04 03:43:05.637,Install & run an extra APK file with Kivy,"I am currently developing mobile applications in Kivy. I would like to create an app to aid in the development process. This app would download an APK file from a network location and install/run it. I know how to download files, of course. How can I programmatically install and run an Android APK file in Kivy/Android/Python3?","Look up how you would do it in Java; then you should be able to do it from Kivy using Pyjnius.",0.0,False,1,6229 +2019-08-04 09:57:01.713,Java code to convert between UTF8 and UTF16 offsets (Java string offsets to/from Python 3 string offsets),"Given a Java string and an offset into that string, what is the correct way of calculating the offset of that same location into a UTF-8 string? +More specifically, given the offset of a valid codepoint in the Java string, how can one map that offset to a new offset of that codepoint in a Python 3 string? And vice versa? +Is there any library method which already provides the mapping between Java string offsets and Python 3 string offsets?","No, there cannot be. UTF-16 uses a varying number of code units per codepoint and so does UTF-8. So, the indices are entirely dependent on the codepoints in the string. You have to scan the string and count. +There are relationships between the encodings, though. A codepoint has two UTF-16 code units if and only if it has four UTF-8 code units. So, an algorithm could tally UTF-8 code units by scanning UTF-16 code units: 4 for a high surrogate, 0 for a low surrogate, 3 for some range, 2 for another and 1 for another.",0.0,False,1,6230 +2019-08-04 13:24:08.390,Why converting dictionaries to lists only returns keys?,"I am wondering why, when I use list(dictionary), it only returns the keys and not their definitions in a list? +For example, I import a glossary with terms and definitions into a dictionary using the CSV reader, then use the built-in list() function to convert the dictionary to a list, and it only returns the keys in the list.
+It's not really an issue as it actually allows my program to work well, was just wondering is that just how it is supposed to behave or? +Many thanks for any help.","In short: In essence it works that way, because it was designed that way. It makes however sense if we take into account that x in some_dict performs a membercheck on the dictionary keys. + +Frequently Python code iterates over a collection, and does not know the type of the collection it iterates over: it can be a list, tuple, set, dictionary, range object, etc. +The question is, do we see a dictionary as a collection, and if yes, a collection of what? If we want to make it collection, there are basically three logical answers to the second question: we can see it as a collection of the keys, of the values, or key-value pairs. Especially keys and key-value pairs are popular. C# for example sees a dictionary as a collection of KeyValuePairs. Python provides the .values() and .items() method to iterate over the values and key-value pairs. +Dictionaries are mainly designed to perform a fast lookup for a key and retrieve the corresponding value. Therefore the some_key in some_dict would be a sensical query, or (some_key, some_value) in some_dict, since the latter chould check if the key is in the dictionary, and then check if it matches with some_value. The latter is however less flexible, since often we might not want to be interested in the corresponding value, we simply want to check if the dictionary contains a certain key. We furthermore can not support both use cases concurrently, since if the dictionary would for example contain 2-tuples as keys, then that means it is ambiguous if (1, 2) in some_dict would mean that for key 1 the value is 2; or if (1, 2) is a key in the dictionary. +Since the designers of Python decided to define the membership check on dictionaries on the keys, it makes more sense to make a dictionary an iterable over its keys. Indeed, one usually expects that if x in some_iterable holds, then x in list(some_iterable) should hold as well. If the iterable of a dictionary would return 2-tuples of key-value pairs (like C# does), then if we would make a list of these 2-tuples, it would not be in harmony with the membership check on the dictionary itself. Since if 2 in some_dict holds, 2 in list(some_dict) would fail.",0.0,False,1,6231 +2019-08-04 13:31:10.467,How do I bulk download images (70k) from urls with a restriction on the simultaneous downloads?,"I'm a bit clueless. I have a csv file with these columns: name - picture url +I would like to bulk download the 70k images into a folder, rename the images with the name in the first column and number them if there is more than one per name. +Some are jpegs some are pngs. +I'm guessing I need to use pandas to get the data from the csv but I don't know how to make the downloading/renaming part without starting all the downloads at the same time, which will for sure crash my computer (It did, I wasn't even mad). +Thanks in advance for any light you can shed on this.",Try downloading in batches like 500 images...then sleep for some 1 seconds and loop it....quite time consuming...but sure fire method....for the coding reference you can explore packges like urllib (for downloading) and as soon as u download the file use os.rename() to change the name....As u already know for that csv file use pandas...,0.2012947653214861,False,1,6232 +2019-08-05 03:07:24.777,Standard Deviation of every pixel in an image in Python,"I have an image stored in a 2D array called data. 
I know how to calculate the standard deviation of the entire array using numpy that outputs one number quantifying how much the data is spread. However, how can I made a standard deviation map (of the same size as my image array) and each element in this array is the standard deviation of the corresponding pixel in the image array (i.e, data).","Use slicing, given images[num, width, height] you may calculate std. deviation of a single image using images[n].std() or for a single pixel: images[:, x, y].std()",1.2,True,1,6233 +2019-08-05 06:28:35.310,Convert each row in a PySpark DataFrame to a file in s3,"I'm using PySpark and I need to convert each row in a DataFrame to a JSON file (in s3), preferably naming the file using the value of a selected column. +Couldn't find how to do that. Any help will be very appreciated.","I think directly we can't store for each row as a JSON based file. Instead of that we can do like iterate for each partition of dataframe and connect to S3 using AWS S3 based library's (to connect to S3 on the partition level). Then, On each partition with the help of iterator, we can convert the row into JSON based file and push to S3.",0.0,False,1,6234 +2019-08-06 04:52:59.987,what do hidden layers mean in a neural network?,"in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. +I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) +So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: + +An image of a handwritten digit will have 784 pixels. +Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) +each node branches out and these branches are the weights. +My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). +Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. + +That number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. +Thank you!","Consider a very basic example of AND, OR, NOT and XOR functions. +You may already know that a single neuron is only suitable when the problem is linearly separable. +Here in this case, AND, OR and NOT functions are linearly separable and so they can be easy handled using a single neuron. +But consider the XOR function. It is not linearly separable. So a single neuron will not be able to predict the value of XOR function. +Now, XOR function is a combination of AND, OR and NOT. Below equation is the relation between them: + +a XOR b = (a AND (NOT b)) OR ((NOT a) AND b) + +So, for XOR, we can use a network which contain three layers. +First layer will act as NOT function, second layer will act as AND of the output of first layer and finally the output layer will act as OR of the 2nd hidden layer. 
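+To make the construction concrete, here is a tiny hand-wired sketch (one possible choice of weights and thresholds, using a step activation instead of a sigmoid for readability; nothing here is learned):
+import numpy as np
+def step(z):
+    return (np.asarray(z) > 0).astype(int)
+def xor(a, b):
+    # hidden layer: first unit fires for a AND (NOT b), second for (NOT a) AND b
+    h = step(np.array([a - b - 0.5, b - a - 0.5]))
+    # output layer: OR of the two hidden units
+    return int(step(h.sum() - 0.5))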
+Note: This is just a example to explain why it is needed, XOR can be implemented in various other combination of neurons.",0.0,False,3,6235 +2019-08-06 04:52:59.987,what do hidden layers mean in a neural network?,"in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. +I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) +So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: + +An image of a handwritten digit will have 784 pixels. +Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) +each node branches out and these branches are the weights. +My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). +Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. + +That number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. +Thank you!","A hidden layer in a neural network may be understood as a layer that is neither an input nor an output, but instead is an intermediate step in the network's computation. +In your MNIST case, the network's state in the hidden layer is a processed version of the inputs, a reduction from full digits to abstract information about those digits. +This idea extends to all other hidden layer cases you'll encounter in machine learning -- a second hidden layer is an even more abstract version of the input data, a recurrent neural network's hidden layer is an interpretation of the inputs that happens to collect information over time, or the hidden state in a convolutional neural network is an interpreted version of the input with certain features isolated through the process of convolution. +To reiterate, a hidden layer is an intermediate step in your neural network's process. The information in that layer is an abstraction of the input, and holds information required to solve the problem at the output.",0.0,False,3,6235 +2019-08-06 04:52:59.987,what do hidden layers mean in a neural network?,"in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. +I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) +So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: + +An image of a handwritten digit will have 784 pixels. +Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) +each node branches out and these branches are the weights. +My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). +Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. + +That number that I get from the sigmoid. What does it represent exactly and why is it relevant? 
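+(To pin down my terms: by sigmoid I mean sigmoid(z) = 1 / (1 + exp(-z)), which maps any weighted sum z to a value strictly between 0 and 1.)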
My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. +Thank you!","AFAIK, for this digit recognition case, one way to think about it is each level of the hidden layers represents the level of abstraction. +For now, imagine the neural network for digit recognition has only 3 layers which is 1 input layer, 1 hidden layer and 1 output layer. +Let's take a look at a number. To recognise that it is a number we can break the picture of the number to a few more abstract concepts such as lines, circles and arcs. If we want to recognise 6, we can first recognise the more abstract concept that is exists in the picture. for 6 it would be an arc and a circle for this example. For 8 it would be 2 circles. For 1 it would be a line. +It is the same for a neural network. We can think of layer 1 for pixels, layer 2 for recognising the abstract concept we talked earlier such as lines, circles and arcs and finally in layer 3 we determine which number it is. +Here we can see that the input goes through a series of layers from the most abstract layer to the less abstract layer (pixels -> line, circle, arcs -> number). In this example we only have 1 hidden layer but in real implementation it would be better to have more hidden layer that 1 depending on your interpretation of the neural network. Sometime we don't even have to think about what each layer represents and let the training do it fo us. That is the purpose of the training anyway.",0.0,False,3,6235 +2019-08-06 09:58:13.633,Showing text coordinate from png on raw .pdf file,"I am doing OCR on Raw PDF file where in i am converting into png images and doing OCR on that. My objective is to extract coordinates for a certain keyword from png and showcase those coordinates on actual raw pdf. +I have already tried showing those coordinates on png images using opencv but i am not able to showcase those coordinates on actual raw pdf since the coordinate system of both format are different. Can anyone please helpme on how to showcase bounding box on actual raw pdf based on the coordinates generated from png images.","All you need to do is map the coordinates of the OCR token (which would be given for the image) to that of the pdf page. +For instance, +image_dimensions = [1800, 2400] # width, height +pdf_page_dimension = [595, 841] # these are coordinates of the specific page of the pdf +Assuming, on OCRing the image, a word has coordinates = [400, 700, 450, 720] , the same can be rendered on the pdf by multiplying them with scale on each axis +x_scale = pdf_page_dimension[0] / image_dimensions[0] +y_scale = pdf_page_dimension[1] / image_dimensions[1] +scaled_coordinates = [400*x_scale, 700*y_scale, 450*x_scale, 720*y_scale] +Pdf page dimensions can be obtained from any of the packages: poppler, pdfparser, pdfminer, pdfplumber",-0.3869120172231254,False,1,6236 +2019-08-06 13:48:46.087,How to evaluate HDBSCAN text clusters?,"I'm currently trying to use HDBSCAN to cluster movie data. The goal is to cluster similar movies together (based on movie info like keywords, genres, actor names, etc) and then apply LDA to each cluster and get the representative topics. However, I'm having a hard time evaluating the results (apart from visual analysis, which is not great as the data grows). With LDA, although it's hard to evaluate it, i've been using the coherence measure. 
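+(For context, the coherence number I mention comes from something along these lines, with the names purely illustrative: gensim's CoherenceModel(model=lda_model, texts=tokenized_docs, dictionary=dictionary, coherence='c_v').)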
+However, does anyone have any idea on how to evaluate the clusters made by HDBSCAN? I haven't been able to find much info on it, so if anyone has any ideas, I'd very much appreciate it!","It's the same problem everywhere in unsupervised learning.
+It is unsupervised; you are trying to discover something new and interesting. There is no way for the computer to decide whether something is actually interesting or new. It can decide only trivial cases, where the prior knowledge is already coded in machine-processable form and you can compute some heuristic values as a proxy for interestingness. But such measures (including density-based measures such as DBCV) are actually in no way better at judging this than the clustering algorithm itself is at choosing the ""best"" solution.
+But in the end, there is no way around manually looking at the data and doing the next steps: try to put what you learned from the data to use. Presumably you are not an ivory-tower academic doing this just to make up yet another useless method... So use it, don't fake using it.",1.2,True,1,6237
+2019-08-06 14:54:30.430,Best Practice for Batch Processing with RabbitMQ,"I'm looking for the best way to perform ETL using Python.
+I have a channel in RabbitMQ which sends events (sometimes even one every second).
+I want to process every 1000 of them.
+The main problem is that the RabbitMQ interface (I'm using pika) raises a callback for every message.
+I looked at the Celery framework, however its batch feature was deprecated in version 3.
+What is the best way to do it? I'm thinking about saving my events in a list, and when it reaches 1000, copying it to another list and performing my processing. However, how do I make it thread-safe? I don't want to lose events, and I'm afraid of losing events while synchronising the list.
+It sounds like a very simple use-case, however I didn't find any good best practice for it.","First of all, you should not ""batch"" messages from RabbitMQ unless you really have to. The most efficient way to work with messaging is to process each message independently.
+If you need to combine messages in a batch, I would use a separate data store to temporarily store the messages, and then process them when they reach a certain condition. Each time you add an item to the batch, you check that condition (for example, you reached 1000 messages) and trigger the processing of the batch.
+This is better than keeping a list in memory, because if your service dies, the messages will still be persisted in the database.
+Note: If you have a single processor per queue, this can work without any synchronization mechanism. If you have multiple processors, you will need to implement some sort of locking mechanism.",0.2012947653214861,False,1,6238
+2019-08-07 07:44:51.857,"Pycharm is not letting me run my script 'test_splitter.py', but instead 'Nosetests in test_splitter.py'?","I see many posts on 'how to run nosetests', but none on how to make pycharm let you run a script without nosetests. And yet, I seem to only be able to run or debug 'Nosetests test_splitter.py' and not just 'test_splitter.py'!
+I'm relatively new to pycharm, and despite going through the documentation, I don't quite understand what nosetests are about and whether they would be preferable for testing my script. But I get an error
+ModuleNotFoundError: No module named 'nose'
+Process finished with exit code 1
+Empty suite
+I don't have administrative access, so I cannot download nosetests if anyone were to suggest it.
+I would just like to run my script! Other scripts are letting me run them just fine without nosetests!","I found the solution: I can run without nosetests from the 'Run' dropdown options in the toolbar, or with Alt+Shift+F10.",0.0,False,1,6239
+2019-08-07 11:43:33.013,How do I display a large black rectangle with a moveable transparent circle in pygame?,"That question wasn't very clear.
+Essentially, I am trying to make a multi-player Pac-Man game whereby the players (when playing as ghosts) can only see a certain radius around them. My best guess for going about this is to have a rectangle which covers the whole maze and then somehow cut out a circle which will be centred on the ghost's rect. However, I am not sure how to do this last part in pygame.
+I'd just like to add that, if it's even possible in pygame, it would be ideal for the circle to be pixelated and not a smooth circle, but this is not essential.
+Any suggestions? Cheers.","The best I can think of is kind of a hack. Build an image outside pygame that is mostly black with a circle of zero alpha in the center, then blit that image on top of your ghost character so that only a circle around it is visible. I hope there is a better way, but I do not know what that is.",0.2012947653214861,False,1,6240
+2019-08-07 16:34:16.810,How do I convert scanned PDF into searchable PDF in Python (Mac)? e.g. OCRMYPDF module,"I am writing a program in python that can read a pdf document, extract text from the document, and rename the document using the extracted text. At first, the scanned pdf document is not searchable. I would like to convert the pdf into a searchable pdf in Python instead of using Google Docs or the Cisdem pdf converter.
+I have read about the ocrmypdf module, which can be used to solve this. However, I do not know how to write the code due to my limited knowledge.
+I expect the output to convert the scanned pdf into a searchable pdf.","This is best done in two steps.
+
+Create a Python OCR function:
+import ocrmypdf
+def ocr(file_path, save_path):
+    ocrmypdf.ocr(file_path, save_path)
+
+Call and use the function:
+ocr(""input.pdf"", ""output.pdf"")
+
+Thank you; if you have any questions, please ask.",0.0,False,2,6241
+2019-08-07 16:34:16.810,How do I convert scanned PDF into searchable PDF in Python (Mac)? e.g. OCRMYPDF module,"I am writing a program in python that can read a pdf document, extract text from the document, and rename the document using the extracted text. At first, the scanned pdf document is not searchable. I would like to convert the pdf into a searchable pdf in Python instead of using Google Docs or the Cisdem pdf converter.
+I have read about the ocrmypdf module, which can be used to solve this. However, I do not know how to write the code due to my limited knowledge.
+I expect the output to convert the scanned pdf into a searchable pdf.","I suggest working through the tutorial; it will maybe take you some time, but it should be worth it.
+I'm not exactly sure what you want. In my project the settings below work fine in most cases.
+import ocrmypdf  # the tesseract OCR engine must be installed on the system; it is not imported in python
+def ocr(file_path, save_path):
+    ocrmypdf.ocr(file_path, save_path, rotate_pages=True,
+                 remove_background=True, language=""en"", deskew=True, force_ocr=True)",0.5457054096481145,False,2,6241
+2019-08-08 08:08:44.877,How to direct the same Amazon S3 events into several different SQS queues?,"I'm working with AWS Lambda functions (in Python) that process new files that appear in the same Amazon S3 bucket and folders.
+When new file appears in s3:/folder1/folderA, B, C, an event s3:ObjectCreated:* is generated and it goes into sqs1, then processed by Lambda1 (and then deleted from sqs1 after successful processing). +I need the same event related to the same new file that appears in s3:/folder1/folderA (but not folderB, or C) to go also into sqs2, to be processed by Lambda2. Lambda1 modifies that file and saves it somewhere, Lambda2 gets that file into DB, for example. +But AWS docs says that: + +Notification configurations that use Filter cannot define filtering rules with overlapping prefixes, overlapping suffixes, or prefix and suffix overlapping. + +So question is how to bypass this limitation? Are there any known recommended or standard solutions?","Instead of set up the S3 object notification of (S3 -> SQS), you should set up a notification of (S3 -> Lambda). +In your lambda function, you parse the S3 event and then you write your own logic to send whatever content about the S3 event to whatever SQS queues you like.",0.2012947653214861,False,1,6242 +2019-08-08 08:08:44.877,Triggering actions in Python Flask via cron or similar,"I'm needing to trigger an action at a particular date/time either in python or by another service. +Let's say I have built an application that stores the expiry dates of memberships in a database. I'm needing to trigger a number of actions when the member expires (for example, changing the status of the membership and sending an expiry email to the member), which is fine - I can deal with the actions. +However, what I am having trouble with is how do I get these actions to trigger when the expiry date is reached? Are there any concepts or best practices that I should stick to when doing this? +Currently, I've achieved this by executing a Google Cloud Function every day (via Google Cloud Scheduler) which checks if the membership expiry is equal to today, and completes the action if it is. I feel like this solution is quite 'hacky'.","I'm not sure which database you are using but I'm inferring you have a table that have the ""membership"" details of all your users. And each day you run a Cron job that queries this table to see which row has ""expiration_date = today"", is that correct?. +I believe that's an efficient way to do it (it will be faster if you have few columns on that table).",1.2,True,1,6243 +2019-08-09 10:59:47.240,How to deploy and run python scripts inside nodejs application?,"I'm working with a MEAN stack application, that passes a file to python script, and this script doing some tasks and then it returns some results. +The question is and how to install the required python packages when I deploy it? +Thanks! +I've tried to run python code inside nodejs application, using python shell.","Place python script along with requirements.txt(which has your python dependencies) in your nodejs project +directory. +During deployment , call pip install on the requirements.txt and it +should install the packages for you. +You can call python script from nodejs just like any shell command +using inbuild child_process module or python-shell.",1.2,True,1,6244 +2019-08-09 14:23:00.953,Trying to extract Certificate information in Python,"I am very new to python and cannot seem to figure out how to accomplish this task. I want to connect to a website and extract the certificate information such as issuer and expiration dates. +I have looked all over, tried all kinds of steps but because I am new I am getting lost in the socket, wrapper etc. 
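+The kind of thing I have been trying looks roughly like this (just a sketch of where I got to, with 'example.com' as a stand-in for the real site):
+import socket, ssl
+ctx = ssl.create_default_context()
+with ctx.wrap_socket(socket.socket(), server_hostname='example.com') as s:
+    s.connect(('example.com', 443))
+    cert = s.getpeercert()  # a dict holding 'issuer', 'notAfter' and so on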
+To make matters worse, I am in a proxy environment and it seems to really complicate things. +Does anyone know how I could connect and extract the information while behind the proxy?",Python SSL lib don't deal with proxies.,0.0,False,1,6245 +2019-08-09 22:16:56.867,Python - Using RegEx to extract only the String in between pattern,"Hoping somebody can point me in the right direction. +I am trying to parse log file to figure out how many users are logging into the system on a per-day basis. +The log file gets generated in the pattern listed below. +""<""Commit ts=""20141001114139"" client=""ABCREX/John Doe""> +""8764"",""ABCREX/John Doe"",""00.000.0.000"",""User 'ABCREX/John Doe' successfully logged in from address '00.000.0.000'."" +""<""/Commit> +""<""Commit ts=""20141001114139"" client=""ABCREX/John Doe""> +""8764"",""ABCREX/Jerry Doe"",""00.000.0.000"",""User 'ABCREX/Jerry Doe' successfully logged in from address '00.000.0.000'."" +""<""/Commit> +""<""Commit ts=""20141001114139"" client=""ABCREX/John Doe""> +""8764"",""ABCREX/Jane Doe"",""00.000.0.000"",""User 'ABCREX/Jane Doe' successfully logged in from address '00.000.0.000'."" +""<""/Commit> +I am trying to capture the username from the above lines and load into DB. +so I am interested only in values +John Doe, Jerry Doe, Jane Doe +but the when I do pattern match using REGEX it returns the below +client=""ABCREX/John Doe""> +then using the code I am employing I have to apply multiple replace to remove + ""Client"", ""ABCREX/"", "">""...etc +I currently have code which is working but I feel its highly inefficient and resource consuming. I am performing split on tags then parsing reading line by line. +'''extract the user login Name''' +UserLoginName = str(re.search('client=(.*)>',items).group()).replace('ABCREX/', '').replace('client=""','').replace('"">', '') +print(UserLoginName) +Is there any way I can tell the REGEX to grab only the string found within the pattern and not include the pattern in the results as well?","pattern = r'User\s\'ABCREX/(.*?)\'' +list_of_usernames = re.findall(pattern, output) +That would match the pattern +""User 'ABCREX/Jerry Doe'"" and pull out the username and add it to a list. Is that helpful? I'm new here too so let me know if there is more I can help answer.",0.0,False,1,6246 +2019-08-10 05:15:41.640,SelectField to create dropdown menu,"I have a database with some tables in it. I want now on my website has the dropdown and the choices are the names of people from a column of the table from my database and every time I click on a name it will show me a corresponding ID also from a column from this table. how I can do that? or maybe a guide where should I find an answer ! +many thanks!!!","You have to do that in python(if that's what you are using in the backend). +You can create functions in python that gets the list of name of tables which then you can pass to your front-end code. Similarly, you can setup functions where you get the specific table name from HTML and pass it to python and do all sort of database queries. +If all these sounds confusing to you. I suggest you take a Full stack course on udemy, youtube, etc because it can't really be explained in one simple answer. +I hope it was helpful. Feel free to ask me more",0.0,False,1,6247 +2019-08-10 21:15:17.527,Using the original python packages instead of the jython packages,"I am trying to create a hybrid application with python back-end and java GUI and for that purpose I am using jython to access the data from the GUI. 
+I wrote code using a standard Python 3.7.4 virtual environment and it worked ""perfectly"". But when I try to run the same code on jython it doesn't work so it seems that in jython some packages like threading are overwritten with java functionality. +My question is how can I use the threading package for example from python but in jython environment? +Here is the error: + +Exception in thread Thread-1:Traceback (most recent call last): + File ""/home/dexxrey/jython2.7.0/Lib/threading.py"", line 222, in _Thread__bootstrap + self.run() + self._target(*self._args, **self._kwargs)","Since you have already decoupled the application i.e using python for backend and java for GUI, why not stick to that and build in a communication layer between the backend and frontend, this layer could either be REST or any Messaging framework.",0.2012947653214861,False,1,6248 +2019-08-11 22:56:25.070,How to get the rand() function in excel to rerun when accessing an excel file through python,"I am trying to access an excel file using python for my physics class. I have to generates data that follows a function but creates variance so it doesn’t line up perfectly to the function(simulating the error experienced in experiments). I did this by using the rand() function. We need to generate a lot of data sets so that we can average them together and eliminate the error/noise creates by the rand() function. I tried to do this by loading the excel file and recording the data I need, but then I can’t figure out how to get the rand() function to rerun and create a new data set. In excel it reruns when i change the value of any cell on the excel sheet, but I don’t know how to do this when I’m accessing the file with Python. Can someone help me figure out how to do this? Thank You.","Excel formulas like RAND(), or any other formula, will only refresh when Excel is actually running and recalculating the worksheet. +So, even though you may be access the data in an Excel workbook with Python, you won't be able to run Excel calculations that way. You will need to find a different approach.",1.2,True,1,6249 +2019-08-12 13:56:37.400,Jupyter notebook: need to run a cell even though close the tab,"My notebook is located on a server, which means that the kernel will still run even though I close the notebook tab. I was thus wondering if it was possible to let the cell running by itself while closing the window? As the notebook is located on a server the kernel will not stop running... +I tried to read previous questions but could not find an answer. Any idea on how to proceed? +Thanks!","If you run the cell before closing the tab it will continue to run once the tab has been closed. However, the output will be lost (anything using print functions to stdout or plots which display inline) unless it is written to file.",0.0,False,2,6250 +2019-08-12 13:56:37.400,Jupyter notebook: need to run a cell even though close the tab,"My notebook is located on a server, which means that the kernel will still run even though I close the notebook tab. I was thus wondering if it was possible to let the cell running by itself while closing the window? As the notebook is located on a server the kernel will not stop running... +I tried to read previous questions but could not find an answer. Any idea on how to proceed? +Thanks!",You can make open a new file and write outputs to it. I think that's the best that you can do.,0.2012947653214861,False,2,6250 +2019-08-12 16:37:23.493,Rasa NLU model to old,"I have a problem. 
I am trying to use my model with Rasa core, but it gives me this error: + +rasa_nlu.model.UnsupportedModelError: The model version is to old to + be loaded by this Rasa NLU instance. Either retrain the model, or run + withan older version. Model version: 0.14.6 Instance version: 0.15.1 + +Does someone know which version I need to use then and how I can install that version?","I believe you trained this model on the previous version of Rasa NLU and updated Rasa NLU to a new version (Rasa NLU is a dependency for Rasa Core, so changes were made in requirenments.txt file). +If this is a case, there are 2 ways to fix it: + +Recommended solution. If you have data and parameters, train your NLU model again using current dependencies (this one that you have running now). So you have a new model which is compatible with your current version of Rasa +If you don't have a data or can not retrain a model for some reason, then downgrade Rasa NLU to version 0.14.6. I'm not sure if your current Rasa core is compatible with NLU 0.14.6, so you might also need to downgrade Rasa core if you see errors. + +Good luck!",1.2,True,1,6251 +2019-08-12 18:34:13.630,how can i split a full name to first name and last name in python?,"I'm a novice in python programming and i'm trying to split full name to first name and last name, can someone assist me on this ? so my example file is: +Sarah Simpson +I expect the output like this : Sarah,Simpson","name = ""Thomas Winter"" +LastName = name.split()[1] +(note the parantheses on the function call split.) +split() creates a list where each element is from your original string, delimited by whitespace. You can now grab the second element using name.split()[1] or the last element using name.split()[-1]",0.0,False,1,6252 +2019-08-12 19:59:30.143,pythonanwhere newbie: I don't see sqlite option,"I see an option for MySql and Postgres, and have read help messages for sqlite, but I don't see anyway to use it or to install it. So it appears that it's available or else there wouldn't be any help messages, but I can't find it. I can't do any 'sudo', so no 'apt install', so don't know how to invoke and use it!",sqlite is already installed. You don't need to invoke anything to install it. Just configure your web app to use it.,0.3869120172231254,False,1,6253 +2019-08-12 21:44:29.637,Python/Django and services as classes,"Are there any conventions on how to implement services in Django? Coming from a Java background, we create services for business logic and we ""inject"" them wherever we need them. +Not sure if I'm using python/django the wrong way, but I need to connect to a 3rd party API, so I'm using an api_service.py file to do that. The question is, I want to define this service as a class, and in Java, I can inject this class wherever I need it and it acts more or less like a singleton. Is there something like this I can use with Django or should I build the service as a singleton and get the instance somewhere or even have just separate functions and no classes?","Adding to the answer given by bruno desthuilliers and TreantBG. +There are certain questions that you can ask about the requirements. +For example one question could be, does the api being called change with different type of objects ? +If the api doesn't change, you will probably be okay with keeping it as a method in some file or class. 
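+(In that simple case a plain module-level function is enough. As an illustrative sketch with made-up names, a services.py containing def fetch_quotes(): return requests.get(QUOTES_URL).json() that every caller just imports would do the job.)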
+If it does change, such that you are calling API 1 for some scenario, API 2 for some and so on and so forth, you will likely be better off with moving/abstracting this logic out to some class (from a better code organisation point of view). +PS: Python allows you to be as flexible as you want when it comes to code organisation. It's really upto you to decide on how you want to organise the code.",0.0,False,1,6254 +2019-08-14 01:33:36.937,Does it make sense to use a part of the dataset to train my model?,"The dataset I have is a set of quotations that were presented to various customers in order to sell a commodity. Prices of commodities are sensitive and standardized on a daily basis and therefore negotiations are pretty tricky around their prices. I'm trying to build a classification model that had to understand if a given quotation will be accepted by a customer or rejected by a customer. +I made use of most classifiers I knew about and XGBClassifier was performing the best with ~95% accuracy. Basically, when I fed an unseen dataset it was able to perform well. I wanted to test how sensitive is the model to variation in prices, in order to do that, I synthetically recreated quotations with various prices, for example, if a quote was being presented for $30, I presented the same quote at $5, $10, $15, $20, $25, $35, $40, $45,.. +I expected the classifier to give high probabilities of success as the prices were lower and low probabilities of success as the prices were higher, but this did not happen. Upon further investigation, I found out that some of the features were overshadowing the importance of price in the model and thus had to be dealt with. Even though I dealt with most features by either removing them or feature engineering them to better represent them I was still stuck with a few features that I just cannot remove (client-side requirements) +When I checked the results, it turned out the model was sensitive to 30% of the test data and was showing promising results, but for the rest of the 70% it wasn't sensitive at all. +This is when the idea struck my mind to feed only that segment of the training data where price sensitivity can be clearly captured or where the success of the quote is inversely related to the price being quoted. This created a loss of about 85% of the data, however the relationship that I wanted the model to learn was being captured perfectly well. +This is going to be an incremental learning process for the model, so each time a new dataset comes, I'm thinking of first evaluating it for the price sensitivity and then feeding in only that segment of the data for training which is price sensitive. +Having given some context to the problem, some of the questions I had were: + +Does it make sense to filter out the dataset for segments where the kind of relationship I'm looking for is being exhibited? +Post training the model on the smaller segment of the data and reducing the number of features from 21 to 8, the model accuracy went down to ~87%, however it seems to have captured the price sensitivity bit perfectly. The way I evaluated price sensitivity is by taking the test dataset and artificially adding 10 rows for each quotation with varying prices to see how the success probability changes in the model. Is this a viable approach to such a problem?","To answer your first question, deleting the part of the dataset that doesn't work is not a good idea because then your model will overfit on the data that gives better numbers. 
This means that the accuracy will be higher, but when presented with something that is slightly different from the dataset, the probability of the network adapting is lower. +To answer the second question, it seems like that's a good approach, but again I'd recommend keeping the full dataset.",0.3869120172231254,False,1,6255 +2019-08-14 15:13:27.587,should jedi be install in every python project environment?,"I am using jedi and more specifically deoplete-jedi in neovim and I wonder if I should install it in every project as a dependency or if I can let jedi reside in the same python environment as neovim uses (and set the setting to tell deoplete-jedi where to look) +It seems wasteful to have to install it in ever project but then again IDK how it would find my project environment from within the neovim environment either.","If by the word ""project""you mean Python virtual environments then yes, you have to install every program and every library that you use to every virtualenv separately. flake8, pytest, jedi, whatever. Python virtual environments are intended to protect one set of libraries from the other so that you could install different sets of libraries and even different versions of libraries. The price is that you have to duplicate programs/libraries that are used often. +There is a way to connect a virtualenv to the globally installed packages but IMO that brings more harm than good.",0.6730655149877884,False,1,6256 +2019-08-15 00:11:44.450,ModuleNotFoundError: no module named efficientnet.tfkeras,"I attempted to do import segmentation_models as sm, but I got an error saying efficientnet was not found. So I then did pip install efficientnet and tried it again. I now get ModuleNotFoundError: no module named efficientnet.tfkeras, even though Keras is installed as I'm able to do from keras.models import * or anything else with Keras +how can I get rid of this error?",To install segmentation-models use the following command: pip install git+https://github.com/qubvel/segmentation_models,1.2,True,1,6257 +2019-08-15 03:25:14.913,Editing a python package,"The question is really simple: +I have a python package installed using pip3 and I'd like to tweak it a little to perform some computations. I've read (and it seems logical) that is very discouraged to not to edit the installed modules. Thus, how can I do this once I downloaded the whole project folder to my computer? Is there any way to, once edited this source code install it with another name? How can I avoid mixing things up? +Thanks!","You can install the package from its source code, instead of PyPi. + +Download the source code - do a git clone of the package +Instead of pip install , install with pip install -e +Change code in the source code, and it will be picked up automatically.",0.0,False,1,6258 +2019-08-15 04:06:41.070,Setting up keras and tensoflow to operate with AMD GPU,"I am trying to set up Keras in order to run models using my GPU. I have a Radeon RX580 and am running Windows 10. +I saw realized that CUDA only supports NVIDIA GPUs and was having difficulty finding a way to get my code to run on the GPU. I tried downloading and setting up plaidml but afterwards from tensorflow.python.client import device_lib +print(device_lib.list_local_devices()) +only printed that I was running on a CPU and there was not a GPU available even though the plaidml setup was a success. I have read that PyOpenCl is needed but have not gotten a clear answer as to why or to what capacity. 
Does anyone know how to set up this AMD GPU to work properly? any help would be much appreciated. Thank you!","To the best of my knowledge, PlaidML was not working because I did not have the required prerequisites such as OpenCL. Once I downloaded the Visual Studio C++ build tools in order to install PyopenCL from a .whl file. This seemed to resolve the issue",1.2,True,1,6259 +2019-08-15 19:33:15.593,How to deploy flask GUI web application only locally with exe file?,"I'd like to build a GUI for a few Python functions I've written that pull data from MS SQL Server. My boss wants me to share the magic of Python & SQL with the rest of the team, without them having to learn any coding. +I've decided to go down the route of using Flask to create a webapp and creating an executable file using pyinstaller. I'd like it to work similarly to Jupyter Notebook, where you click on the file and it opens the notebook in your browser. +I was able to hack together some code to get a working prototype of the GUI. The issue is I don't know how to deploy it. I need the GUI/Webapp to only run on the local computer for the user I sent the file to, and I don't want it accessible via the internet (because of proprietary company data, security issues, etc). +The only documentation I've been able to find for deploying Flask is going the routine route of a web server. +So the question is, can anyone provide any guidance on how to deploy my GUI WebApp so that it's only available to the user who has the file, and not on the world wide web? +Thank you!","Unfortunately, you do not have control over a give users computer. +You are using flask, so your application is a web application which will be exposing your data to some port. I believe the default flask port is 5000. +Regardless, if your user opens the given port in their firewall, and this is also open on whatever router you are connected to, then your application will be publicly visible. +There is nothing that you can do from your python application code to prevent this. +Having said all of that, if you are running on 5000, it is highly unlikely your user will have this port publicly exposed. If you are running on port 80 or 8080, then the chances are higher that you might be exposing something. +A follow up question would be where is the database your web app is connecting to? Is it also on your users machine? If not, and your web app can connect to it regardless of whose machine you run it on, I would be more concerned about your DB being publicly exposed.",0.0,False,2,6260 +2019-08-15 19:33:15.593,How to deploy flask GUI web application only locally with exe file?,"I'd like to build a GUI for a few Python functions I've written that pull data from MS SQL Server. My boss wants me to share the magic of Python & SQL with the rest of the team, without them having to learn any coding. +I've decided to go down the route of using Flask to create a webapp and creating an executable file using pyinstaller. I'd like it to work similarly to Jupyter Notebook, where you click on the file and it opens the notebook in your browser. +I was able to hack together some code to get a working prototype of the GUI. The issue is I don't know how to deploy it. I need the GUI/Webapp to only run on the local computer for the user I sent the file to, and I don't want it accessible via the internet (because of proprietary company data, security issues, etc). +The only documentation I've been able to find for deploying Flask is going the routine route of a web server. 
+So the question is, can anyone provide any guidance on how to deploy my GUI WebApp so that it's only available to the user who has the file, and not on the world wide web? +Thank you!","So, a few assumptions-- since you're a business and you're rocking a SQLServer-- you likely have Active Directory, and the computers that you care to access this app are all hooked into that domain (so, in reality, you, or your system admin does have full control over those computers). +Also, the primary function of the app is to access a SQLServer to populate itself with data before doing something with that data. If you're deploying that app, I'm guessing you're probably also including the SQLServer login details along with it. +With that in mind, I would just serve the Flask app on the network on it's own machine (maybe even the SQLServer machine if you have the choice), and then either implement security within the app that feeds off AD to authenticate, or just have a simple user/pass authentication you can distribute to users. By default random computers online aren't going to be able to access that app unless you've set your firewalls to deliberately route WAN traffic to it. +That way, you control the Flask server-- updates only have to occur at one point, making development easier, and users simply have to open up a link in an email you send, or a shortcut you leave on their desktop.",0.2012947653214861,False,2,6260 +2019-08-18 17:11:06.800,how to host python script in a Web Server and access it by calling an API from xamarin application?,"I need to work with opencv in my xamarin application . +I found that if I use openCV directly in xamarin , the size of the app will be huge . +the best solution I found for this is to use the openCV in python script then to host the python script in a Web Server and access it by calling an API from xamarin . +I have no idea how to do this . +any help please ? +and is there is a better solutions ?",You can create your web server using Flask or Django. Flask is a simple micro framework whereas Django is a more advanced MVC like framework.,0.3869120172231254,False,1,6261 +2019-08-20 02:11:48.553,Python: Reference forked project in requirements.txt,"I have a Python project which uses an open source package registered as a dependency in requirements.txt +The package has some deficiencies, so I forked it on Github and made some changes. Now I'd like to test out these changes by running my original project, but I'd like to use the now forked (updated) code for the package I'm depending on. +The project gets compiled into a Docker image; pip install is used to add the package into the project during the docker-compose build command. +What are the standard methods of creating a docker image and running the project using the newly forked dependency, as opposed to the original one? Can requirements.txt be modified somehow or do I need to manually include it into the project? If the latter, how?",you can use git+https://github.com/...../your_forked_repo in your requirements.txt instead of typing Package==1.1.1,1.2,True,1,6262 +2019-08-20 17:45:25.027,Parsing in Python where delimiter also appears in the data,"Wow, I'm thankful for all of the responses on this! To clarify the data pattern does repeat. 
+Here is a sample:
+Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm
+ other unrelated text some other unrelated text lots more text that is unrelated Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm other unrelated text some other unrelated text lots more text that is unrelated Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm
+ and so on and so on
+I am using Python 3.7 to parse input from a text file that is formatted like this sample:
+Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm and the pattern repeats, with other similar fields, through a few hundred pages.
+Because there is a "":"" in some of the values (i.e. hh:mm), I'm not sure how to use that as a delimiter between the key and the value. I need to obtain all of the values associated with ""Item"", ""Name"", and ""Time left"" and output all of the matching values to a CSV file (I have the output part working).
+Any suggestions? Thank you!
+(apologies, I asked this on Stack Exchange and it was deleted, I'm new at this)",Use ': ' (colon plus a space) as the delimiter; the colon inside hh:mm is not followed by a space and so will not split.,0.1016881243684853,False,1,6263
+2019-08-20 20:09:50.087,Getting error 'file is not a database' after already accessing the database,"I am currently helping with some NLP code, and in the code we have to access a database to get the papers. I have run the code successfully before, but every time I try to run the code again I get the error sqlite3.DatabaseError: file is not a database. I am not sure what is happening here because the database is still in the same exact position and the path doesn't change.
+I've tried looking up this problem but haven't found similar issues.
+I am hoping that someone can explain what is happening here, because I don't even know how to start with this issue given that it runs once but not again.","I got the same issue. I have a program that prints some information from my database, and after running it again and again I got an error that my database was unable to load. In my case I think the problem occurs because I had connected to my database repeatedly. What I suggest is to reboot your computer, or to research how to connect to the database several times safely.",1.2,True,1,6264
+2019-08-21 11:32:33.130,How do I stop a Python script from running in my command line?,"I recently followed a tutorial on web scraping, and as part of that tutorial, I had to execute (?) the script I had written in my command line. Now that script runs every hour and I don't know how to stop it.
+I want to stop the script from running. I have tried deleting the code, but the script still runs. What should I do?","I can't comment, but you must show us the script, or part of the script, or the video you were watching, so we can try to find the problem. Asking a question without an example doesn't help us figure out the problem.
+If you're using Flask, go to the terminal or CMD where you're running the script and type CTRL+C; it should stop the script. Or set debug to false, e.g. app.run(debug=False), because debug mode can sometimes make it keep running in the background and look for updates even though the script was stopped. In conclusion: try CTRL+C, or failing that, set debug to False.",0.2012947653214861,False,2,6265
+2019-08-21 11:32:33.130,How do I stop a Python script from running in my command line?,"I recently followed a tutorial on web scraping, and as part of that tutorial, I had to execute (?)
the script I had written in my command line.Now that script runs every hour and I don't know how to stop it. +I want to stop the script from running. I have tried deleting the code, but the script still runs. What should I do?",You can kill it from task manager.,1.2,True,2,6265 +2019-08-23 04:11:30.900,How to insert variables into a text cell using google colab,"I would like to insert a python variable into a text cell, in google colab. +For example, if a=10, I would like to insert the a into a text cell and render the value. +So in the text cell (using Jupyter Notebook with nbextensions) I would like to write the following in the text cell: +There will be {{ a }} pieces of fruit at the reception. +It should show up as: +There will be 10 pieces of fruit at the reception. +The markdown cheatsheets and explanations do not say how to achieve this. Is this possible currently?",It's not possible to change 'input cell' (either code or markdown) programmatically. You can change only the output cells. Input cells always require manually change. (even %load doesn't work),0.9950547536867304,False,1,6266 +2019-08-24 07:50:39.733,How to get a callback when the specified epoch number is over?,"I want to fine turn my model when using Keras, and I want to change my training data and learning rate to train when the epochs arrive 10, So how to get a callback when the specified epoch number is over.","Actually, the way keras works this is probably not the best way to go, it would be much better to treat this as fine tuning, meaning that you finish the 10 epochs, save the model and then load the model (from another script) and continue training with the lr and data you fancy. +There are several reasons for this. + +It is much clearer and easier to debug. You check you model properly after the 10 epochs, verify that it works properly and carry on +It is much better to do several experiments this way, starting from epoch 10. + +Good luck!",0.0,False,1,6267 +2019-08-25 13:10:47.760,Is Python's pipenv slow?,"I tried switching from venv & conda to pipenv to manage my virtual environments, but one thing I noticed about pipenv that it's oddly slow when it's doing ""Locking"" and it gets to the point where it stops executing for ""Running out of time"". Is it usually this slow or is it just me? Also, could you give me some advice regarding how to make it faster?","try using --skip-lock like this : +pipenv install --skip-lock +Note : do not skip-lock when going in production",0.1618299653758019,False,2,6268 +2019-08-25 13:10:47.760,Is Python's pipenv slow?,"I tried switching from venv & conda to pipenv to manage my virtual environments, but one thing I noticed about pipenv that it's oddly slow when it's doing ""Locking"" and it gets to the point where it stops executing for ""Running out of time"". Is it usually this slow or is it just me? Also, could you give me some advice regarding how to make it faster?","Pipenv is literally a joke. I spent 30 minutes staring at ""Locking"", which eventually fails after exactly 15 minutes, and I tried two times. +The most meaningless thirty minutes in my life. +Was my Pipfile complex? No. I included ""flask"" with ""flake8"" + ""pylint"" + ""mypy"" + ""black"". +Every time someone tries to fix the ""dependency management"" of Python, it just gets worse. +I'm expecting Poetry to solve this, but who knows. 
+Maybe it's time to move on to typed languages for web development.",0.9950547536867304,False,2,6268 +2019-08-26 01:21:16.273,Python script denied in terminal,"I have a folder on my desktop that contains my script and when I run it in the pycharm ide it works perfectly but when I try to run from the terminal I get /Users/neelmukherjee/Desktop/budgeter/product_price.py: Permission denied +I'm not quite sure as to why this is happening +I tried using ls -al to check the permissions and for some reason, the file is labelled as +drwx------@ 33 neelmukherjee staff 1056 26 Aug 09:03 Desktop +I'm assuming this means that I should run this file as an admin. But how exactly can I do that? +My goal is to run my script from the terminal successfully and that may be possible by running it as an admin how should I do that?","Ok, so I was able to figure it out. I had to use +chmod +x to help make it executable first. +chmod +x /Users/neelmukherjee/Desktop/budgeter/product_price.py +and the run /Users/neelmukherjee/Desktop/budgeter/product_price.py",1.2,True,1,6269 +2019-08-26 06:10:10.417,How to watch an hdfs directory and copy the latest file that arrives in hdfs to local?,"I want to write a script in bash/python such that the script copies the latest file which arrives at hdfs directory.I know I can use inotify in local, but how to implement it in hdfs? +Can you please share the sample code for it. When I searched for it in google it gives me long codes.Is there a simpler way other than inotify(if its too complex)","Inelegant hack: +Mount hdfs using FUSE then periodically use find -cmin n to get a list of files created in the last n minutes. +Then use find -anewer to sort them.",0.0,False,1,6270 +2019-08-26 22:11:31.383,PySpark Group and apply UDF row by row operation,"I have a dataset that contains 'tag' and 'date'. I need to group the data by 'tag' (this is pretty easy), then within each group count the number of row that the date for them is smaller than the date in that specific row. I basically need to loop over the rows after grouping the data. I don't know how to write a UDF which takes care of that in PySpark. I appreciate your help.","you need an aggregation ? +df.groupBy(""tag"").agg({""date"":""min""}) +what about that ?",0.0,False,1,6271 +2019-08-26 23:20:06.887,How to install stuff like Requests and BeautifulSoup to use in Python?,"I am an extreme beginner with Python and its libraries and installation in general. I want to make an extremely simple google search web scraping tool. I was told to use Requests and BeautifulSoup. I have installed python3 on my Mac by using brew install python3 and I am wondering how to get those two libraries +I googled around and many results said that by doing brew install python3 it will automatically install pip so I can use something like pip install requests but it says pip: command not found. +by running python3 --version it says Python 3.7.4","Since you're running with Python3, not Python (which usually refers to 2.7), you should try using pip3. +pip on the other hand, is the package installer for Python, not Python3.",1.2,True,1,6272 +2019-08-27 14:17:56.143,Stop subprocess.check_output to print on video,"I'm writing a python program which uses subprocess to send files via cURL. It works, but for each file/zip it outputs the loading progress, time and other stuff which I don't want to be shown. 
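+(For reference, the call is more or less output = subprocess.check_output(['curl', '-T', file_path, upload_url]), with file_path and upload_url as placeholder names.)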
Does anyone know how to stop it?",You should add stderr=subprocess.DEVNULL or stderr=subprocess.PIPE to your check_output call.,1.2,True,1,6273 +2019-08-27 15:22:51.420,In a Jupyter Notebook how do I split a bulleted list into multiple text cells?,"Suppose I have a bulleted list in Jupyter in a markdown cell like this: + +Item1 +Item2 +Item3 + +Is there a way to convert this one-cell list into three markdown text cells?","Ctrl + Shift + - will split a cell at the cursor. Otherwise, you cannot process the text of a cell with code unless you're importing a notebook within another notebook.",0.0,False,1,6274 +2019-08-27 17:48:12.200,Saving large numpy 2d arrays,"I have an array with ~1,000,000 rows, each of which is a numpy array of 4,800 float32 numbers. +I need to save this as a csv file, however using numpy.savetxt has been running for 30 minutes and I don't know how much longer it will run for. +Is there a faster method of saving the large array as a csv? +Many thanks, +Josh","As pointed out in the comments, 1e6 rows * 4800 columns * 4 bytes per float32 is 18GiB. Writing a float to text takes ~9 bytes of text (estimating 1 for integer, 1 for decimal, 5 for mantissa and 2 for separator), which comes out to 40GiB. This takes a long time to do, since just the conversion to text itself is non-trivial, and disk I/O will be a huge bottleneck. +One way to optimize this process may be to convert the entire array to text on your own terms, and write it in blocks using Python's binary I/O. I doubt that will give you too much benefit though. +A much better solution would be to write the binary data to a file instead of text. Aside from the obvious advantages of space and speed, binary has the advantage of being searchable and not requiring transformation after loading. You know where every individual element is in the file; if you are clever, you can access portions of the file without loading the entire thing. Finally, a binary file is more likely to be highly compressible than a relatively low-entropy text file. +Disadvantages of binary are that it is not human-readable, and not as portable as text. The latter is not a problem, since transforming into an acceptable format will be trivial. The former is likely a non-issue given the amount of data you are attempting to process anyway. +Keep in mind that human readability is a relative term. A human cannot read 40GiB of numerical data with understanding. A human can process A) a graphical representation of the data, or B) scan through relatively small portions of the data. Both cases are suitable for binary representations. Case A) is straightforward: load, transform and plot the data. This will be much faster if the data is already in a binary format that you can pass directly to the analysis and plotting routines. Case B) can be handled with something like a memory mapped file. You only ever need to load a small portion of the file, since you can't really show more than say a thousand elements on screen at one time anyway. Any reasonable modern platform should be able to keep up with the I/O and binary-to-text conversion associated with a user scrolling around a table widget or similar. In fact, binary makes it easier since you know exactly where each element belongs in the file.",1.2,True,1,6275 +2019-08-28 08:59:57.570,Overriding celery result table (celery_taskmeta) for Postgres,"I am using celery to do some distributed tasks and want to override celery_taskmeta and add some more columns. I use Postgres as DB and SQLAlchemy as ORM.
I looked up the Celery docs but could not find out how to do it. +Help would be appreciated.","I would suggest a different approach - add an extra table with your extended data. This table would have a foreign-key constraint ensuring each record is related to its entry in celery_taskmeta. Why this approach? - It separates your domain (the domain of your application) from the Celery domain. Also, it does not involve modifying the table structure, which might (in theory it should not) cause trouble.",0.3869120172231254,False,1,6276 +2019-08-28 14:01:28.443,how to remove airflow install,"I tried pip uninstall airflow and pip3 uninstall airflow and both return + +Cannot uninstall requirement airflow, not installed + +I'd like to remove airflow completely and run a clean install.",Airflow is now apache-airflow.,1.2,True,1,6277 +2019-08-28 16:38:47.373,ImportError: cannot import name 'deque' from 'collections' how to clear this?,"I get + +ImportError: cannot import name 'deque' from 'collections' + +How do I resolve this issue? I have already changed the module name (the module name is collections.py) but this did not work.",In my case I had to rename my python file from keyword.py to keyword2.py.,0.0,False,2,6278 +2019-08-28 16:38:47.373,ImportError: cannot import name 'deque' from 'collections' how to clear this?,"I get + +ImportError: cannot import name 'deque' from 'collections' + +How do I resolve this issue? I have already changed the module name (the module name is collections.py) but this did not work.","I had the same problem when I ran the command python -m venv . I renamed my file from collections.py to my_collections.py. +It worked!",0.0,False,2,6278 +2019-08-30 04:47:58.880,Authenticating Google Cloud Storage SDK in Cloud Functions,"This is probably a really simple question, but I can't seem to find an answer online. +I'm using a Google Cloud Function to generate a CSV file and store the file in a Google Storage bucket. I've got the code working on my local machine using a json service account. +I want to push this code to a cloud function; however, I can't use the json service account file in the cloud environment - so how do I authenticate to my storage account in the cloud function?","You don't need the json service account file in the cloud environment. +If the GCS bucket and GCF are in the same project, you can just directly access it. +Otherwise, add your GCF default service account (note: it's the App Engine default service account) to your GCS project's IAM and grant the relevant GCS permission.",0.999329299739067,False,1,6279 +2019-08-30 15:12:00.713,Can selenium post real traffic on a website?,"I have written a script in Selenium (Python) which basically opens up a website and clicks on links in it, doing this multiple times. +The purpose of the software was to increase traffic on the website, but after the script was made it was observed that it is not posting real traffic on the website; the website just treats it as a test and ignores it. +Now I am wondering whether this is basically possible with Selenium or not? +I have searched around and I suppose it is possible, but I don't know how. Does anyone know about this?
Or is there any specific piece of code for this?","It does create traffic; the problem is that websites sometimes defend against bots and can guess whether the incoming connection is a bot or not. Maybe you should put some time.sleep(seconds) between actions to deceive the website's controls and make it think you are a person.",0.0,False,1,6280 +2019-08-30 22:08:19.797,what are the options to implement random search?,"So I want to implement random search but there is no clear-cut example as to how to do this. I am confused between the following methods: + +tune.randint() +ray.tune.suggest.BasicVariantGenerator() +tune.sample_from(lambda spec: blah blah np.random.choice()) + +Can someone please explain how and why these methods are the same/different for implementing random search.","Generally, you don't need to use ray.tune.suggest.BasicVariantGenerator(). +For the other two choices, it's up to what suits your need. tune.randint() is just a thin wrapper around tune.sample_from(lambda spec: np.random.randint(...)). You can do more expressive/conditional searches with the latter, but the former is easier to use.",0.0,False,1,6281 +2019-09-01 08:27:48.270,Python Linter installation issue with VScode,"[warning VSCode newbie here] When installing pylint from within VS Code I got this message: The script isort.exe is installed in 'C:\Users\fjanssen\AppData\Roaming\Python\Python37\Scripts' which is not on PATH. Which is correct. However, my Python is installed in C:\Program Files\Python37\ So I am thinking Python is installed for all users, while pylint seems to be installed for the user (me). Checking the command line that VS Code used to install pylint, it indeed seems to install for the user: & ""C:/Program Files/Python37/python.exe"" -m pip install -U pylint --user So, I have some questions on resolving this issue; 1 - how can I get the immediate issue resolved? - remove pylint as user - re-install for all users 2 - Will this (having python installed for all users) keep bugging me in the future? - should I re-install python for the current user only when using it with VS Code?","If the goal is to simply use pylint with VS Code, then you don't need to install it globally. Create a virtual environment and select that in VS Code as your Python interpreter and then pylint will be installed there instead of globally. That way you don't have to worry about PATH.",0.0,False,1,6282 +2019-09-01 14:17:27.727,Taking specified number of user inputs and storing each in a variable,"I am a beginner in python and want to know how to take just the user-specified number of inputs in one single line and store each input in a variable. For example: Suppose I have 3 test cases and have to pass 4 integers separated by whitespace for each such test case. The input should look like this: 3 1 0 4 3 2 5 -1 4 3 7 1 9 I know about the split() method that helps you to separate integers with a space in between. But since I need to input only 4 integers, I need to know how to write the code so that the computer would take only 4 integers for each test case, and then the input line should automatically move on, asking the user for input for the next test case. Other than that, the other thing I am looking for is how to store each integer for each test case in some variable so I can access each one later.","
+ (var_name) = input() +Or if you want to treat your input as an integer, and you are sure it is an integer, you would want to do this + (var_name) = int(input()) +Then you could access the input by calling up the var_name. +Hope that helped :D",0.0,False,1,6283 +2019-09-02 11:48:46.377,How to automatically update view once the database is updated in django?,"I have a problem in which I have to show data entered into a database without having to press any button or doing anything. +I am creating an app for a hospital, it has two views, one for a doctor and one for a patient. +I want as soon as the patient enters his symptoms, it shows up on doctor immediately without having to press any button. +I have no idea how to do this. +Any help would be appreciated. +Thanks in advance","You can't do that with Django solely. You have to use some JS framework (React, Vue, Angular) and WebSockets, for example.",0.0,False,1,6284 +2019-09-04 11:00:26.350,how do I give permission to bash to run to multiple gcloud commands from local jupyter notebook,"I am practicing model deployment to GCP cloud ML Engine. However, I receive errors stated below when I execute the following code section in my local jupyter notebook. Please note I do have bash installed in my local PC and environment variables are properly set. +%%bash +gcloud config set project $PROJECT +gcloud config set compute/region $REGION +Error messages: +-bash: line 1: /mnt/c/Users/User/AppData/Local/Google/Cloud SDK/google-cloud-sdk/bin/gcloud: Permission denied +-bash: line 2: /mnt/c/Users/User/AppData/Local/Google/Cloud SDK/google-cloud-sdk/bin/gcloud: Permission denied +CalledProcessError: Command 'b'gcloud config set project $PROJECT\ngcloud config set compute/region $REGION\n\n'' returned non-zero exit status 126.","Perhaps you installed Google Cloud SDK with root? +try +sudo gcloud config set project $PROJECT +and +sudo gcloud config set compute/region $REGION",0.0,False,1,6285 +2019-09-04 13:31:44.333,how to use breakpoint in mydll.dll using python3 and pythonnet,"I have function imported from a DLL file using pythonnet: +I need to trace my function(in a C# DLL) with Python.",you can hook a Visual Studio debugger to python.exe which runs your dll,0.0,False,1,6286 +2019-09-04 13:40:07.500,Python Oracle DB Connect without Oracle Client,"I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine. +Is it possible to connect to oracle DB in Python without installing the oracle client in the local machine where the python application will be stored and executed? +Like in Java, we can use the jdbc thin driver to acheive the same, how it can be achieved in Python. +Any help is appreciated +Installing oracle client, connect is possible through cx_Oracle module. +But in systems where the client is not installed, how can we connect to the DB.","It is not correct that java can connect to oracle without any oracle provided software. +It needs a compatible version of ojdbc*.jar to connect. Similarly python's cx_oracle library needs oracle instant-client software from oracle to be installed. +Instant client is free software and has a small footprint.",0.2655860252697744,False,2,6287 +2019-09-04 13:40:07.500,Python Oracle DB Connect without Oracle Client,"I am trying to build an application in python which will use Oracle Database installed in corporate server and the application which I am developing can be used in any local machine. 
+Is it possible to connect to the Oracle DB in Python without installing the Oracle client on the local machine where the python application will be stored and executed? +Like in Java, we can use the JDBC thin driver to achieve the same; how can it be achieved in Python? +Any help is appreciated. +With the Oracle client installed, connecting is possible through the cx_Oracle module. +But in systems where the client is not installed, how can we connect to the DB.",Installing the Oracle client is a huge pain. Could you instead create a web service on a system that does have OCI and then connect to it that way? This might end up being a better solution than direct access.,0.0,False,2,6287 +2019-09-05 03:55:31.020,How to get multi-GPU support in OpenNMT-py (pytorch)?,"I used Python 2.7 to run PyTorch with GPU support. I used this command to train the dataset using multiple GPUs. Can someone please tell me how I can fix this error with PyTorch in OpenNMT-py, or is there a way to get PyTorch multi-GPU support using Python 2.7? Here is the command that I tried. + + +CUDA_VISIBLE_DEVICES=1,2 + python train.py -data data/demo -save_model demo-model -world_size 2 -gpu_ranks 0 1 + + +This is the error: + +Traceback (most recent call last): + File ""train.py"", line 200, in + main(opt) + File ""train.py"", line 60, in main + mp = torch.multiprocessing.get_context('spawn') + AttributeError: 'module' object has no attribute 'get_context'","Maybe you can check whether your torch and python versions fit the OpenNMT requirements. I remember their torch requirement is 1.0 or 1.2 (1.0 is better). You may have to downgrade your latest version of torch. Hope that works.",0.0,False,1,6288 +2019-09-05 18:28:58.863,What does wave_read.readframes() return if there are multiple channels?,"I understand how the readframes() method works for mono audio input; however, I don't know how it will work for stereo input. Would it give a tuple of two byte objects?","A wave file has: +a sample rate of Wave_read.getframerate() frames per second (e.g. 44100 if from an audio CD), +a sample width of Wave_read.getsampwidth() bytes (i.e. 1 for 8-bit samples, 2 for 16-bit samples), +and Wave_read.getnchannels() channels (typically 1 for mono, 2 for stereo). +Every time you do a Wave_read.readframes(N), you get N * sample_width * n_channels bytes.",0.0,False,1,6289 +2019-09-07 03:28:26.677,Does SciPy have utilities for parsing and keeping track of the units associated with its constants?,"scipy.constants.physical_constants returns (value, unit, uncertainty) tuples for many specific physical constants. The units are given in the form of a string. (For example, one of the options for the universal gas constant has a unit field of 'J kg^-1 K^-1'.) +At first blush, this seems pretty useful. Keeping track of your units is very important in scientific calculations, but, for the life of me, I haven't been able to find any facilities for parsing these strings into something that can be tracked. Without that, there's no way to simplify the combined units after different values have been added, subtracted, etc., with each other. +I know I can manually declare the units of constants with separate libraries such as what's available in SymPy, but that would make SciPy's own units completely useless (maybe just a convenience for printouts). That sounds pretty absurd. I can't imagine that SciPy doesn't know how to deal with units. +What am I missing? +Edit: +I know that SciPy is a stack, and I am well aware of what libraries are part of it.
My question is whether SciPy knows how to work with the very units it spits out with its constants (or whether I have to throw out those units and manually redefine everything). As far as I can see, it can't actually parse its own unit strings (and nothing else in the ecosystem seems to know how to make heads or tails of them either). This doesn't make sense to me because if SciPy proper can't deal with these units, why would they be there in the first place? Not to mention, keeping track of your units across your calculations is the exact kind of thing you need to do in science. Forcing manual redefinitions of all the units someone went through the trouble of associating with all these constants doesn't make sense.","No, scipy the library does not have any notion of quantities with units and makes no guarantees when operating on quantities with units (from e.g. pint, astropy.Quantity or other objects from other unit-handling packages).",0.0,False,1,6290 +2019-09-07 11:50:52.290,LightGBM unexpected behaviour outside of jupyter,"I have this strange bug when I'm using a LightGBM model to calculate some predictions. +I trained a LightGBM model inside of jupyter and dumped it into a file using pickle. This model is used in an external class. +My problem is when I call my prediction function from this external class outside of jupyter it always predicts an output of 0.5 (on all rows). When I use the exact same class inside of jupyter I get the expected output. In both cases the exact same model is used with the exact same data. +How can this behavior be explained and how can I get the same results outside of jupyter? Does it have something to do with the fact I trained the model inside of jupyter? (I can't imagine why it would, but atm I have no clue where this bug is coming from.) +Edit: Used versions: +Both times the same lgb version is used (2.2.3), I also checked the python version which are equal (3.6.8) and all system paths (sys.path output). The paths are equal except for '/home/xxx/.local/lib/python3.6/site-packages/IPython/extensions' and '/home/xxx/.ipython'. +Edit 2: I copied the code I used inside of my jupyter and ran it as a normal python file. The model made this way now works both inside of jupyter and outside of it. I still wonder why this bug occurred.",It can't be a jupyter problem since jupyter is just an interface to communicate with python. The problem could be that you are using a different python environment and a different version of lgbm... Check import lightgbm as lgb and lgb.__version__ on both jupyter and your python terminal and make sure they are the same (or check if there have been some major changes between these versions),0.3869120172231254,False,1,6291 +2019-09-08 16:32:01.487,Create Python setup,I have to create a setup screen with tk that starts only at the first boot of the application where you will have to enter names etc. ... a sort of setup. Does anyone have any ideas on how to make it so that A) it is performed only the first time and B) the input can be saved and used in the other scripts? Thanks in advance,"Why not use a file to store the details? You could use a text file or you could use pickle to save a python object then reload it.
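+A rough sketch of the pickle part (the file name and dict contents are just placeholders): +import pickle +settings = {'name': 'whatever the user entered'}  # filled in by your tk setup screen +with open('settings.pkl', 'wb') as f: +    pickle.dump(settings, f) +...and in the other scripts: +with open('settings.pkl', 'rb') as f: +    settings = pickle.load(f)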
On starting your application you could check to see if the file exists and contains the necessary information; if it doesn't, you can activate your setup screen, and otherwise skip it.",0.3869120172231254,False,1,6292 +2019-09-09 13:09:00.117,What is the best way to combine two data sets that depend on each other?,"I am encountering a task and I am not entirely sure what the best solution is. +I currently have one data set in mongo that I use to display user data on a website; the backend is in Python. A different team in the company recently created an API that has additional data that I would like to show alongside the user data, and the data from the newly created API is paired to my user data (it shows specific data per user) that I will need to sync up. +I had initially thought of creating a cron job that runs weekly (as the ""other"" API data does not update often) and then taking the information and putting it directly into my data after pairing it up. +A coworker has suggested caching the ""other"" API data and then just returning the ""mixed"" data to display on the website. +What is the best course of action here? Actually adding the data to our data set would allow us to have 1 source of truth and not rely on the other endpoint, as well as doing less work each time we need the data. Also if we end up needing that information somewhere else in the project, we already have the data in our DB and can just use it directly without needing to re-organize/pair it. +Just looking for general pros and cons for each solution. Thanks!","Synchronization will always cost more than federation. I would either A) embrace CORS and integrate it in the front-end, or B) create a thin proxy in your Python App. +Which you choose depends on how quickly this API changes, whether you can respond to those changes, and whether you need graceful degradation in case of remote API failure. If it is not mission-critical data, and the API is reliable, just integrate it in the browser. If they support things like HTTP cache-control, all the better, the user's browser will handle it. +If the API is not scalable/reliable, then consider putting in a proxy server-side so that you can catch errors and provide graceful degradation.",1.2,True,1,6293 +2019-09-09 20:26:07.763,pandas pd.options.display.max_rows not working as expected,"I'm using pandas 0.25.1 in Jupyter Lab and the maximum number of rows I can display is 10, regardless of what pd.options.display.max_rows is set to. However, if pd.options.display.max_rows is set to less than 10 it takes effect, and if pd.options.display.max_rows = None then all rows show. Any idea how I can get a pd.options.display.max_rows of more than 10 to take effect?","min_rows sets the number of rows to be displayed from the top (head) and from the bottom (tail); it will be evenly split, despite putting in an odd number. If you only want a set number of rows to be displayed without reading everything into memory, another way is to use nrows = 'putnumberhere'. +e.g. results = pd.read_csv('ex6.csv', nrows = 5) # display 5 rows from the top (0 - 4) +If the dataframe has about 100 rows and you want to display only the first 5 rows from the top (no tail), use nrows.",-0.2012947653214861,False,1,6294 +2019-09-11 00:46:34.683,Using tensorflow object detection for either or detection,"I have used Tensorflow object detection for quite a while now. I am more of a user; I don't really know how it works. I am wondering whether it is possible to train it to recognize that an object is something or not something?
For example, I want to detect cracks on tiles. Can I use object detection to do so, where I show an image of a tile and it can tell me if there is a crack (and also show the location), or it will tell me if there is no crack on the tile? +I have tried to train using pictures with and without defects, using 2 classes (1 for defect and 1 for no defect). But the results keep showing both (if the picture has a defect) in 1 picture. Is there a way to show only the one with the defect? +Basically I would like to do defect checking. This is a simplistic case of 1 defect, but the actual case will have a few defects. +Thank you.","In case you're only expecting input images of tiles, either with defects or not, you don't need a class for no defect. +The API adds a background class for everything which is not the other classes. +So you simply need to state one class - defect - and tiles which are not detected as such are not defective. +So in your training set, simply give bounding boxes of defects, and no bounding box in case of no defect, and then your model should learn to detect the defects as mentioned above.",1.2,True,1,6295 +2019-09-11 16:52:17.283,How can I find memory leaks without external packages?,"I am writing a data mining script to pull information off of a program called Agisoft PhotoScan for my lab. PhotoScan uses its own Python library (and I'm not sure how to access pip for this particular build), which has caused me a few problems installing other packages. After dragging, dropping, and praying, I've gotten a few packages to work, but I'm still facing a memory leak. If there is no way around it, I can try to install some more packages to weed out the leak, but I'd like to avoid this if possible. +My understanding of Python garbage collection so far is that when an object loses its references, it should be deleted. I used sys.getrefcount() to check all my variables, but they all stay constant. I have a hunch that the issue could be in the mysql-connector package I installed, or in PhotoScan itself, but I am not sure how to go about testing. I will be more than happy to provide code if that will help!","It turns out that the memory leak was indeed in the PhotoScan program. I've worked around it by having a separate script open and close it, running my original script once each time. Thank you all for the help!",0.0,False,1,6296 +2019-09-15 06:56:39.743,Start cmd and run multiple commands in the created cmd instance,"I am trying to start a cmd window and then run a chain of commands in succession, one after the other, in that cmd window. Something like start cmd /k pipenv shell && py manage.py runserver. The start cmd should open a new cmd window (which actually happens), then pipenv shell should start a virtual environment within that cmd instance (this also happens), and py manage.py runserver should run in the created environment, but instead it runs where the script is called. Any ideas on how I can make this work?","Your py manage.py runserver command calls the Python interpreter of your main environment. In your case, you could use pipenv run python manage.py runserver, which detects the virtual env from your Pipfile and activates it to run your command.
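+So the whole thing would presumably become something like: +start cmd /k pipenv run python manage.py runserver +(assuming the new window opens in the directory that contains your Pipfile).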
An alternative is to use virtualenv, which creates the virtual env directly inside your project directory, and to call envname\Scripts\activate each time you want to run something inside your virtual env.",0.2012947653214861,False,1,6297 +2019-09-15 21:33:55.463,"structured numpy ndarray, how to get values","I have a structured numpy ndarray la = {'val1':0,'val2':1} and I would like to return the vals using the 0 and 1 as keys, so I wish to return val1 when I have 0 and val2 when I have 1, which should have been straightforward; however, my attempts have failed, as I am not familiar with this structure. +How do I return only the corresponding val, or an array of all vals, so that I can read them in order?","I just found out that I can use la.tolist() and it returns a dictionary, somehow, when I wanted a list; alas, from there on I was able to solve my problem.",0.0,False,1,6298 +2019-09-16 15:19:19.583,impossible to use pip,"I am starting with Python; I tried to use matplotlib in my code but I get the error ""ModuleNotFoundError: No module named 'matplotlib'"" in my cmd. So I tried to use pip on the cmd: pip install matplotlib. But I get another error, ""No python at 'C:...\Microsoft Visual Studio..."". Actually I don't use Microsoft Visual Studio anymore, so I uninstalled it, but I think I have to change the path for the pip module and I don't know how... I added the Python folder's Scripts directory to the environment variables, but it doesn't change anything. How can I use pip? Antoine","Your setup seems messed up. A couple of ideas: + +long term solution: Uninstall everything related to Python, make sure your PATH environment variables are clean, and reinstall Python from scratch. +short term solution: Since py seems to work, you could go along with it: py, py -3 -m pip install , and so on. + +If you feel comfortable enough you could try to salvage what works by looking at the output of py -0p; this should tell you where the potentially functional Python installations are, and you could get rid of the rest.",0.0,False,1,6299 +2019-09-16 16:45:45.577,How to create button based chatbot,"I have created a chatbot using RASA to work with free text and it is working fine. As per my new requirement I need to build a button-based chatbot which should follow a flowchart-like structure. I don't know how to do that; what I thought is to convert the flowchart into a graph data structure using networkx, but I am not sure whether it has that capability. I did search, but most of the examples use Dialogflow or Chatfuel. Can I do it using networkx? +Please help.","Sure, you can. +You just need each button to point to another intent. Each button should have the /intent_value as its payload, and this will cause the NLU to skip evaluation and simply predict the intent. Then you can just bind a trigger to the intent or use the utter_ method. +Hope that helps.",1.2,True,1,6300 +2019-09-16 19:35:35.813,Teradataml: Remove all temporary tables created by Teradata MLE functions,In teradataml how should the user remove temporary tables created by Teradata MLE functions?,Call remove_context() at the end of a session to trigger the dropping of tables.,0.0,False,1,6301 +2019-09-17 06:03:09.647,How to inherit controller of a third party module for customization Odoo 12?,"I have a module with a controller and I need to inherit it in a newly created module for some customization.
I searched about controller inheritance in Odoo and I found that we can inherit Odoo's base modules' controllers this way: +from odoo.addons.portal.controllers.portal import CustomerPortal, pager as portal_pager, get_records_pager +but how can I do this for a third-party module's controller? In my case, the third-party module directory is one step back from my own module's directory. If I should import the class of a third-party module controller, how should I do it?","It is not a problem that you are using a custom module. If the module is installed in the database, you can import from odoo.addons. +E.g.: from odoo.addons.your_module.controllers.main import MyClass",1.2,True,1,6302 +2019-09-17 13:31:40.087,how to convert high-cardinality categorical features into numeric for a predictive machine learning model?,"I have two columns with high-cardinality categorical values; one column (area_id) has 21878 unique values and the other (page_entry) has 800 unique values. I am building a predictive ML model to predict the hits on a webpage. +column information: +area_id: all the locations that were visited during the session. (has the location code number of different areas of a webpage) +page_entry: describes the landing page of the session. +how do I change these two columns into numerical features apart from one-hot encoding? +thank you.","One approach could be to group your categorical levels into smaller buckets using business rules. In your case, for the feature area_id you could simply group them based on their geographical location; say, all area_ids from a single district (or for that matter any other level of aggregation) will be replaced by a single id. Similarly, for page_entry you could group similar pages based on some attributes, like the nature of the web page: sports, travel, etc. In this way you could significantly reduce the number of dimensions of your variables. +Hope this helps!",0.0,False,1,6303 +2019-09-18 17:09:01.753,How to restrict the maximum size of an element in a list in Python?,"Problem Statement: +There are 5 sockets and 6 phones. Each phone takes 60 minutes to charge completely. What is the least time required to charge all phones? +The phones can be interchanged along the sockets. +What I've tried: +I've made a list with 6 elements whose initial value is 0. I've defined two functions. Switch function, which moves each phone one socket to the left. Charge function, which adds a value of 10 (assumed charging time) to each element, except the last (as there are only 5 sockets). As the program proceeds, how do I restrict individual elements to 60, while other lower-value elements still get 10 added until they attain the value of 60?","In the charge function, add an if condition that checks the value of the element. +I'm not sure what your add function looks like exactly, but I would define the pseudocode to look something like this: +if element < 60: +add 10 to the element +This way, if an element is greater than or equal to 60, it won't get caught by the if condition and won't get anything added to it.",0.0,False,2,6304 +2019-09-18 17:09:01.753,How to restrict the maximum size of an element in a list in Python?,"Problem Statement: +There are 5 sockets and 6 phones. Each phone takes 60 minutes to charge completely. What is the least time required to charge all phones? +The phones can be interchanged along the sockets. +What I've tried: +I've made a list with 6 elements whose initial value is 0. I've defined two functions.
Switch function, which moves each phone one socket to the left. Charge function, which adds a value of 10 (assumed charging time) to each element, except the last (as there are only 5 sockets). As the program proceeds, how do I restrict individual elements to 60, while other lower-value elements still get 10 added until they attain the value of 60?","You cannot simply restrict the maximum element size. What you can do is check the element size with an if condition and terminate the process. By the way, the answer is 6x60/5 = 72 mins.",0.0,False,2,6304 +2019-09-18 18:44:22.307,how to display plot images outside of jupyter notebook?,"So, this might be an utterly dumb question, but I have just started working with python and its data science libs, and I would like to see seaborn plots displayed, but I prefer to work with editors I have experience with, like VS Code or PyCharm, instead of Jupyter notebook. Of course, when I run the python code, the console does not display the plots as those are images. So how do I get to display and see the plots when not using jupyter?","You can try to run a matplotlib example code with the python console or ipython console. They will show you a window with your plot. +Also, you can use Spyder instead of those consoles. It is free, and works well with python libraries for data science. Of course, you can check your plots in Spyder.",0.0,False,1,6305 +2019-09-19 18:35:33.863,Tasks linger in celery amqp when publisher is terminated,"I am using Celery with a RabbitMQ server. I have a publisher, which could potentially be terminated by a SIGKILL, and since this signal cannot be watched, I cannot revoke the tasks. What would be a common approach to revoking the tasks when the publisher is not alive anymore? +I experimented with an interval on the worker side, but the publisher is obviously not registered as a worker, so I don't know how I can detect a timeout.","There's nothing built into celery to monitor the producer / publisher status -- only the worker / consumer status. There are other alternatives that you can consider, for example using a redis expiring key that has to be updated periodically by the publisher and that can serve as a proxy for whether the publisher is alive. Then, in the task, check whether the flag for the publisher still exists within redis, and if it doesn't, the task returns without doing anything.",0.6730655149877884,False,2,6306 +2019-09-19 18:35:33.863,Tasks linger in celery amqp when publisher is terminated,"I am using Celery with a RabbitMQ server. I have a publisher, which could potentially be terminated by a SIGKILL, and since this signal cannot be watched, I cannot revoke the tasks. What would be a common approach to revoking the tasks when the publisher is not alive anymore? +I experimented with an interval on the worker side, but the publisher is obviously not registered as a worker, so I don't know how I can detect a timeout.","Another solution, which works in my case, is to add the next task only if the currently processed ones are finished. In this case the queue doesn't fill up.",1.2,True,2,6306 +2019-09-19 21:07:37.810,"Python ""Magic methods"" are really methods?","I know how to use magic methods in python, but I would like to understand more about them. +For this I would like to consider three examples: +1) __init__: +We use this as a constructor at the beginning of most classes. If this is a method, what is the object associated with it? Is it a basic python object that is used to generate all the other objects?
+2) __add__ +We use this to change the behaviour of the operator +. The same question as above. +3) __name__: +The most common use of it is inside this kind of structure: if __name__ == ""__main__"": +This returns True when you are running the module as the main program. +My question is: is __name__ a method or a variable? If it is a variable, what is the method associated with it? If it is a method, what is the object associated with it? +Since I do not understand these methods very well, maybe the questions are not well formulated. I would like to understand how these methods are constructed in Python.","The object is the class that's being instantiated, a.k.a. the Foo in Foo.__init__(actual_instance) +In a + b the object is a, and the expression is equivalent to a.__add__(b) +__name__ is a variable. It can't be a method because then comparisons with a string would always be False, since a function is never equal to a string",0.2012947653214861,False,1,6307 +2019-09-19 21:07:37.810,Python - how to check if user is on the desktop,"I am trying to write a program with python that works like Android folders, but for Windows. I want the user to be able to single-click on a desktop icon and then a window will open with the contents of the folder in it. After giving up trying to find a way to allow single-click to open a desktop application (I am aware that you can allow single-click for all files and folders, but I want it for only one application), I decided to check if the user clicked in the location of the file and if they were on the desktop while they were doing that. So what I need to know is how to check if the user is viewing the desktop in python. Thanks, Harry TLDR; how to check if user is viewing the desktop - python","I don't know if ""single clicking"" would work in any way, but you can use PyAutoGUI to automatically click as many times as you want",0.0,False,1,6308 +2019-09-20 11:50:30.050,How to fine-tune a keras model with existing plus newer classes?,"Good day! +I have a celebrity dataset on which I want to fine-tune a keras built-in model. From what I have explored and done so far, we remove the top layers of the original model (or preferably pass include_top=False) and add our own layers, and then train our newly added layers while keeping the previous layers frozen. This whole thing is pretty intuitive. +Now what I require is that my model learns to identify the celebrity faces while also being able to detect all the other objects it has been trained on before. Originally, the models trained on imagenet come with an output layer of 1000 neurons, each representing a separate class. I'm confused about how it should be able to detect the new classes. All the transfer learning and fine-tuning articles and blogs tell us to replace the original 1000-neuron output layer with a different N-neuron layer (N = number of new classes). In my case, I have two celebrities, so if I have a new layer with 2 neurons, I don't know how the model is going to classify the original 1000 imagenet objects. +I need a pointer on this whole thing: how exactly can a pre-trained model be taught two new celebrity faces while also maintaining its ability to recognize all the 1000 imagenet objects? +Thanks!","With transfer learning, you can make the trained model classify among the new classes on which you just trained, using the features learned from the new dataset and the features learned by the model from the dataset on which it was trained in the first place.
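+(For reference, the usual head swap looks roughly like this; a sketch only, assuming tf.keras and your 2-celebrity case: +import tensorflow as tf +base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False, pooling='avg') +base.trainable = False  # freeze the pre-trained layers +out = tf.keras.layers.Dense(2, activation='softmax')(base.output) +model = tf.keras.Model(base.input, out) +This new 2-neuron head knows nothing about the original 1000 ImageNet classes.)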
Unfortunately, you cannot make the model classify among all the classes (the original dataset's classes + the second dataset's classes), because when you add the new classes, it keeps weights only for classifying them. +But let's say, for experimentation, you change the number of output neurons in the last layer (equal to the number of old + new classes); it will then assign random weights to these neurons, which on prediction will not give you a meaningful result. +This whole idea of making the model classify among old + new classes is still an open research area. +However, one way you can achieve it is to train your model from scratch on the whole data (old + new).",0.5457054096481145,False,1,6309