Dataset columns:
Q_CreationDate: string (length 23)
Title: string (11 to 149 characters)
Question: string (25 to 6.53k characters)
Answer: string (15 to 5.1k characters)
Score: float64 (-1 to 1.2)
Is_accepted: bool (2 classes)
N_answers: int64 (1 to 17)
Q_Id: int64 (0 to 6.76k)
2018-11-15 11:25:13.747
How Do I store downloaded pdf files to Mongo DB
I downloaded some PDF files and stored them in a directory. I need to insert them into a Mongo database with Python code, so how could I do this? I need to store them with three columns (pdf_name, pdf_ganerateDate, FlagOfWork) or something like that.
You can use GridFS. Please check this URL: http://api.mongodb.com/python/current/examples/gridfs.html. It will help you store any file in MongoDB and get it back. You can save the file metadata in another collection.
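A minimal sketch of how that could look with pymongo and gridfs; the connection string, database name, collection name, and file name below are placeholders, not anything from the question.

```python
from datetime import datetime
from pymongo import MongoClient
import gridfs

client = MongoClient("mongodb://localhost:27017/")  # placeholder connection string
db = client["mydb"]                                 # placeholder database name
fs = gridfs.GridFS(db)

# Store the PDF bytes in GridFS and keep the metadata in a separate collection.
with open("report.pdf", "rb") as f:                 # placeholder file name
    file_id = fs.put(f, filename="report.pdf")

db.pdf_metadata.insert_one({
    "pdf_name": "report.pdf",
    "pdf_generateDate": datetime.utcnow(),
    "FlagOfWork": False,
    "gridfs_id": file_id,
})
```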
0.386912
false
1
5,821
2018-11-15 15:28:09.797
how to use pipenv to run file in current folder
I used pipenv to create a virtual environment in a folder. However, the environment seems to be in the path: /Users/....../.local/share/virtualenvs/...... And when I run the command pipenv run python train.py, I get the error: can't open file 'train.py': [Errno 2] No such file or directory. How do I run a file in the folder where I created the virtual environment?
You need to be in the same directory as the file you want to run, then use: pipenv run python train.py. Note: you may be in the project's main directory while the file you need to run is inside a subdirectory of your project. If you used Django to create your project, it will create two nested folders with the same name, so as a best practice rename the top directory to 'yourname-project' and then run the pipenv run python train.py command inside the 'yourname' directory.
1.2
true
1
5,822
2018-11-15 20:21:37.897
xgboost feature importance of categorical variable
I am using XGBClassifier to train in Python and there are a handful of categorical variables in my training dataset. Originally, I planned to convert each of them into a few dummies before I feed in my data, but then the feature importance will be calculated for each dummy, not the original categorical ones. Since I also need to order all of my original variables (including numerical + categorical) by importance, I am wondering how to get the importance of my original variables? Is it simply a matter of adding up?
You could probably get by with summing the individual categories' importances into their original, parent category. But, unless these features are high-cardinality, my two cents would be to report them individually. I tend to err on the side of being more explicit with reporting model performance/importance measures.
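For illustration, a rough sketch of the summing idea, assuming the dummies were created with a parent-name prefix (e.g. via pandas.get_dummies); the feature names and importance numbers here are made up.

```python
import pandas as pd

# Hypothetical importances keyed by dummy-encoded feature name.
importances = pd.Series({
    "age": 0.30,
    "income": 0.25,
    "city_London": 0.10,
    "city_Paris": 0.20,
    "city_Tokyo": 0.15,
})

# Map each dummy back to its parent category (the part before the separator)
# and sum the importances within each parent.
parent = importances.index.to_series().str.split("_").str[0]
grouped = importances.groupby(parent).sum().sort_values(ascending=False)
print(grouped)
```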
0
false
1
5,823
2018-11-15 20:22:49.817
How to run a briefly running Docker container on Azure on a daily basis?
In the past, I've been using WebJobs to schedule small recurrent tasks that perform a specific background task, e.g., generating a daily summary of user activities. For each task, I've written a console application in C# that was published as an Azure Webjob. Now I'd like to daily execute some Python code that is already working in a Docker container. I think I figured out how to get a container running in Azure. Right now, I want to minimize the operation cost since the container will only run for a duration of 5 minutes. Therefore, I'd like to somehow schedule that my container starts once per day (at 1am) and shuts down after completion. How can I achieve this setup in Azure?
I'd probably write a scheduled build job on VSTS or whatever to run at 1am daily and launch a container on Azure Container Instances. The container should shut down on its own when the program exits (so your program has to do that without help from outside).
1.2
true
1
5,824
2018-11-16 16:47:57.803
MongoDB - how can i set a documents limit to my capped collection?
I'm fairly new to MongoDB. I need my Python script to query new entries from my database in real time, but the only ways to do this seem to be replica sets (my database is not a replica set) or a tailable cursor, which is only for capped collections. From what I understood, a capped collection has a limit, but since I don't know how big my database is going to be, or when I'm going to need to send data there, I am thinking of putting the limit at 3-4 million documents. Would this be possible? How can I do that?
So, do you want to increase the size of the capped collection? If yes, and if you know the average document size, then you may define the size like: db.createCollection("sample", { capped : true, size : 10000000, max : 5000000 } ). Here 5000000 is the maximum number of documents, with a size limit of 10000000 bytes.
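The same call made from Python via pymongo might look roughly like this; the connection string and database name are placeholders, and the limits are just the numbers from the example above.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # placeholder connection string
db = client["mydb"]                                 # placeholder database name

# Capped collection limited to ~10 MB or 5,000,000 documents, whichever is hit first.
db.create_collection("sample", capped=True, size=10000000, max=5000000)
```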
0.386912
false
1
5,825
2018-11-17 02:57:21.293
Import aar of Android library in Python
I have written an Android library and built an aar file. And I want to write a Python program that uses the aar library. Is it possible to do that? If so, how? Thanks
There is no way to include all dependencies in your aar file. So, subject to the open source licenses, you can add their sources to your project.
0
false
1
5,826
2018-11-17 12:15:24.270
GraphQL/Graphene for backend calls in Django's templates
I just installed Graphene on my Django project and would like to use it also for the back end, in templating. So far, I only find tutorials on how to use it for the front end, with no mention of the back end. Should I assume that it is not a good idea to use it instead of a SQL database? If so, why? Is there a downside in speed compared to SQL databases like MySQL? What's the best option for retrieving the data for templates in Python? I mean, best for performance. Thanks.
GraphQL is an API specification. It doesn't specify how data is stored, so it is not a replacement for a database. If you're using GraphQL, you don't use Django templates to specify the GraphQL output, because GraphQL specifies the entire HTTP response from the web service, so this question doesn't make sense.
0.673066
false
1
5,827
2018-11-17 18:20:40.807
How to use F-score as error function to train neural networks?
I am pretty new to neural networks. I am training a network in tensorflow, but the number of positive examples is much much less than negative examples in my dataset (it is a medical dataset). So, I know that F-score calculated from precision and recall is a good measure of how well the model is trained. I have used error functions like cross-entropy loss or MSE before, but they are all based on accuracy calculation (if I am not wrong). But how do I use this F-score as an error function? Is there a tensorflow function for that? Or I have to create a new one? Thanks in advance.
The loss value and accuracy are different concepts. The loss value is used for training the NN, whereas accuracy or other metrics are used to evaluate the training result.
0
false
1
5,828
2018-11-17 20:57:16.567
How to determine file path in Google colab?
I mounted my drive using this: from google.colab import drive drive.mount('/content/drive/') I have a file inside a folder and I want its path; how do I determine the path? Say the folder that contains the file is named 'x' inside my drive.
The path will be /content/drive/My\ Drive/x/the_file.
1.2
true
2
5,829
2018-11-17 20:57:16.567
How to determine file path in Google colab?
I mounted my drive using this: from google.colab import drive drive.mount('/content/drive/') I have a file inside a folder and I want its path; how do I determine the path? Say the folder that contains the file is named 'x' inside my drive.
The path as a parameter for a function will be /content/drive/My Drive/x/the_file, so without a backslash inside My Drive.
0.545705
false
2
5,829
2018-11-17 23:12:26.597
virtualenv - Birds Eye View of Understanding
Using Windows, learning about virtualenv. Here is my understanding of it and a few questions that I have. Please correct me if my understanding is incorrect. virtualenvs are environments where your pip dependencies and their selected versions are stored for a particular project. A folder is made for your project and inside there are the dependencies. I was told you would not want to save your .py scripts inside of the virtual ENV; if that's the case, how do I access the virtual env when I want to run that project? Open it up in the command line with source ENV/bin/activate, then cd my way to where my script is stored? By running pip freeze, that creates a requirements.txt file in that project folder that is just a txt copy of the dependencies of that virtual env? If I'm in a second virtualenv, how do I import another virtualenv's requirements? I've been to the documentation but I still don't get it. $ env1/bin/pip freeze > requirements.txt $ env2/bin/pip install -r requirements.txt Guess I'm confused by the "requirements" description. Isn't it best practice to always call our requirements requirements.txt? If that's the case, how does env2 know I want env1's requirements? Thank you for any info or suggestions. Really appreciate the assistance. I created a virtualenv C:\Users\admin\Documents\Enviorments>virtualenv django_1 Using base prefix 'c:\\users\\admin\\appdata\\local\\programs\\python\\python37-32' New python executable in C:\Users\admin\Documents\Enviorments\django_1\Scripts\python.exe Installing setuptools, pip, wheel...done. How do I activate it? source django_1/bin/activate doesn't work? I've tried: source C:\Users\admin\Documents\Enviorments\django_1/bin/activate Every time I get: 'source' is not recognized as an internal or external command, operable program or batch file.
virtualenv simply creates a new Python environment for your project. Think of it as another copy of Python that you have on your system. A virtual environment is helpful for development, especially if you will need different versions of the same libraries. The answer to your first question is: yes, for each project where you use virtualenv, you need to activate it first. After activating, when you run a python script (not just your project's scripts, but any python script), it will use the dependencies and configuration of the active Python environment. Answer to the second question: pip freeze > requirements.txt will create the requirements file in the active folder, not in your project folder. So, let's say in your cmd/terminal you are in C:\Desktop, then the requirements file will be created there. If you're in the C:\Desktop\myproject folder, the file will be created there. The requirements file will contain the packages installed in the active virtualenv. The answer to the 3rd question is related to the second. Simply put, you need to write the full path of the second requirements file. So if you are in the first project and want to install packages from the second virtualenv, you run it like env2/bin/pip install -r /path/to/my/first/requirements.txt. If in your terminal you are in an active folder that does not have a requirements.txt file, then running pip install will give you an error. The command does not know which requirements file you want to use; you specify it.
0
false
2
5,830
2018-11-17 23:12:26.597
virtualenv - Birds Eye View of Understanding
Using Windows, learning about virtualenv. Here is my understanding of it and a few questions that I have. Please correct me if my understanding is incorrect. virtualenvs are environments where your pip dependencies and their selected versions are stored for a particular project. A folder is made for your project and inside there are the dependencies. I was told you would not want to save your .py scripts inside of the virtual ENV; if that's the case, how do I access the virtual env when I want to run that project? Open it up in the command line with source ENV/bin/activate, then cd my way to where my script is stored? By running pip freeze, that creates a requirements.txt file in that project folder that is just a txt copy of the dependencies of that virtual env? If I'm in a second virtualenv, how do I import another virtualenv's requirements? I've been to the documentation but I still don't get it. $ env1/bin/pip freeze > requirements.txt $ env2/bin/pip install -r requirements.txt Guess I'm confused by the "requirements" description. Isn't it best practice to always call our requirements requirements.txt? If that's the case, how does env2 know I want env1's requirements? Thank you for any info or suggestions. Really appreciate the assistance. I created a virtualenv C:\Users\admin\Documents\Enviorments>virtualenv django_1 Using base prefix 'c:\\users\\admin\\appdata\\local\\programs\\python\\python37-32' New python executable in C:\Users\admin\Documents\Enviorments\django_1\Scripts\python.exe Installing setuptools, pip, wheel...done. How do I activate it? source django_1/bin/activate doesn't work? I've tried: source C:\Users\admin\Documents\Enviorments\django_1/bin/activate Every time I get: 'source' is not recognized as an internal or external command, operable program or batch file.
* disclaimer * I mainly use conda environments instead of virtualenv, but I believe that most of this is the same across both of them and is true to your case. You should be able to access your scripts from any environment you are in. If you have virtenvA and virtenvB then you can access your script from inside either of your environments. All you would do is activate one of them and then run python /path/to/my/script.py, but you need to make sure any dependent libraries are installed. Correct, but for clarity the requirements file contains a list of the dependencies by name only. It doesn't contain any actual code or packages. You can print out a requirements file but it should just be a list which says package names and their version numbers. Like pandas 1.0.1 numpy 1.0.1 scipy 1.0.1 etc. In the lines of code you have here you would export the dependencies list of env1 and then you would install these dependencies in env2. If env2 was empty then it will now just be a copy of env1, otherwise it will be the same but with all the packages of env1 added and if it had a different version number of some of the same packages then this would be overwritten
0
false
2
5,830
2018-11-19 08:19:34.017
How do I efficiently understand a framework with sparse documentation?
I have the problem that for a project I need to work with a framework (Python) that has poor documentation. I know what it does, since it is the back end of a running application. I also know that no framework is good if the documentation is bad and that I should probably code it myself. But I have a time constraint. Therefore my question is: is there a recipe for how to understand a poorly documented framework? What I have tried until now is checking some functions and identifying the organizational units in the framework, but I am lacking a system to do it more effectively.
If I were you, with time constraints and bound to use a specific framework, I'd go about it in the following manner: list the use cases I want to implement using the framework; identify the APIs provided by the framework that help me implement those use cases; prototype the use cases based on the available documentation and reading. The prototyping is not implementing the entire use case, but identifying the building blocks around the case and implementing them. E.g., if my use case is to fetch the Students along with their courses, and if I were using Hibernate to implement it, I would prototype the database access, validating how easily I am able to access the database using Hibernate, or how easily I am able to get the relational data by means of joining/aggregation etc. The prototyping will help me figure out the possible limitations/bugs in the framework. If the limitations are show-stoppers, I will implement the supporting APIs myself, or I can make the call to scrap the entire framework and write one myself, whichever makes more sense.
0.386912
false
1
5,831
2018-11-20 02:45:03.200
Python concurrent.futures.ThreadPoolExecutor max_workers
I have been searching for a long time on the net, but to no avail. Please help or try to give me some ideas how to achieve this. When I use the Python module concurrent.futures.ThreadPoolExecutor(max_workers=None), I want to know what a suitable number for max_workers is. I've read the official document, but I still don't know a suitable number when coding. "Changed in version 3.5: If max_workers is None or not given, it will default to the number of processors on the machine, multiplied by 5, assuming that ThreadPoolExecutor is often used to overlap I/O instead of CPU work and the number of workers should be higher than the number of workers for ProcessPoolExecutor." How can I understand "max_workers" better? This is my first time asking a question; thank you very much.
You can think of max_workers as the number of threads. If you want to make the best use of the CPUs, you should keep them running (instead of sleeping). Ideally, if you set it to None, there will be (CPU count * 5) threads at most. On average, each CPU has 5 threads to schedule, so if one of them falls asleep, another thread can be scheduled.
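A small illustration of that default, assuming an I/O-bound task; the sleeping function is a stand-in for real network or disk work, and the multiply-by-5 default applies to Python 3.5 through 3.7 as quoted in the question.

```python
import concurrent.futures
import os
import time

def io_bound_task(n):
    time.sleep(0.1)          # stand-in for a network or disk wait
    return n * n

# With max_workers=None, Python 3.5-3.7 defaults to os.cpu_count() * 5,
# on the assumption that threads mostly wait on I/O rather than burn CPU.
print("default would be about", (os.cpu_count() or 1) * 5, "workers")

with concurrent.futures.ThreadPoolExecutor(max_workers=None) as executor:
    results = list(executor.map(io_bound_task, range(20)))
print(results)
```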
0.999909
false
1
5,832
2018-11-20 20:23:47.973
wget with subprocess.call()
I'm working on a domain fronting project. Basically I'm trying to use the subprocess.call() function to interpret the following command: wget -O - https://fronteddomain.example --header 'Host: targetdomain.example' With the proper domains, I know how to domain front, that is not the problem. Just need some help with writing using the python subprocess.call() function with wget.
I figured it out using curl: call(["curl", "-s", "-H", "Host: targetdomain.example", "-H", "Connection: close", "frontdomain.example"])
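For completeness, a hedged sketch of both the curl call and the wget call the question originally asked about; the domains are the placeholders from the question and both snippets assume the respective binary is available on PATH.

```python
from subprocess import call

# curl variant (each option and its value are separate list items)
call(["curl", "-s", "-H", "Host: targetdomain.example",
      "-H", "Connection: close", "https://fronteddomain.example"])

# wget variant, equivalent to:
#   wget -O - https://fronteddomain.example --header 'Host: targetdomain.example'
call(["wget", "-O", "-", "https://fronteddomain.example",
      "--header", "Host: targetdomain.example"])
```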
1.2
true
1
5,833
2018-11-20 23:58:45.450
How to install Poppler to be used on AWS Lambda
I have to run pdf2image on my Python Lambda Function in AWS, but it requires poppler and poppler-utils to be installed on the machine. I have tried to search in many different places how to do that but could not find anything or anyone that have done that using lambda functions. Would any of you know how to generate poppler binaries, put it on my Lambda package and tell Lambda to use that? Thank you all.
Hi @Alex Albracht, thanks for the compiled easy instructions! They helped a lot. But I really struggled with getting the lambda function to find the poppler path, so I'll try to add that here in an effort to make it clear. The binary files should go in a zip folder with a structure like: poppler.zip -> bin/poppler, where the poppler folder contains the binary files. This zip folder can then be uploaded as a layer in AWS Lambda. For pdf2image to work, it needs the poppler path. This should be included in the lambda function in the format "/opt/bin/poppler". For example: poppler_path = "/opt/bin/poppler" pages = convert_from_path(PDF_file, 500, poppler_path=poppler_path)
0
false
1
5,834
2018-11-21 13:30:25.713
CPLEX Error 1016: Promotional version , use academic version CPLEX
I am using Python with CPLEX. When I finished my model and ran the program, it threw the following error: CplexSolverError: CPLEX Error 1016: Promotional version. Problem size limits exceeded. I have IBM Academic CPLEX installed; how can I make Python recognize this and not the promotional version?
You can go to the directory where you installed CPLEX, for example D:\Cplex. There you will see a folder named cplex; open it, then go to python --> choose the version of your Python (e.g. 3.6), then choose the folder x64_win64, where you will see another item named cplex. Copy it into your Python site-packages, and then you will not be restricted.
1.2
true
1
5,835
2018-11-23 22:49:20.307
How can i create a persistent data chart with Flask and Javascript?
I want to add a real-time chart to my Flask webapp. This chart, in addition to the current updated data, should contain historical data too. At the moment I can create the chart and I can make it real time, but I have no idea how to make the data 'persistent', so I can't see what the chart looked like days or weeks ago. I'm using a Javascript charting library, while data is being sent from my Flask script, but what is not really clear to me is how I can "store" my data in Javascript. At the moment, indeed, the chart resets each time the page is loaded. How would it be possible to accomplish that? Is there an example for it?
You can try to store the data in a database and/or in a file and extract it from there. You could also try to use Dash, or you could add a menu on the right side with dates, like 21 September, to see the chart from that day. For Dash, you can look at Sentdex on YouTube.
0
false
1
5,836
2018-11-25 13:55:55.643
How do I count how many items are in a specific row in my RDD
As you can tell, I'm fairly new to using PySpark. My RDD is set out as follows: (ID, First name, Last name, Address) (ID, First name, Last name, Address) (ID, First name, Last name, Address) (ID, First name, Last name, Address) (ID, First name, Last name, Address) Is there any way I can count how many of these records I have stored within my RDD, such as counting all the IDs in the RDD, so that the output would tell me I have 5 of them? I have tried using RDD.count(), but that just seems to return how many items I have in my dataset in total.
If you have RDD of tuples like RDD[(ID, First name, Last name, Address)] then you can perform below operation to do different types of counting. Count the total number of elements/Rows in your RDD. rdd.count() Count Distinct IDs from your above RDD. Select the ID element and then do a distinct on top of it. rdd.map(lambda x : x[0]).distinct().count() Hope it helps to do the different sort of counting. Let me know if you need any further help here. Regards, Neeraj
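A runnable toy version of those two counts, assuming a local SparkContext and made-up records:

```python
from pyspark import SparkContext

sc = SparkContext("local[*]", "count-example")

records = [
    (1, "Ada", "Lovelace", "London"),
    (2, "Alan", "Turing", "Wilmslow"),
    (1, "Ada", "Lovelace", "London"),   # duplicate ID on purpose
]
rdd = sc.parallelize(records)

print(rdd.count())                                   # total rows -> 3
print(rdd.map(lambda x: x[0]).distinct().count())    # distinct IDs -> 2

sc.stop()
```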
0
false
1
5,837
2018-11-25 19:22:29.680
Adding charts to a Flask webapp
I created a web app with Flask where I'll be showing data, so I need charts for it. The problem is that I don't really know how to do that, so I'm trying to find the best way to do it. I tried to use a Javascript charting library on my frontend and send the data to the chart using SocketIO, but the problem is that I need to send that data frequently, and at a certain point I'll be having a lot of data, so sending a huge load of data each time through AJAX/SocketIO would not be the best thing to do. To solve this, I had this idea: could I generate the chart from my backend, instead of sending data to the frontend? I think it would be the better thing to do, since I won't have to send the data to the frontend each time and there won't be a need to generate a ton of data each time the page is loaded, since the chart will be processed on the frontend. So would it be possible to generate a chart from my Flask code in Python and visualize it on my webpage? Is there a good library to do that?
Try to use Dash; it is a Python library for web charts.
1.2
true
1
5,838
2018-11-25 22:35:57.257
How to strip off left side of binary number in Python?
I got this binary number 101111111111000. I need to strip off the 8 most significant bits and end up with 11111000. I tried 101111111111000 << 8, but this results in 10111111111100000000000; it doesn't have the same effect as >>, which strips the lower bits. So how can this be done? The final result MUST BE binary type.
To achieve this for a number x with n digits, one can use x & (2**(len(bin(x))-2-8)-1). The -2 strips the '0b' prefix and the -8 strips the leftmost 8 bits. Simply said, it ANDs your number with just enough 1s that the 8 leftmost bits are set to 0.
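A small sketch of the same masking idea, using the example number from the question:

```python
x = 0b101111111111000          # 15-bit example from the question

# Same idea as the answer: AND with a mask of ones covering all but the
# 8 most significant bits. len(bin(x)) - 2 is the bit length of x.
mask = 2 ** (len(bin(x)) - 2 - 8) - 1
result = x & mask

print(bin(result))             # prints 0b1111000 (the 7 low bits that remain)
```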
0
false
1
5,839
2018-11-26 06:17:56.463
how do I clear a printed line and replace it with updated variable IDLE
I need to clear a printed line, but so far I have found no good answers for using python 3.7, IDLE on windows 10. I am trying to make a simple code that prints a changing variable. But I don't want tons of new lines being printed. I want to try and get it all on one line. Is it possible to print a variable that has been updated later on in the code? Do remember I am doing this in IDLE, not kali or something like that. Thanks for all your help in advance.
The Python language definition defines when bytes will be sent to a file, such as sys.stdout, the default file for print. It does not define what the connected device does with the bytes. When running code from IDLE, sys.stdout is initially connected to IDLE's Shell window. Shell is not a terminal and does not interpret terminal control codes other than '\n'. The reasons are a) IDLE is aimed at program development, by programmers, rather than program running by users, and developers sometimes need to see all the output from a program; and b) IDLE is cross-platform, while terminal behaviors are various, depending on the system, settings, and current modes (such as insert versus overwrite). However, I am planning to add an option to run code in an IDLE editor with sys.stdout directed to the local system terminal/console.
0.386912
false
1
5,840
2018-11-27 09:51:12.057
how to run python in eclipse with both py2 and py3?
Background: I installed both Python 2.7 and Python 3.7; Eclipse has PyDev installed, with two interpreters configured, one for each Python version; I have a project with some py scripts. Question: I choose one py file and want to run it in py2, then I want to run it in py3 (manually). I know that each file can have its own run configuration, but it can only choose one interpreter at a time. I also know that py.exe can help you get the right version of Python. I tried to add an interpreter with py.exe, but PyDev keeps telling me that "python stdlibs" is necessary for an interpreter, while only python3's lib shows up. So, is there a way to just right-click the file and choose "run using interpreter xxx"? Or does PyDev have the ability to choose interpreters by "#! python2"/"#! python3" at the head of the file?
I didn't understand what's the actual workflow you want... Do you want to run each file on a different interpreter (say you have mod1.py and want to run it always on py2 and then mod2.py should be run always on py3) or do you want to run the same file on multiple interpreters (i.e.: you have mod1.py and want to run it both on py2 and py3) or something else? So, please give more information on what's your actual problem and what you want to achieve... Options to run a single file in multiple interpreters: Always run with the default interpreter (so, make a regular run -- F9 to run the current editor -- change the default interpreter -- using Ctrl+shift+Alt+I -- and then rerun with Ctrl+F11). Create a .sh/.bat which will always do 2 launches (initially configure it to just be a wrapper to launch with one python, then, after properly configuring it inside of PyDev that way change it to launch python 2 times, one with py2 and another with py3 -- note that I haven't tested, but it should work in theory).
0.386912
false
1
5,841
2018-11-27 23:32:32.593
Python regex to identify capitalised single word lines in a text abstract
I am looking for a way to extract words from text if they match the following conditions: 1) are capitalised and 2) appear on a new line on their own (i.e. no other text on the same line). I am able to extract all capitalised words with this code: caps=re.findall(r"\b[A-Z]+\b", mytext) but can't figure out how to implement the second condition. Any help will be greatly appreciated.
Please try adding \r\n at the beginning of your regex expression.
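As an alternative sketch (not what the answer above proposes), the MULTILINE flag can anchor ^ and $ at line boundaries, which covers both conditions at once; the sample abstract below is made up.

```python
import re

mytext = """Introduction text here
ABSTRACT
more prose follows
METHODS
Mixed Case Line
"""

# ^ and $ match line boundaries with re.MULTILINE, so this keeps only lines
# that consist of a single run of capital letters and nothing else.
caps = re.findall(r"^[A-Z]+$", mytext, flags=re.MULTILINE)
print(caps)   # ['ABSTRACT', 'METHODS']
```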
-0.201295
false
1
5,842
2018-11-28 12:15:31.400
Python and Dart Integration in Flutter Mobile Application
Can I do these two things: Is there any library in Dart for sentiment analysis? Can I use Python (for sentiment analysis) in Dart? My main motive for these questions is that I'm working on an application in Flutter and I use sentiment analysis, and I have no idea how to do that. Can anyone please help me solve this problem? Or is there any way that I can do text sentiment analysis in the Flutter app?
You can create an API using Python and then serve it to your mobile app (Flutter) using HTTP requests.
0.673066
false
1
5,843
2018-11-28 15:25:07.900
Why is LocationLocal: Relative Alt dropping into negative values on a stationary drone?
I'm running the Set_Attitude_Target example on an Intel Aero with Ardupilot. The code is working as intended but on top of a clear sensor error, that becomes more evident the longer I run the experiment. In short, the altitude report from the example is reporting that in LocationLocal there is a relative altitude of -0.01, which gets smaller and smaller the longer the drone stays on. If the drone takes off, say, 1 meter, then the relative altitude is less than that, so the difference is being taken out. I ran the same example with the throttle set to a low value so the drone would stay stationary while "trying to take off" with insufficient thrust. For the 5 seconds that the drone was trying to take off, as well as after it gave up, disarmed and continued to run the code, the console read incremental losses to altitude, until I stopped it at -1 meter. Where is this sensor error coming from and how do I remedy it?
As per Agustinus Baskara's comment on the original post, it would appear the built-in sensor is simply that bad - it can't be improved upon with software.
0
false
1
5,844
2018-11-29 00:38:11.560
The loss function and evaluation metric of XGBoost
I am confused now about the loss functions used in XGBoost. Here is how I feel confused: we have objective, which is the loss function needs to be minimized; eval_metric: the metric used to represent the learning result. These two are totally unrelated (if we don't consider such as for classification only logloss and mlogloss can be used as eval_metric). Is this correct? If I am, then for a classification problem, how you can use rmse as a performance metric? take two options for objective as an example, reg:logistic and binary:logistic. For 0/1 classifications, usually binary logistic loss, or cross entropy should be considered as the loss function, right? So which of the two options is for this loss function, and what's the value of the other one? Say, if binary:logistic represents the cross entropy loss function, then what does reg:logistic do? what's the difference between multi:softmax and multi:softprob? Do they use the same loss function and just differ in the output format? If so, that should be the same for reg:logistic and binary:logistic as well, right? supplement for the 2nd problem say, the loss function for 0/1 classification problem should be L = sum(y_i*log(P_i)+(1-y_i)*log(P_i)). So if I need to choose binary:logistic here, or reg:logistic to let xgboost classifier to use L loss function. If it is binary:logistic, then what loss function reg:logistic uses?
'binary:logistic' uses -(y*log(y_pred) + (1-y)*log(1-y_pred)). 'reg:logistic' uses (y - y_pred)^2. To get a total estimation of error we sum all errors and divide by the number of samples. You can find this in the basics: when comparing linear regression vs logistic regression, linear regression uses (y - y_pred)^2 as the cost function and logistic regression uses -(y*log(y_pred) + (1-y)*log(1-y_pred)) as the cost function. Evaluation metrics are a completely different thing. They are designed to evaluate your model. You can be confused by them because it is logical to use some evaluation metrics that are the same as the loss function, like MSE in regression problems. However, in binary problems it is not always wise to look at the logloss. My experience has taught me (in classification problems) to generally look at AUC ROC. EDIT: according to the xgboost documentation: reg:linear: linear regression; reg:logistic: logistic regression; binary:logistic: logistic regression for binary classification, output probability. So I'm guessing: reg:linear is, as we said, (y - y_pred)^2; reg:logistic is -(y*log(y_pred) + (1-y)*log(1-y_pred)) with predictions rounded at a 0.5 threshold; binary:logistic is plain -(y*log(y_pred) + (1-y)*log(1-y_pred)) (returns the probability). You can test it out and see if it does as I've edited. If so, I will update the answer; otherwise, I'll just delete it :<
0.999967
false
1
5,845
2018-11-29 09:16:08.143
After I modified my Python code in Pycharm, how to deploy the change to my Portainer?
Perhaps it is a basic question, but I am really not proficient with Portainer. I have a local Portainer and PyCharm to manage the Python code. What should I do after I modify my code to deploy this change to the local Portainer? Thanks.
If you have mounted the folder where your code resides directly in the container, the changes will also be applied in your container, so no further action is required. If you have not mounted the folder to your container (for example, if you copy the code when you build the image), you would have to rebuild the image. Of course this is a lot more work, so I would recommend using mounted volumes.
0
false
1
5,846
2018-11-30 04:23:07.330
Sqlalchemy before_execute event - how to pass some external variable, say app user id?
I am trying to obtain an application variable (app user id) in before_execute(conn, clauseelement, multiparam, param) method. The app user id is stored in python http request object which I do not have any access to in the db event. Is there any way to associate a piece of sqlalchemy external data somewhere to fetch it in before_execute event later? Appreciate your time and help.
Answering my own question here with a possible solution :) From the HTTP request, I copied the piece of data to the session object. Since the session binding was at the engine level, I copied the data from the session to the connection object in SessionEvents.after_begin(session, transaction, connection). (Had it been a Connection-level binding, we could have set the objects from the session object directly onto the connection object.) Now the data is available in the connection object and in before_execute() too.
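A hedged sketch of how that pattern might look with SQLAlchemy's event hooks and the 1.x before_execute signature used in the question; the engine URL and the user id value are placeholders.

```python
from sqlalchemy import create_engine, event, text
from sqlalchemy.orm import Session, sessionmaker

engine = create_engine("sqlite:///:memory:")           # placeholder engine
SessionLocal = sessionmaker(bind=engine)

@event.listens_for(Session, "after_begin")
def carry_user_id(session, transaction, connection):
    # Copy app-level data stashed on the session into the connection's info dict.
    connection.info["app_user_id"] = session.info.get("app_user_id")

@event.listens_for(engine, "before_execute")
def log_user(conn, clauseelement, multiparams, params):
    print("executing for app user:", conn.info.get("app_user_id"))

session = SessionLocal(info={"app_user_id": 42})       # e.g. taken from the HTTP request
session.execute(text("SELECT 1"))
session.close()
```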
0
false
1
5,847
2018-11-30 05:17:50.717
Session cookie is too large flask application
I'm trying to load certain data using sessions (locally) and it has been working for some time, but now I get the following warning and my data that was loaded through sessions is no longer being loaded. The "b'session'" cookie is too large: the value was 13083 bytes but the header required 44 extra bytes. The final size was 13127 bytes but the limit is 4093 bytes. Browsers may silently ignore cookies larger than this. I have tried using session.clear(). I also opened up Chrome developer tools and tried deleting the cookies associated with 127.0.0.1:5000. I have also tried using a different secret key with the session. It would be greatly appreciated if I could get some help on this, since I have been searching for a solution for many hours. Edit: I am not looking to increase my limit by switching to server-side sessions. Instead, I would like to know how I could clear my client-side session data so I can reuse it. Edit #2: I figured it out. I forgot that I pushed way more data to my database, so every time a query was performed, the session would fill up immediately.
It looks like you are using the client-side type of session that is set by default with Flask which has a limited capacity of 4KB. You can use a server-side type session that will not have this limit, for example, by using a back-end file system (you save the session data in a file system in the server, not in the browser). To do so, set the configuration variable 'SESSION_TYPE' to 'filesystem'. You can check other alternatives for the 'SESSION_TYPE' variable in the Flask documentation.
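A minimal sketch of that configuration; note it assumes the separate Flask-Session extension is installed, and the secret key and the oversized payload are placeholders.

```python
from flask import Flask, session
from flask_session import Session  # requires the Flask-Session extension (pip install Flask-Session)

app = Flask(__name__)
app.config["SECRET_KEY"] = "change-me"        # placeholder
app.config["SESSION_TYPE"] = "filesystem"     # store session data on the server, not in the cookie
Session(app)

@app.route("/")
def index():
    session["big_payload"] = "x" * 20000      # would overflow a 4 KB cookie, fine on the filesystem
    return "stored"

if __name__ == "__main__":
    app.run(debug=True)
```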
1.2
true
1
5,848
2018-11-30 12:32:27.360
not having to load a dataset over and over
Currently in R, once you load a dataset (for example with read.csv), Rstudio saves it as a variable in the global environment. This ensures you don't have to load the dataset every single time you do a particular test or change. With Python, I do not know which text editor/IDE will allow me to do this. E.G - I want to load a dataset once, and then subsequently do all sorts of things with it, instead of having to load it every time I run the script. Any points as to how to do this would be very useful
It depends how large your data set is. For relatively smaller datasets you could look at installing Anaconda Python Jupyter notebooks. Really great for working with data and visualisation once the dataset is loaded. For larger datasets you can write some functions / generators to iterate efficiently through the dataset.
0
false
1
5,849
2018-11-30 14:16:09.813
pymysql - Get value from a query
I am executing the query using pymysql in Python: select (sum(acc_Value)) from accInfo where acc_Name = 'ABC'. The purpose of the query is to get the sum of all the values in the acc_Value column for all the rows matching acc_Name = 'ABC'. The output I am getting when using cur.fetchone() is (Decimal('256830696'),). Now, how do I get the value "256830696" alone in Python? Thanks in advance.
It's a tuple, just take the 0th index
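For example, a small self-contained sketch with placeholder connection details:

```python
import pymysql

# Connection details are placeholders.
conn = pymysql.connect(host="localhost", user="user", password="pass", db="mydb")
cur = conn.cursor()
cur.execute("SELECT SUM(acc_Value) FROM accInfo WHERE acc_Name = %s", ("ABC",))

row = cur.fetchone()     # e.g. (Decimal('256830696'),)
total = row[0]           # take the first (and only) element of the tuple
print(total)

cur.close()
conn.close()
```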
-0.386912
false
1
5,850
2018-12-01 14:09:56.980
Saving objects from tk canvas
I'm trying to make a save function in a program I'm writing for bubbling/ballooning drawings. The only thing I can't get to work is saving a "work copy". If a drawing gets revision changes, you shouldn't need to redo all the work; just load the work copy and add/remove/re-arrange bubbles. I'm using tkinter and canvas, and I create ovals and text for bubbles, but I can't figure out any good way to save the info from the oval/text objects. I tried to pickle the whole canvas, but that seems like it won't work after some googling. And pickling every object when created seems to only save the object id: 1, 2, etc. That also won't work, since some bubbles will be moved and receive new coordinates. They might also have a different color, size, etc. In my next approach I'm thinking of saving the whole "can.create_oval( x1, y1, x2, y2, fill = fillC, outli...." call as a string to a txt file and writing a function to recreate it with eval(). Does anyone have any good suggestions on how to approach this?
There is no built-in way to save and restore the canvas. However, the canvas has methods you can use to get all of the information about the items on the canvas. You can use these methods to save this information to a file and then read this file back and recreate the objects. find_all - will return an ordered list of object ids for all objects on the canvas type - will return the type of the object as a string ("rectangle", "circle", "text", etc) itemconfig - returns a dictionary with all of the configuration values for the object. The values in the dictionary are a list of values which includes the default value of the option at index 3 and the current value at index 4. You can use this to save only the option values that have been explicitly changed from the default. gettags - returns a list of tags associated with the object
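A rough sketch of how those methods could be combined to capture the items; the oval and text drawn here are only examples.

```python
import tkinter as tk

root = tk.Tk()
can = tk.Canvas(root, width=200, height=200)
can.pack()
can.create_oval(10, 10, 60, 60, fill="yellow")
can.create_text(35, 35, text="1")

saved = []
for item in can.find_all():
    # Keep only options whose current value (index 4) differs from the default (index 3).
    options = {
        key: value[4]
        for key, value in can.itemconfig(item).items()
        if len(value) == 5 and value[4] != value[3]
    }
    saved.append({
        "type": can.type(item),
        "coords": can.coords(item),
        "options": options,
    })

print(saved)   # this structure could be dumped to JSON and later replayed with create_<type>(...)
```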
1.2
true
1
5,851
2018-12-03 01:15:30.087
Different sized vectors in word2vec
I am trying to generate three different sized output vectors namely 25d, 50d and 75d. I am trying to do so by training the same dataset using the word2vec model. I am not sure how I can get three vectors of different sizes using the same training dataset. Can someone please help me get started on this? I am very new to machine learning and word2vec. Thanks
You run the code for one model three times, each time supplying a different vector_size parameter to the model initialization.
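A minimal sketch with a made-up corpus; note that the parameter is named vector_size in gensim 4.x, while older releases called it size.

```python
from gensim.models import Word2Vec

# Tiny placeholder corpus; use your real tokenized dataset instead.
sentences = [["the", "cat", "sat"], ["the", "dog", "barked"], ["cats", "and", "dogs"]]

models = {}
for dim in (25, 50, 75):
    # gensim 4.x calls this parameter vector_size (older releases used size=dim).
    models[dim] = Word2Vec(sentences, vector_size=dim, min_count=1, epochs=10)

print(models[25].wv["cat"].shape)   # (25,)
print(models[75].wv["cat"].shape)   # (75,)
```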
1.2
true
1
5,852
2018-12-03 03:23:29.990
data-item-url is on localhost instead of pythonanywhere (wagtail + snipcart project)
So instead of having data-item-url="https://miglopes.pythonanywhere.com/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg/" it keeps on appearing data-item-url="http://localhost/ra%C3%A7%C3%A3o-de-c%C3%A3o-purina-junior-10kg/" how do i remove the localhost so my snipcart can work on checkout?
Without more details of where this tag is coming from it's hard to know for sure... but most likely you need to update your site's hostname in the Wagtail admin, under Settings -> Sites.
0
false
1
5,853
2018-12-03 21:09:40.843
Using MFCC's for voice recognition
I'm currently using the Fourier transform in conjunction with Keras for voice recognition (speaker identification). I have heard MFCC is a better option for voice recognition, but I am not sure how to use it. I am using librosa in Python (3) to extract 20 MFCC features. My question is: which MFCC features should I use for speaker identification? In addition to this, I am unsure how to implement these features. What I would do is get the necessary features and make one long vector input for a neural network. However, it is also possible to display them as colors, so could image recognition also be possible, or is this more aimed at speech, and not speaker recognition? In short, I am unsure where I should start, as I am not very experienced with image recognition and have no idea where to start. Thanks in advance!!
You can use MFCCs with dense layers / multilayer perceptron, but probably a Convolutional Neural Network on the mel-spectrogram will perform better, assuming that you have enough training data.
0
false
1
5,854
2018-12-04 18:22:55.240
How to add text to a file in python3
Let's say I have the following file, dummy_file.txt (contents below): first line third line How can I add a line to that file right in the middle so the end result is: first line second line third line I have looked into opening the file with the append option; however, that adds the line to the end of the file.
The standard file methods don't support inserting into the middle of a file. You need to read the file, add your new data to the data that you read in, and then re-write the whole file.
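A minimal sketch of that read-splice-rewrite approach, assuming the file name from the question:

```python
# Read the whole file, splice in the new line, and write everything back out.
with open("dummy_file.txt") as f:
    lines = f.readlines()

lines.insert(1, "second line\n")   # insert after the first line

with open("dummy_file.txt", "w") as f:
    f.writelines(lines)
```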
1.2
true
1
5,855
2018-12-05 08:13:04.893
DataFrame view in PyCharm when using pyspark
I create a pyspark dataframe and I want to see it in the SciView tab in PyCharm when I debug my code (like I used to do when I worked with pandas). It says "Nothing to show" (the dataframe exists, I can see it when I use the show() command). Does someone know how to do this, or maybe there is no integration between PyCharm and pyspark dataframes in this case?
Pycharm does not support spark dataframes, you should call the toPandas() method on the dataframe. As @abhiieor mentioned in a comment, be aware that you can potentially collect a lot of data, you should first limit() the number of rows returned.
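For example, a small self-contained sketch with a made-up DataFrame:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").appName("debug-view").getOrCreate()
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "value"])

# Cap the rows pulled back to the driver, then convert; the resulting pandas
# DataFrame is what PyCharm's SciView can display.
pdf = df.limit(2).toPandas()
print(pdf)

spark.stop()
```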
1.2
true
1
5,856
2018-12-08 01:12:11.607
Is it possible to trigger a script or program if any data is updated in a database, like MySQL?
It doesn't have to be exactly a trigger inside the database. I just want to know how I should design this, so that when changes are made inside MySQL or SQL server, some script could be triggered.
One way would be to keep a counter on the last updated row in the database, and then keep polling (checking) the database through Python for new records at short intervals. If the value of the counter has increased, you could use the subprocess module to call another Python script.
0
false
1
5,857
2018-12-09 22:47:38.660
Error for word2vec with GoogleNews-vectors-negative300.bin
the version of python is 3.6 I tried to execute my code but, there are still some errors as below: Traceback (most recent call last): File "C:\Users\tmdgu\Desktop\NLP-master1\NLP-master\Ontology_Construction.py", line 55, in , binary=True) File "E:\Program Files\Python\Python35-32\lib\site-packages\gensim\models\word2vec.py", line 1282, in load_word2vec_format raise DeprecationWarning("Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead.") DeprecationWarning: Deprecated. Use gensim.models.KeyedVectors.load_word2vec_format instead. how to fix the code? or is the path to data wrong?
This is just a warning, not a fatal error. Your code likely still works. "Deprecation" means a function's use has been marked by the authors as no longer encouraged. The function typically still works, but may not for much longer – becoming unreliable or unavailable in some future library release. Often, there's a newer, more-preferred way to do the same thing, so you don't trigger the warning message. Your warning message points you at the now-preferred way to load word-vectors of that format: use KeyedVectors.load_word2vec_format() instead. Did you try using that, instead of whatever line of code (not shown in your question) that you were trying before seeing the warning?
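A minimal sketch of the suggested call; the path to the .bin file and the probe word are placeholders.

```python
from gensim.models import KeyedVectors

# Path to the pretrained vectors is a placeholder; adjust to where the .bin file lives.
wv = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)
print(wv["computer"].shape)   # (300,)
```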
0.673066
false
1
5,858
2018-12-11 00:40:44.053
Use of Breakpoint Method
I am new to python and am unsure of how the breakpoint method works. Does it open the debugger for the IDE or some built-in debugger? Additionally, I was wondering how that debugger would be able to be operated. For example, I use Spyder, does that mean that if I use the breakpoint() method, Spyder's debugger will open, through which I could the Debugger dropdown menu, or would some other debugger open? I would also like to know how this function works in conjunction with the breakpointhook() method.
No, debugger will not open itself automatically as a consequence of setting a breakpoint. So you have first set a breakpoint (or more of them), and then manually launch a debugger. After this, the debugger will perform your code as usually, but will stop performing instructions when it reaches a breakpoint - the instruction at the breakpoint itself it will not perform. It will pause just before it, given you an opportunity to perform some debug tasks, as inspect variable values, set variables manually to other values, continue performing instructions step by step (i. e. only the next instruction), continue performing instructions to the next breakpoint, prematurely stop debugging your program. This is the common scenario for all debuggers of all programming languages (and their IDEs). For IDEs, launching a debugger will enable or reveal debugging instructions in their menu system, show a toolbar for them and will, enable hot keys for them. Without setting at least one breakpoint, most debuggers perform the whole program without a pause (as launching it without a debugger), so you will have no opportunity to perform any debugging task. (Some IDEs have an option to launch a debugger in the "first instruction, then a pause" mode, so you need not set breakpoints in advance in this case.) Yes, the breakpoint() built-in function (introduced in Python 3.7) stops executing your program, enters it in the debugging mode, and you may use Spyder's debugger drop-down menu. (It isn't a Spyders' debugger, only its drop-down menu; the used debugger will be still the pdb, i. e. the default Python DeBugger.) The connection between the breakpoint() built-in function and the breakpointhook() function (from the sys built-in module) is very straightforward - the first one directly calls the second one. The natural question is why we need two functions with the exactly same behavior? The answer is in the design - the breakpoint() function may be changed indirectly, by changing the behavior of the breakpointhook() function. For example, IDE creators may change the behavior of the breakpointhook() function so that it will launch their own debugger, not the pdb one.
1.2
true
1
5,859
2018-12-11 01:14:39.167
Is there an appropriate version of Pygame for Python 3.7 installed with Anaconda?
I'm new to programming and I just downloaded Anaconda a few days ago for Windows 64-bit. I came across the Invent with Python book and decided I wanted to work through it so I downloaded that too. I ended up running into a couple issues with it not working (somehow I ended up with Spyder (Python 2.7) and end=' ' wasn't doing what it was supposed to so I uninstalled and reinstalled Anaconda -- though originally I did download the 3.7 version). It looked as if I had the 2.7 version of Pygame. I'm looking around and I don't see a Pygame version for Python 3.7 that is compatible with Anaconda. The only ones I saw were for Mac or not meant to work with Anaconda. This is all pretty new to me so I'm not sure what my options are. Thanks in advance. Also, how do I delete the incorrect Pygame version?
Just use pip install pygame and pip will look for a version compatible with your installation. If you're using Anaconda and pip doesn't work in the CMD prompt, try using the Anaconda prompt from the start menu.
0.673066
false
1
5,860
2018-12-11 17:54:00.677
python-hypothesis: Retrieving or reformatting a falsifying example
Is it possible to retrieve or reformat the falsifying example after a test failure? The point is to show the example data in a different format - data generated by the strategy is easy to work with in the code but not really user friendly, so I'm looking at how to display it in a different form. Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something?
Even a post-mortem tool working with the example database would be enough, but there does not seem to be any API allowing that, or am I missing something? The example database uses a private format and only records the choices a strategy made to generate the falsifying example, so there's no way to extract the data of the example short of re-running the test. Stuart's recommendation of hypothesis.note(...) is a good one.
0
false
1
5,861
2018-12-11 19:43:33.823
Template rest one day from the date
In my view.py I obtain a date from my MSSQL database in this format: 2018-12-06 00:00:00.000. I pass that value in the context as datedb and in my HTML page I render it like this: {{datedb|date:"c"}}, but it shows the date one day earlier, like this: 2018-12-05T18:00:00-06:00. It is the 05, not the 06. Why is this happening? How can I show the right date?
One way to solve the problem was to change to USE_TZ = False, as Willem said in the comments, but that gave another error, so I found a way to do it by just adding {% load tz %} in the template and using the |utc filter on the date variables, like datedb|utc|date:'Y-m-d'.
1.2
true
1
5,862
2018-12-12 12:15:09.190
Add full anaconda package list to existing conda environment
I know how to add single packages and I know that the conda create command supports adding a new environment with all anaconda packages installed. But how can I add all anaconda packages to an existing environment?
I was able to solve the problem as following: Create a helper env with anaconda: conda create -n env_name anaconda Activate that env conda activate env_name Export packages into specification file: conda list --explicit > spec-file.txt Activate the target environment: activate target_env_name Import that specification file: conda install --file spec-file.txt
0.386912
false
1
5,863
2018-12-12 17:20:31.293
how to compare two text document with tfidf vectorizer?
I have two different text which I want to compare using tfidf vectorization. What I am doing is: tokenizing each document vectorizing using TFIDFVectorizer.fit_transform(tokens_list) Now the vectors that I get after step 2 are of different shape. But as per the concept, we should have the same shape for both the vectors. Only then the vectors can be compared. What am I doing wrong? Please help. Thanks in advance.
As G. Anderson already pointed out, and to help future readers: when we use the fit function of TfidfVectorizer on document D1, it means that for D1 the bag of words is constructed. The transform() function computes the tfidf frequency of each word in the bag of words. Now our aim is to compare document D2 with D1, i.e. we want to see how many words of D1 match up with D2. That's why we perform fit_transform() on D1 and then only the transform() function on D2, which applies the bag of words of D1 and counts the inverse frequency of tokens in D2. This gives the relative comparison of D1 against D2.
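A small runnable sketch of that fit-on-D1 / transform-D2 flow, with made-up documents and cosine similarity used only to illustrate the comparison:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

d1 = "the quick brown fox jumps over the lazy dog"   # placeholder documents
d2 = "a quick brown dog outpaces a lazy fox"

vectorizer = TfidfVectorizer()
v1 = vectorizer.fit_transform([d1])   # fit the vocabulary on D1 and vectorize it
v2 = vectorizer.transform([d2])       # reuse D1's vocabulary so both vectors share a shape

print(v1.shape, v2.shape)             # same number of columns
print(cosine_similarity(v1, v2))      # relative similarity of D2 against D1
```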
1.2
true
1
5,864
2018-12-13 13:43:34.987
python, dictionaries how to get the first value of the first key
So basically I have a dictionary with x and y values, and I want to be able to get only the x value of the first coordinate and only the y value of the first coordinate, and then the same with the second coordinate and so on, so that I can use them in an if-statement.
If the values are ordered in columns, just use x = your_variable[:,0] and y = your_variable[:,1], I think.
0.386912
false
1
5,865
2018-12-15 21:55:17.020
how to install tkinter with Pycharm?
I used sudo apt-get install python3.6-tk and it works fine. Tkinter works if I open python in terminal, but I cannot get it installed on my Pycharm project. pip install command says it cannot find Tkinter. I cannot find python-tk in the list of possible installs either. Is there a way to get Tkinter just standard into every virtualenv when I make a new project in Pycharm? Edit: on Linux Mint Edit2: It is a clear problem of Pycharm not getting tkinter guys. If I run my local python file from terminal it works fine. Just that for some reason Pycharm cannot find anything tkinter related.
Python already has tkinter installed. It is a base module, like random or time, therefore you don't need to install it.
-0.067922
false
1
5,866
2018-12-18 01:57:32.877
Print output to console while redirect the output to a file in linux
I am using Python on Linux and tried to use the command line to print the output log while redirecting the output and errors to a txt file. However, after I searched and tried methods such as python [program] 2>&1 | tee output.log, it just redirected the output to output.log and the printed content disappeared. I wonder how I could print the output to the console while saving/redirecting it to output.log? It would be useful when tuning parameters while keeping an eye on the output loss.
You can create a screen like this: screen -L and then run the python script in this screen which would give the output to the console and also would save it the file: screenlog.0. You could leave the screen by using Ctrl+A+D while the script is running and check the script output by reattaching to the screen by screen -r. Also, in the screen, you won't be able to scroll past the current screen view.
0
false
1
5,867
2018-12-18 10:17:19.160
Regex for Sentences in python
I have one more query; here are two sentences: [1,12:12] call basic_while1() Error Code: 1046. No database selected [1,12:12] call add() Asdfjgg Error Code: 1046. No database selected [1,12:12] call add() [1,12:12] Error Code: 1046. No database selected Now I want to get output like this: ['1','12:12',"call basic_while1"] , ['1','12:12', 'call add() Asdfjgg'],['1','12:12', 'call add()'],['1','12:12'],['','','',' Error Code: 1046. No database selected'] I used r'^\[(\d+),(\s[0-9:]+)\]\s+(.+) as my main regex; then, as per my concern, I modified it, but it didn't help me. I want to cut everything exactly before "Error Code"; how do I do that?
Basically you asked to get everything before the "Error Code": "I want to cut everything exact before 'Error Code'". So it is simple; try: find = re.search('((.)+)(\sError Code)*',s) and find.group(1) will give you '[1,12:12] call add() Asdfjgg', which is what you wanted. If, after you got that string, you want the list that you requested: desired_list = find.group(1).replace('[','').replace(']','').replace(',',' ').split()
0
false
1
5,868
2018-12-18 23:09:13.550
install numpy on python 3.5 Mac OS High sierra
I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work. I have it on python2.7, but I would also like to install it for the next versions. Currently, I have installed python 2.7, python 3.5, and python 3.7. I tried to install numpy using: brew install numpy --with-python3 (no error) sudo port install [email protected] (no error) sudo port install [email protected] (no error) pip3.5 install numpy (gives "Could not find a version that satisfies the requirement numpy (from versions: ) No matching distribution found for numpy" ) I can tell that it is not installed because when I type python3 and then import numpy as np gives "ModuleNotFoundError: No module named 'numpy'" Any ideas on how to make it work? Thanks in advance.
First, you need to activate the virtual environment for the version of python you wish to run. After you have done that then just run "pip install numpy" or "pip3 install numpy". If you used Anaconda to install python then, after activating your environment, type conda install numpy.
1.2
true
2
5,869
2018-12-18 23:09:13.550
install numpy on python 3.5 Mac OS High sierra
I wanted to install the numpy package for python 3.5 on my Mac OS High Sierra, but I can't seem to make it work. I have it on python2.7, but I would also like to install it for the next versions. Currently, I have installed python 2.7, python 3.5, and python 3.7. I tried to install numpy using: brew install numpy --with-python3 (no error) sudo port install [email protected] (no error) sudo port install [email protected] (no error) pip3.5 install numpy (gives "Could not find a version that satisfies the requirement numpy (from versions: ) No matching distribution found for numpy" ) I can tell that it is not installed because when I type python3 and then import numpy as np gives "ModuleNotFoundError: No module named 'numpy'" Any ideas on how to make it work? Thanks in advance.
If running pip3.5 --version or pip3 --version works, what is the output when you run pip3 freeze? If there is no output, it indicates that there are no packages installed for the Python 3 environment and you should be able to install numpy with pip3 install numpy.
0
false
2
5,869
2018-12-19 15:33:16.960
Python Vscode extension - can't change remote jupyter notebook kernel
I've got the updated Python VSCode extension installed and it works great. I'm able to use the URL with the token to connect to a remote Jupyter notebook. I just cannot seem to figure out how to change the kernel on the remote notebook for use in VSCode. If I connect to the remote notebook through a web browser, I can see my two environments through the GUI and change kernels. Is there a similar option in the VSCode extension?
The command that worked for me in vscode: Notebook: Select Notebook Kernel
0
false
2
5,870
2018-12-19 15:33:16.960
Python Vscode extension - can't change remote jupyter notebook kernel
I've got the updated Python VSCode extension installed and it works great. I'm able to use the URL with the token to connect to a remote Jupyter notebook. I just cannot seem to figure out how to change the kernel on the remote notebook for use in VSCode. If I connect to the remote notebook through a web browser, I can see my two environments through the GUI and change kernels. Is there a similar option in the VSCode extension?
Run the following command in VS Code: Python: Select interpreter to start Jupyter server. It will allow you to choose the kernel that you want.
0
false
2
5,870
2018-12-21 02:43:43.240
Backtesting a Universe of Stocks
I would like to develop a trend following strategy via back-testing a universe of stocks; lets just say all NYSE or S&P500 equities. I am asking this question today because I am unsure how to handle the storage/organization of the massive amounts of historical price data. After multiple hours of research I am here, asking for your experience and awareness. I would be extremely grateful for any information/awareness you can share on this topic Personal Experience background: -I know how to code. Was a Electrical Engineering major, not a CS major. -I know how to pull in stock data for individual tickers into excel. Familiar with using filtering and custom studies on ThinkOrSwim. Applied Context: From 1995 to today lets evaluate the best performing equities on a relative strength/momentum basis. We will look to compare many technical characteristics to develop a strategy. The key to this is having data for a universe of stocks that we can run backtests on using python, C#, R, or any other coding language. We can then determine possible strategies by assesing the returns, the omega ratio, median excess returns, and Jensen's alpha (measured weekly) of entries and exits that are technical driven. Here's where I am having trouble figuring out what the next step is: -Loading data for all S&P500 companies into a single excel workbook is just not gonna work. Its too much data for excel to handle I feel like. Each ticker is going to have multiple MB of price data. -What is the best way to get and then store the price data for each ticker in the universe? Are we looking at something like SQL or Microsoft access here? I dont know; I dont have enough awareness on the subject of handling lots of data like this. What are you thoughts? I have used ToS to filter stocks based off of true/false parameters over a period of time in the past; however the capabilities of ToS are limited. I would like a more flexible backtesting engine like code written in python or C#. Not sure if Rscript is of any use. - Maybe, there are libraries out there that I do not have awareness of that would make this all possible? If there are let me know. I am aware that Quantopia and other web based Quant platforms are around. Are these my best bets for backtesting? Any thoughts on them? Am I making this too complicated? Backtesting a strategy on a single equity or several equities isnt a problem in excel, ToS, or even Tradingview. But with lots of data Im not sure what the best option is for storing that data and then using a python script or something to perform the back test. Random Final thought:-Ultimately would like to explore some AI assistance with optimizing strategies that were created based off parameters. I know this is a thing but not sure where to learn more about this. If you do please let me know. Thank you guys. I hope this wasn't too much. If you can share any knowledge to increase my awareness on the topic I would really appreciate it. Twitter:@b_gumm
The amount of data is too much for Excel or Calc. Even if you want to screen only the 500 stocks from the S&P 500, you will get 2.2 million rows (approx. 220 trading days/year * 20 years * 500 stocks). For this amount of data, you should use an SQL database like MySQL; it is performant enough to handle it. But you have to find a way of updating: if you download the complete time series daily and store it in your database, this process can take approx. 1 hour. You could also use delta downloads, but be aware of corporate actions (e.g. splits). I don't know Quantopia, but I know a similar backtesting service for which I created a Python backtesting script last year. The outcome was quite different from what I expected: the backtesting service was calculating wrong results because of wrong data. So be cautious about the results.
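A minimal sketch of loading one ticker's daily prices into MySQL with pandas and SQLAlchemy (the connection string, CSV layout and table name are assumptions for illustration):

    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("mysql+pymysql://user:password@localhost/backtest")  # assumed credentials/schema
    prices = pd.read_csv("AAPL.csv", parse_dates=["Date"])                      # assumed Date/OHLC/Volume columns
    prices["ticker"] = "AAPL"
    prices.to_sql("daily_prices", engine, if_exists="append", index=False)      # one table for all tickers

Repeating this per ticker, and indexing the table on ticker and date, keeps the couple of million rows queryable for backtests.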
0
false
1
5,871
2018-12-21 11:15:31.803
Date Range for Facebook Graph API request on posts level
I am working on a tool for my company created to get data from our Facebook publications. It has not been working for a while, so I have to get all the historical data from June to November 2018. My two scripts (one that get title and type of publication, and the other that get the number of link clicks) are working well to get data from last pushes, but when I try to add a date range in my Graph API request, I have some issues: the regular query is [page_id]/posts?fields=id,created_time,link,type,name the query for historical data is [page_id]/posts?fields=id,created_time,link,type,name,since=1529280000&until=1529712000, as the API is supposed to work with unixtime I get perfect results for regular use, but the results for historical data only shows video publications in Graph API Explorer, with a debug message saying: The since field does not exist on the PagePost object. Same for "until" field when not using "since". I tried to replace "posts/" with "feed/" but it returned the exact same result... Do you have any idea of how to get all the publications from a Page I own on a certain date range?
So it seems that it is not possible to request this kind of data, unfortunately; third-party services must be used...
0
false
1
5,872
2018-12-23 03:14:14.787
Pyautogui mouse click on different resolution
I'm writing a script for automatizing some tasks at my job. However, I need to make my script portable and try it on different screen resolution. So far right now I've tried to multiply my coordinate with the ratio between the old and new resolutions, but this doesn't work properly. Do you know how I can convert my X, Y coordinates for mouse's clicks make it works on different resolution?
Quick question: are you trying to get it to click on certain buttons (i.e. buttons that look the same on every computer you plug it into)? And by portable, do you mean on a thumb drive (USB)? You may be able to take an image of the button (i.e. crop a screenshot) and pass it to the opencv module, which has an image-within-image searching ability. You can pass that image along with a screenshot (using pyautogui.screenshot()) and it will return the (x, y) coordinates of the button; pass those on to pyautogui.moveTo(x, y) and pyautogui.click(), and it might work. You might have to describe the action you are trying to get PyAutoGUI to do a little better.
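A rough sketch of that idea using pyautogui's built-in image search (the screenshot filename is an assumption; the confidence argument needs opencv installed):

    import pyautogui

    location = pyautogui.locateCenterOnScreen("button.png", confidence=0.9)  # cropped screenshot of the button
    if location is not None:
        pyautogui.moveTo(location.x, location.y)
        pyautogui.click()

This sidesteps the resolution problem, since the button is found by appearance rather than by hard-coded coordinates.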
0.386912
false
1
5,873
2018-12-24 13:58:52.250
extracting text just after a particular tag using beautifulsoup?
I need to extract the text just after strong tag from html page given below? how can i do it using beautiful soup. It is causing me problem as it doesn't have any class or id so only way to select this tag is using text. {strong}Name:{/strong} Sam smith{br} Required result Sam smith
Thanks for all your answers, but I was able to do this with the following: b_el = soup.find('strong', text='Name:') and then print(b_el.next_sibling). This works fine for me. This prints just the next sibling; is there any way to print the next two siblings?
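For reference, a self-contained sketch of that snippet, plus the next_siblings generator for walking further along (the HTML fragment mirrors the question):

    from bs4 import BeautifulSoup

    html = "<strong>Name:</strong> Sam smith<br/>"
    soup = BeautifulSoup(html, "html.parser")
    b_el = soup.find("strong", text="Name:")
    print(b_el.next_sibling)                  # " Sam smith"
    for sib in list(b_el.next_siblings)[:2]:  # the next two siblings, if they exist
        print(sib)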
-0.386912
false
1
5,874
2018-12-25 10:26:24.547
How to train your own model in AWS Sagemaker?
I just started with AWS and I want to train my own model with own dataset. I have my model as keras model with tensorflow backend in Python. I read some documentations, they say I need a Docker image to load my model. So, how do I convert keras model into Docker image. I searched through internet but found nothing that explained the process clearly. How to make docker image of keras model, how to load it to sagemaker. And also how to load my data from a h5 file into S3 bucket for training? Can anyone please help me in getting clear explanation?
You can convert your Keras model to a tf.estimator and train using the TensorFlow framework estimators in SageMaker. This conversion is pretty basic though; I reimplemented my models in TensorFlow using the tf.keras API, which makes the model nearly identical, and trained with the SageMaker TF estimator in script mode. My initial approach using pure Keras models was based on bring-your-own-algorithm containers, similar to the answer by Matthew Arthur.
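A rough sketch of the conversion mentioned above, assuming a saved Keras HDF5 model and a TF 1.x-era API (check that tf.keras.estimator.model_to_estimator exists in your TensorFlow version):

    import tensorflow as tf

    model = tf.keras.models.load_model("model.h5")                       # assumed path to your trained model
    estimator = tf.keras.estimator.model_to_estimator(keras_model=model)
    # estimator.train(input_fn=...) can then be used inside the SageMaker TensorFlow container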
0
false
1
5,875
2018-12-25 21:14:39.453
Installing Python Dependencies locally in project
I am coming from NodeJS and learning Python and was wondering how to properly install the packages in requirements.txt file locally in the project. For node, this is done by managing and installing the packages in package.json via npm install. However, the convention for Python project seems to be to add packages to a directory called lib. When I do pip install -r requirements.txt I think this does a global install on my computer, similar to nodes npm install -g global install. How can I install the dependencies of my requirements.txt file in a folder called lib?
Use this command: pip install -r requirements.txt -t <path-to-the-lib-directory>
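A minimal sketch of how the script can then pick those packages up at runtime (the lib folder name and the imported package are just examples):

    import os
    import sys

    # after running: pip install -r requirements.txt -t lib
    sys.path.insert(0, os.path.join(os.path.dirname(__file__), "lib"))
    import requests  # any package installed into ./lib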
1.2
true
1
5,876
2018-12-26 11:44:32.850
P4Python check if file is modified after check-out
I need to check-in the file which is in client workspace. Before check-in i need to verify if the file has been changed. Please tell me how to check this.
Use the p4 diff -sr command. This will do a diff of opened files and return the names of ones that are unchanged.
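A rough P4Python sketch of the same check (assuming a configured connection and client workspace; the -sa flag is the complement that lists opened files that do differ):

    from P4 import P4

    p4 = P4()
    p4.connect()
    unchanged = p4.run("diff", "-sr")   # opened files identical to the depot revision
    changed = p4.run("diff", "-sa")     # opened files that differ (or are missing)
    p4.disconnect()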
1.2
true
1
5,877
2018-12-26 21:26:16.360
How can I source two paths for the ROS environmental variable at the same time?
I have a problem with using the rqt_image_view package in ROS. Each time when I type rqt_image_view or rosrun rqt_image_view rqt_image_view in terminal, it will return: Traceback (most recent call last): File "/opt/ros/kinetic/bin/rqt_image_view", line 16, in plugin_argument_provider=add_arguments)) File "/opt/ros/kinetic/lib/python2.7/dist-packages/rqt_gui/main.py", line 59, in main return super(Main, self).main(argv, standalone=standalone, plugin_argument_provider=plugin_argument_provider, plugin_manager_settings_prefix=str(hash(os.environ['ROS_PACKAGE_PATH']))) File "/opt/ros/kinetic/lib/python2.7/dist-packages/qt_gui/main.py", line 338, in main from python_qt_binding import QT_BINDING ImportError: cannot import name QT_BINDING In the /.bashrc file, I have source : source /opt/ros/kinetic/setup.bash source /home/kelu/Dropbox/GET_Lab/leap_ws/devel/setup.bash --extend source /eda/gazebo/setup.bash --extend They are the default path of ROS, my own working space, the robot simulator of our university. I must use all of them. I have already finished many projects with this environmental variable setting. However, when I want to use the package rqt_image_view today, it returns the above error info. When I run echo $ROS_PACKAGE_PATH, I get the return: /eda/gazebo/ros/kinetic/share:/home/kelu/Dropbox/GET_Lab/leap_ws/src:/opt/ros/kinetic/share And echo $PATH /usr/local/cuda/bin:/opt/ros/kinetic/bin:/usr/local/cuda/bin:/usr/local/cuda/bin:/home/kelu/bin:/home/kelu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin Then I only source the /opt/ros/kinetic/setup.bash ,the rqt_image_view package runs!! It seems that, if I want to use rqt_image_view, then I can not source both /opt/ros/kinetic/setup.bash and /home/kelu/Dropbox/GET_Lab/leap_ws/devel/setup.bash at the same time. Could someone tell me how to fix this problem? I have already search 5 hours in google and haven't find a solution.
Different solutions to try: It sounds like the first path, /eda/gazebo/ros/kinetic/share or /home/kelu/Dropbox/GET_Lab/leap_ws/src, has an rqt_image_view package that is being used; try to remove that dependency. Have you tried switching the order in which the setup files are sourced? This depends on how the rqt_image_view package was built, e.g. from source or through a package manager. Initially, it sounds like there is a problem with the paths being searched, or the wrong package being run, since the package works with the default ROS environment setup.
0
false
1
5,878
2018-12-27 09:49:47.840
how to constrain scipy curve_fit in positive result
I'm using scipy curve_fit to curve a line for retention. however, I found the result line may produce negative number. how can i add some constrain? the 'bounds' only constrain parameters not the results y
One of the simpler ways to keep the result positive is to apply a log transformation to y. Get the best fit for the log-transformed y, then apply the exponential transformation to recover the actual fitted values or any new value you predict.
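A small sketch of that idea with curve_fit, fitting in log space so the fitted curve stays positive (the exponential retention model and the data points are assumptions for illustration):

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b):                # assumed retention model
        return a * np.exp(-b * x)

    def log_model(x, a, b):
        return np.log(model(x, a, b))

    xdata = np.array([1., 2., 3., 4., 5.])
    ydata = np.array([0.9, 0.7, 0.5, 0.4, 0.3])

    popt, _ = curve_fit(log_model, xdata, np.log(ydata), p0=(1.0, 0.1))
    print(model(xdata, *popt))         # back-transformed predictions, always positive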
0
false
1
5,879
2018-12-27 10:57:53.617
Vpython using Spyder : how to prevent browser tab from opening?
I am using vpython library in spyder. After importing the library when I call simple function like print('x') or carry out any assignment operation and execute the program, immediately a browser tab named localhost and port address opens up and I get the output in console {if I used print function}. I would like to know if there is any option to prevent the tab from opening and is it possible to make the tab open only when it is required. PS : I am using windows 10, chrome as browser, python 3.5 and spyder 3.1.4.
There is work in progress to prevent the opening of a browser tab when there are no 3D objects or graph to display. I don't know when this will be released.
0
false
1
5,880
2018-12-27 16:54:21.267
ImportError: cannot import name 'AFAVSignature'
I get this error after already having installed autofocus when I try to run a .py file from the command line that contains the line: from autofocus import Autofocus2D Output: ImportError: cannot import name 'AFAVSignature' Is anyne familiar with this package and how to import it? Thanks
It doesn't look like the library is supported on Python 3. I was getting the same error, but when I removed that line from __init__.py I found another error, something like 'print e' not working (Python 2 print syntax), so I put the line back in, imported it with Python 2, and it worked.
0
false
1
5,881
2018-12-28 00:04:02.473
how can I find out which python virtual environment I am using?
I have several virtual environment in my computer and sometimes I am in doubt about which python virtual environment I am using. Is there an easy way to find out which virtual environment I am connected to?
Usually it's set to display in your prompt. You can also try typing which python or which pip in your terminal to see if it points to your venv location, and which one. (Use where instead of which on Windows.)
0.997458
false
2
5,882
2018-12-28 00:04:02.473
how can I find out which python virtual environment I am using?
I have several virtual environment in my computer and sometimes I am in doubt about which python virtual environment I am using. Is there an easy way to find out which virtual environment I am connected to?
From a shell prompt, you can just do echo $VIRTUAL_ENV (or in Windows cmd.exe, echo %VIRTUAL_ENV%). From within Python, sys.prefix provides the root of your Python installation (the virtual environment if active), and sys.executable tells you which Python executable is running your script.
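A two-line check from inside Python:

    import sys
    print(sys.prefix)       # root of the active environment
    print(sys.executable)   # the interpreter binary actually running your code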
0.99039
false
2
5,882
2018-12-30 14:34:30.510
how to delete django relation and rebuild model
I've made a mistake with my Django app and messed up my model. I want to delete it and then recreate it - how do I do that? I get this when I try to migrate - I just want to drop it: relation "netshock_todo" already exists. Thanks in advance
Delete all of your migration files except __init__.py. Then go to the database, find the django_migrations table, and delete all of its rows. Then run the makemigrations and migrate commands.
1.2
true
1
5,883
2018-12-31 14:33:34.473
Scrapy shell doesn't crawl web page
I am trying to use Scrapy shell to try and figure out the selectors for zone-h.org. I run scrapy shell 'webpage' afterwards I tried to view the content to be sure that it is downloaded. But all I can see is a dash icon (-). It doesn't download the page. I tried to enter the website to check if my connection to the website is somehow blocked, but it was reachable. I tried setting user agent to something more generic like chrome but no luck there either. The website is blocking me somehow but I don't know how can I bypass it. I digged through the the website if they block crawling and it doesn't say it is forbidden to crawl it. Can anyone help out?
Can you use scrapy shell "webpage" on another webpage that you know works / doesn't block scraping? Have you tried using the view(response) command to open up what Scrapy sees in a web browser? When you go to the webpage using a normal browser, are you redirected to another, final homepage? If so, try using the final homepage's URL in your scrapy shell command. Do you have firewalls that could prevent a Python/command-line app from connecting to the internet?
0
false
1
5,884
2019-01-03 23:22:36.667
How to add to pythonpath in virtualenvironment
On my windows machine I created a virtual environement in conda where I run python 3.6. I want to permanently add a folder to the virtual python path environment. If I append something to sys.path it is lost on exiting python. Outside of my virtual enviroment I can just add to user variables by going to advanced system settings. I have no idea how to do this within my virtual enviroment. Any help is much appreciated.
If you are on Windows 10+, this should work: 1) Click on the Windows button on the screen or on the keyboard, both in the bottom left section. 2) Type "Environment Variables" (without the quotation marks, of course). 3) Click on the option that says something like "Edit the System Environment Variables". 4) Click on the "Advanced" tab, and then click "Environment Variables" (near the bottom). 5) Click "Path" in the top box - it should be the 3rd option - and then click "Edit" (the top one). 6) Click "New" at the top, and then add the path to the folder you want to include. 7) Click "OK" at the bottom of all the pages that were opened as a result of the above-described actions to save. That should work; please let me know in the comments if it doesn't.
-0.201295
false
1
5,885
2019-01-04 08:03:05.297
Do Dash apps reload all data upon client log in?
I'm wondering about how a dash app works in terms of loading data, parsing and doing initial calcs when serving to a client who logs onto the website. For instance, my app initially loads a bunch of static local csv data, parses a bunch of dates and loads them into a few pandas data frames. This data is then displayed on a map for the client. Does the app have to reload/parse all of this data every time a client logs onto the website? Or does the dash server load all the data only the first time it is instantiated and then just dish it out every time a client logs on? If the data reloads every time, I would then use quick parsers like udatetime, but if not, id prefer to use a convenient parser like pendulum which isn't as efficient (but wouldn't matter if it only parses once). I hope that question makes sense. Thanks in advance!
The only thing that is called on every page load is the function you can assign to app.layout. This is useful if you want to display dynamic content like the current date on your page. Everything else is just executed once when the app is starting. This means if you load your data outside the app.layout (which I assume is the case) everything is loaded just once.
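A minimal Dash sketch of that split (the file name and layout are illustrative): the DataFrame is read once at import time, while the layout function runs on every page load:

    import dash
    import dash_html_components as html
    import pandas as pd

    df = pd.read_csv("stations.csv")       # executed once, when the server starts

    app = dash.Dash(__name__)

    def serve_layout():                    # re-evaluated on each page load
        return html.Div("Rows loaded: {}".format(len(df)))

    app.layout = serve_layout

    if __name__ == "__main__":
        app.run_server()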
1.2
true
1
5,886
2019-01-05 23:50:56.660
How do i implement Logic to Django?
So I have an assignment to build a web interface for a smart sensor, I've already written the python code to read the data from the sensor and write it into sqlite3, control the sensor etc. I've built the HTML, CSS template and implemented it into Django. My goal is to run the sensor reading script pararel to the Django interface on the same server, so the server will do all the communication with the sensor and the user will be able to read and configure the sensor from the web interface. (Same logic as modern routers - control and configure from a web interface) Q: Where do I put my sensor_ctl.py script in my Django project and how I make it to run independent on the server. (To read sensor data 24/7) Q: Where in my Django project I use my classes and method from sensor_ctl.py to write/read data to my djangos database instead of the local sqlite3 database (That I've used to test sensor_ctl.py)
Place your code in the app/appname/management/commands folder and follow the official guide for custom management commands. Then you will be able to use your custom command like this: ./manage.py getsensorinfo. Once you have this command registered, you can put it in cron and it will be executed every minute. Secondly, you need to rewrite your code to use Django ORM models, like this: Stat.objects.create(temp1=60, temp2=70) instead of INSERT INTO....
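A minimal sketch of such a command (the app name, model and sensor call are assumptions; the file would live under the management/commands folder mentioned above, e.g. getsensorinfo.py):

    from django.core.management.base import BaseCommand
    from myapp.models import Stat             # assumed model with temp1/temp2 fields

    class Command(BaseCommand):
        help = "Read the sensor once and store the values"

        def handle(self, *args, **options):
            temp1, temp2 = 60, 70              # replace with the real calls into sensor_ctl.py
            Stat.objects.create(temp1=temp1, temp2=temp2)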
1.2
true
1
5,887
2019-01-06 02:49:09.817
How does selenium work with hosting services?
I have a Flask app that uses selenium to get data from a website. I have spent 10+ hours trying to get heroku to work with it, but no success. My main problem is selenium. with heroku, there is a "buildpack" that you use to get selenium working with it, but with all the other hosting services, I have found no information. I just would like to know how to get selenium to work with any other recommended service than heroku. Thank you.
You need a hosting service that is able to install Chrome, chromedriver and the other dependencies. Look for a Virtual Private Server (VPS), a dedicated server or cloud hosting, but not shared hosting.
0
false
1
5,888
2019-01-06 10:28:46.997
How do I root in python (other than square root)?
I'm trying to make a calculator in python, so when you type x (root) y it will give you the x root of y, e.g. 4 (root) 625 = 5. I'm aware of how to do math.sqrt() but is there a way to do other roots?
If you want 625^(1/4) (which is the same as the 4th root of 625), then you type 625**(1/4); ** is the exponent operator in Python. print(625**(1/4)) gives the output 5.0. To generalize: if you want to find the xth root of y, you do y**(1/x).
0.673066
false
1
5,889
2019-01-08 17:44:43.800
TF-IDF + Multiple Regression Prediction Problem
I have a dataset of ~10,000 rows of vehicles sold on a portal similar to Craigslist. The columns include price, mileage, no. of previous owners, how soon the car gets sold (in days), and most importantly a body of text that describes the vehicle (e.g. "accident free, serviced regularly"). I would like to find out which keywords, when included, will result in the car getting sold sooner. However I understand how soon a car gets sold also depends on the other factors especially price and mileage. Running a TfidfVectorizer in scikit-learn resulted in very poor prediction accuracy. Not sure if I should try including price, mileage, etc. in the regression model as well, as it seems pretty complicated. Currently am considering repeating the TF-IDF regression on a particular segment of the data that is sufficiently huge (perhaps Toyotas priced at $10k-$20k). The last resort is to plot two histograms, one of vehicle listings containing a specific word/phrase and another for those that do not. The limitation here would be that the words that I choose to plot will be based on my subjective opinion. Are there other ways to find out which keywords could potentially be important? Thanks in advance.
As you mentioned, you can only do so much with the body of text, which reflects how much influence the text has on selling the cars. Even though the model gives very poor prediction accuracy, you can still go ahead and inspect the feature importances to understand which words drive the sales. Include phrases in your tfidf vectorizer by setting the ngram_range parameter to (1,2); this might give you a small indication of which phrases influence the sale of a car. I would also suggest setting the norm parameter of tfidf to None to check whether it has an influence (by default, it applies the l2 norm). The results will also depend on the model you are using, so try changing the model as a last option.
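A small sketch of those settings (the two listing texts are made up for illustration):

    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["accident free, serviced regularly", "one owner, low mileage"]
    vec = TfidfVectorizer(ngram_range=(1, 2), norm=None)   # unigrams and bigrams, no l2 normalisation
    X = vec.fit_transform(docs)
    print(vec.get_feature_names())                          # the words/phrases whose weights you can inspect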
1.2
true
1
5,890
2019-01-09 15:12:08.163
Linux Jupyter Notebook : "The kernel appears to have died. It will restart automatically"
I am using the PYNQ Linux on Zedboard and when I tried to run a code on Jupyter Notebook to load a model.h5 I got an error message: "The kernel appears to have died. It will restart automatically" I tried to upgrade keras and Jupyter but still have the same error I don't know how to fix this problem ?
The model is too large to be loaded into memory, so the kernel has died.
0
false
1
5,891
2019-01-09 22:59:39.340
Difference between Python Interpreter and IDLE?
For homework in my basic python class, we have to start python interpreter in interactive mode and type a statement. Then, we have to open IDLE and type a statement. I understand how to write statements in both, but can't quite tell them apart? I see that there are to different desktop apps for python, one being the python 3.7 (32-bit), and the other being IDLE. Which one is the interpreter, and how do I get it in interactive mode? Also, when I do open IDLE do I put my statement directly in IDLE or, do I open a 'new file' and do it like that? I'm just a bit confused about the differences between them all. But I do really want to learn this language! Please help!
Python, unlike some languages, can be written one line at a time, with feedback after every line. This is called interactive mode. You will know you are in interactive mode if you see ">>>" on the far left side of the window. This mode is really only useful for small tasks you don't think will come up again. Most developers write a whole program at once, save it with a name that ends in ".py", and run it in the interpreter to get the results.
1.2
true
1
5,892
2019-01-10 15:30:10.413
How to handle SQL dump with Python
I received a data dump of the SQL database. The data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python. Can anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. TLDR; Received an .sql file and no clue how to process/analyze the data that's in the file in Python. Need help in necessary steps to make the .sql usable in Python.
It would be an extraordinarily difficult process to try to construct any sort of Python program that would be capable of parsing the SQL syntax of any such of a dump-file and to try to do anything whatsoever useful with it. "No. Absolutely not. Absolute nonsense." (And I have over 30 years of experience, including senior management.) You need to go back to your team, and/or to your manager, and look for a credible way to achieve your business objective ... because, "this isn't it." The only credible thing that you can do with this file is to load it into another mySQL database ... and, well, "couldn't you have just accessed the database from which this dump came?" Maybe so, maybe not, but "one wonders." Anyhow – your team and its management need to "circle the wagons" and talk about your credible options. Because, the task that you've been given, in my professional opinion, "isn't one." Don't waste time – yours, or theirs.
0.201295
false
2
5,893
2019-01-10 15:30:10.413
How to handle SQL dump with Python
I received a data dump of the SQL database. The data is formatted in an .sql file and is quite large (3.3 GB). I have no idea where to go from here. I have no access to the actual database and I don't know how to handle this .sql file in Python. Can anyone help me? I am looking for specific steps to take so I can use this SQL file in Python and analyze the data. TLDR; Received an .sql file and no clue how to process/analyze the data that's in the file in Python. Need help in necessary steps to make the .sql usable in Python.
Eventually I had to install MAMP to create a local MySQL server. I imported the SQL dump with a program like SQLyog that lets you edit SQL databases. This made it possible to read the database from Python using SQLAlchemy, MySQL Connector and pandas.
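A short sketch of that last step (the connection string uses MAMP-style defaults, which are assumptions; adjust the port, user and database name):

    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("mysql+mysqlconnector://root:root@localhost:8889/dumpdb")
    df = pd.read_sql("SELECT * FROM some_table", engine)   # table name assumed
    print(df.head())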
0.386912
false
2
5,893
2019-01-10 18:42:54.360
Interfacing a QR code recognition to a django database
I'm coming to you with the following issue: I have a bunch of physical boxes onto which I still stick QR codes generated using a python module named qrcode. In a nutshell, what I would like to do is everytime someone wants to take the object contained in a box, he scans the qr code with his phone, then takes it and put it back when he is done, not forgetting to scan the QR code again. Pretty simple, isn't it? I already have a django table containing all my objects. Now my question is related to the design. I suspect the easiest way to achieve that is to have a POST request link in the QR code which will create a new entry in a table with the name of the object that has been picked or put back, the time (I would like to store this information). If that's the correct way to do, how would you approach it? I'm not too sure I see how to make a POST request with a QR code. Would you have any idea? Thanks. PS: Another alternative I can think of would be to a link in the QR code to a form with a dummy button the user would click on. Once clicked the button would update the database. But I would fine a solution without any button more convenient...
The question boils down to a few choices: (a) what data do you want to encode into the QR code; (b) what app will you use to scan the QR code; and (c) how do you want the app to use / respond to the encoded data. If you want your users to use off-the-shelf QR code readers (like free smartphone apps), then encoding a full URL to the appropriate API on your backend makes sense. Whether this should be a GET or POST depends on the QR code reader. I'd expect most to use GET, but you should verify that for your choice of app. That should be functionally fine, if you don't have any concerns about who should be able to scan the code. If you want more control, e.g. you'd like to keep track of who scanned the code or other info not available to the server side just from a static URL request, you need a different approach. Something like, store the item ID (not URL) in the QR code; create your own simple QR code scanner app (many good examples exist) and add a little extra logic to that client, like requiring the user to log in with an ID + password, and build the URL dynamically from the item ID and the user ID. Many security variations possible (like JWT token) -- how you do that won't be dictated by the contents of the QR code. You could do a lot of other things in that QR code scanner / client, like add GPS location, ask the user to indicate why or where they're taking the item, etc. So you can choose between a simple way with no controls, and a more complex way that would allow you to layer in whatever other controls and extra data you need.
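For the encoding side, a tiny sketch with the qrcode module already in use (the URL pattern and item ID are hypothetical):

    import qrcode

    item_id = 42   # primary key of the object in the Django table
    img = qrcode.make("https://example.com/api/scan/{}".format(item_id))   # assumed endpoint
    img.save("box_{}.png".format(item_id))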
1.2
true
1
5,894
2019-01-11 08:09:37.980
How can I read a file having different column for each rows?
my data looks like this. 0 199 1028 251 1449 847 1483 1314 23 1066 604 398 225 552 1512 1598 1 1214 910 631 422 503 183 887 342 794 590 392 874 1223 314 276 1411 2 1199 700 1717 450 1043 540 552 101 359 219 64 781 953 10 1707 1019 463 827 675 874 470 943 667 237 1440 892 677 631 425 How can I read this file structure in python? I want to extract a specific column from rows. For example, If I want to extract value in the second row, second column, how can I do that? I've tried 'loadtxt' using data type string. But it requires string index slicing, so that I could not proceed because each column has different digits. Moreover, each row has a different number of columns. Can you guys help me? Thanks in advance.
Use something like this to split it: split1 = txt.split("\n"), then split2 = [item.split() for item in split1]. Splitting without an argument handles the runs of whitespace, and split2[row][col] then gives you a single value.
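A fuller sketch that reads the file directly and covers the "second row, second column" example (the filename is an assumption; split() without an argument copes with the varying spacing):

    rows = []
    with open("data.txt") as fh:
        for line in fh:
            rows.append(line.split())   # ragged rows are fine in a list of lists

    print(rows[1][1])   # second row, second column (zero-based indices)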
0
false
1
5,895
2019-01-11 11:02:30.650
How to align training and test set when using pandas `get_dummies` with `drop_first=True`?
I have a data set from telecom company having lots of categorical features. I used the pandas.get_dummies method to convert them into one hot encoded format with drop_first=True option. Now how can I use the predict function, test input data needs to be encoded in the same way, as the drop_first=True option also dropped some columns, how can I ensure that encoding takes place in similar fashion. Data set shape before encoding : (7043, 21) Data set shape after encoding : (7043, 31)
When not using drop_first=True you have two options: (1) Perform the one-hot encoding before splitting the data into training and test sets (or combine the data sets, perform the one-hot encoding, and split them again). (2) Align the data sets after one-hot encoding: an inner join removes the features that are not present in one of the sets (they would be useless anyway): train, test = train.align(test, join='inner', axis=1). You noted (correctly) that method 2 may not do what you expect because you are using drop_first=True, so you are left with method 1.
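A brief sketch of method 1, assuming train and test are existing DataFrames with the same columns:

    import pandas as pd

    combined = pd.concat([train, test], keys=["train", "test"])
    encoded = pd.get_dummies(combined, drop_first=True)   # one consistent encoding for both sets
    train_enc = encoded.xs("train")
    test_enc = encoded.xs("test")

At prediction time, new inputs have to go through the same encoding (or be reindexed to train_enc.columns with missing dummies filled with 0) so the column layout matches.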
0.386912
false
1
5,896
2019-01-11 19:30:04.483
Python anytree application challenges with my jupyter notebook ​
I am working in python 3.7.0 through a 5.6.0 jupyter notebook inside Anaconda Navigator 1.9.2 running in a windows 7 environment. It seems like I am assuming a lot of overhead, and from the jupyter notebook, python doesn’t see the anytree application module that I’ve installed. (Anytree is working fine with python from my command prompt.) I would appreciate either 1) IDE recommendations or 2) advise as to how to make my Anaconda installation better integrated. ​
The core problem with my Python IDE environment was that I could not use the functions in the anytree module. The anytree functions worked fine from the command-prompt Python, but I only saw error messages from any of the Anaconda IDE portals. Solution: 1) From the Windows start menu, I opened Anaconda Navigator, "run as administrator". 2) Select Environments. My application only has the single environment, "base". 3) Open the "terminal" selection, and you then have a command terminal window in that environment. 4) Execute conda install -c techtron anytree and the anytree module functions are now available. 5) Execute conda update -n base --all and all the modules are updated to be current.
1.2
true
1
5,897
2019-01-12 03:01:39.153
How do I get VS Code to recognize modules in virtual environment?
I set up a virtual environment in python 3.7.2 using "python -m venv foldername". I installed PIL in that folder. Importing PIL works from the terminal, but when I try to import it in VS code, I get an ImportError. Does anyone know how to get VS code to recognize that module? I've tried switching interpreters, but the problem persists.
I ended up changing the python.venvPath setting to a different folder, and then moving the virtual env folder (the one with my project in it) to that folder. After restarting VS Code, it worked.
0
false
1
5,898
2019-01-15 06:52:45.623
Good resources for video processing in Python?
I am using the yolov3 model running on several surveillance cameras. Besides this I also run tensorflow models on these surveillaince streams. I feel a little lost when it comes to using anything but opencv for rtsp streaming. So far I haven't seen people use anything but opencv in python. Are there any places I should be looking into. Please feel free to chime in. Sorry if the question is a bit vague, but I really don't know how to put this better. Feel free to edit mods.
Of course there are alternatives to OpenCV in Python when it comes to video capture, but in my experience none of them performed better.
1.2
true
1
5,899
2019-01-15 06:54:00.607
Automate File loading from s3 to snowflake
New JSON files are dumped into an S3 bucket daily. I have to create a solution which picks up the latest file when it arrives, parses the JSON and loads it into the Snowflake data warehouse. Could someone please share your thoughts on how we can achieve this?
There are some aspects to consider, such as: is it batch or streaming data; do you want to retry loading the file in case the data or format is wrong; and do you want to make it a generic process able to handle different file formats/types (csv/json) and stages. In our case we built a generic S3-to-Snowflake load using Python and Luigi, and also implemented the same using SSIS, but for csv/txt files only.
0
false
1
5,900
2019-01-15 20:16:34.613
pythonnet clr is not recognized in jupyter notebook
I have installed pythonnet to use clr package for a specific API, which only works with clr in python. Although in my python script (using command or regular .py files) it works without any issues, in jupyter notebook, import clr gives this error, ModuleNotFoundError: No module named 'clr'. Any idea how to address this issue?
Since you intend to use clr in Jupyter, you could also run !pip install pythonnet in a Jupyter cell: once the first time, and again on any later occasion if the VM is frequently nuked.
0
false
2
5,901
2019-01-15 20:16:34.613
pythonnet clr is not recognized in jupyter notebook
I have installed pythonnet to use clr package for a specific API, which only works with clr in python. Although in my python script (using command or regular .py files) it works without any issues, in jupyter notebook, import clr gives this error, ModuleNotFoundError: No module named 'clr'. Any idea how to address this issue?
Here is a simple suggestion: compare sys.path in both cases and look at the differences. Your IPython kernel in Jupyter is probably searching different directories than the normal Python process.
1.2
true
2
5,901
2019-01-15 20:47:18.657
Tried importing Java 8 JDK for PySpark, but PySpark still won't let me start a session
Ok here's my basic information before I go on: MacBook Pro: OS X 10.14.2 Python Version: 3.6.7 Java JDK: V8.u201 I'm trying to install the Apache Spark Python API (PySpark) on my computer. I did a conda installation: conda install -c conda-forge pyspark It appeared that the module itself was properly downloaded because I can import it and call methods from it. However, opening the interactive shell with myuser$ pyspark gives the error: No Java runtime present, requesting install. Ok that's fine. I went to Java's download page to get the current JDK, in order to have it run, and downloaded it on Safari. Chrome apparently doesn't support certain plugins for it to work (although initially I did try to install it with Chrome). Still didn't work. Ok, I just decided to start trying to use it. from pyspark.sql import SparkSession It seemed to import the module correctly because it was auto recognizing SparkSession's methods. However, spark = SparkSession.builder.getOrCreate() gave the error: Exception: Java gateway process exited before sending its port number Reinstalling the JDK doesn't seem to fix the issue, and now I'm stuck with a module that doesn't seem to work because of an issue with Java that I'm not seeing. Any ideas of how to fix this problem? Any and all help is appreciated.
This problem occurs with Spark 2.4; please try Spark 2.3.
0
false
1
5,902
2019-01-16 08:53:00.437
Install python packages offline on server
I want to install some packages on a server which has no access to the internet, so I have to download the packages and transfer them to the server. But I do not know how to install them there.
Download the package from the website and extract the tarball, then run python setup.py install from inside the extracted directory.
-0.201295
false
1
5,903
2019-01-17 08:51:46.440
Dask: delayed vs futures and task graph generation
I have a few basic questions on Dask: Is it correct that I have to use Futures when I want to use dask for distributed computations (i.e. on a cluster)? In that case, i.e. when working with futures, are task graphs still the way to reason about computations. If yes, how do I create them. How can I generally, i.e. no matter if working with a future or with a delayed, get the dictionary associated with a task graph? As an edit: My application is that I want to parallelize a for loop either on my local machine or on a cluster (i.e. it should work on a cluster). As a second edit: I think I am also somewhat unclear regarding the relation between Futures and delayed computations. Thx
1) Yup. If you're sending the data through a network, you have to have some way of asking the computer doing the computing for you how the number-crunching is coming along, and Futures represent more or less exactly that. 2) No. With Futures, you're executing the functions eagerly - spinning up the computations as soon as you can, then waiting for the results to come back (from another thread/process locally, or from some remote you've offloaded the job onto). The relevant abstraction here would be a Queue (a Priority Queue, specifically). 3) For a Delayed instance, for instance, you could do some_delayed.dask, or for an Array, Array.dask; optionally wrap the whole thing in either dict() or vars(). I don't know for sure if it's reliably set up this way for every single API, though (I would assume so, but you know what they say about what assuming makes of the two of us...). 4) The simplest analogy would probably be: Delayed is essentially a fancy Python yield wrapper over a function; Future is essentially a fancy async/await wrapper over a function.
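A compact sketch of both styles side by side (a local Client is assumed; point it at a scheduler address for a real cluster):

    from dask import delayed, compute
    from dask.distributed import Client

    def square(x):
        return x ** 2

    client = Client()                           # local cluster by default

    # delayed: build the task graph lazily, run it later
    lazy = [delayed(square)(i) for i in range(10)]
    print(dict(lazy[0].dask))                   # the graph behind one delayed value, as a dict
    results = compute(*lazy)                    # executed via the scheduler

    # futures: eager execution, gather results when ready
    futures = client.map(square, range(10))
    print(client.gather(futures))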
1.2
true
1
5,904
2019-01-19 00:00:55.483
Python how to get labels of a generated adjacency matrix from networkx graph?
If i've an networkx graph from a python dataframe and i've generated the adjacency matrix from it. So basically, how to get labels of that adjacency matrix ?
Assuming you refer to the nodes' labels, networkx only keeps the indices when extracting a graph's adjacency matrix. Networkx represents each node as an index, and you can add more attributes if you wish. All of a node's attributes except for the index are kept in a dictionary. When generating the graph's adjacency matrix only the indices are kept, so if you only wish to keep a single string per node, consider indexing nodes by that string when generating your graph.
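A small sketch of keeping the labels next to the matrix (the string-labelled toy graph is for illustration):

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([("a", "b"), ("b", "c")])
    order = list(G.nodes())                       # the labels, in a fixed order
    A = nx.to_numpy_array(G, nodelist=order)      # rows/columns follow `order`
    print(order)
    print(A)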
1.2
true
2
5,905
2019-01-19 00:00:55.483
Python how to get labels of a generated adjacency matrix from networkx graph?
If i've an networkx graph from a python dataframe and i've generated the adjacency matrix from it. So basically, how to get labels of that adjacency matrix ?
If the adjacency matrix is generated without passing a nodelist, then you can call G.nodes to obtain the default node list, which should correspond to the rows of the adjacency matrix.
-0.201295
false
2
5,905
2019-01-20 12:48:34.697
How to wait for some time between user inputs in tkinter?
I am making a GUI program where the user can draw on a canvas in Tkinter. What I want to do is that I want the user to be able to draw on the canvas and when the user releases the Mouse-1, the program should wait for 1 second and clear the canvas. If the user starts drawing within that 1 second, the canvas should stay as it is. I am able to get the user input fine. The draw function in my program is bound to B1-Motion. I have tried things like inducing a time delay but I don't know how to check whether the user has started to draw again. How do I check whether the user has started to draw again?
You can bind the mouse click event to a function that sets a bool to True or False, then use after to call a function after 1 second which, depending on that bool, clears the screen.
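A minimal sketch of that pattern with after/after_cancel (the canvas size and dot drawing are arbitrary choices):

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=400, height=400, bg="white")
    canvas.pack()
    pending = None

    def draw(event):
        global pending
        if pending is not None:                  # drawing resumed within the delay: cancel the clear
            root.after_cancel(pending)
            pending = None
        canvas.create_oval(event.x, event.y, event.x + 2, event.y + 2, fill="black")

    def released(event):
        global pending
        pending = root.after(1000, lambda: canvas.delete("all"))  # clear 1 second after release

    canvas.bind("<B1-Motion>", draw)
    canvas.bind("<ButtonRelease-1>", released)
    root.mainloop()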
1.2
true
1
5,906
2019-01-21 21:13:07.617
Persistent Machine Learning
I have a super basic machine learning question. I've been working through various tutorials and online classes on machine learning and the various techniques to learning how to use it, but what I'm not seeing is the persistent application piece. So, for example, I train a network to recognize what a garden gnome looks like, but, after I run the training set and validate with test data, how do I persist the network so that I can feed it an individual picture and have it tell me whether the picture is of a garden gnome or not? Every tutorial seems to have you run through the training/validation sets without any notion as of how to host the network in a meaningful way for future use. Thanks!
Use the Python pickle library to dump your trained model to your hard drive; you can then load the model back later and feed it new inputs, which gives you the persistence you're after.
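A bare-bones sketch of that round trip (the filename and the scikit-learn-style model object are assumptions; deep-learning frameworks usually have their own save/load functions that are preferable for networks):

    import pickle

    # after training
    with open("gnome_classifier.pkl", "wb") as fh:
        pickle.dump(model, fh)            # 'model' is your trained estimator

    # later, e.g. behind a web endpoint
    with open("gnome_classifier.pkl", "rb") as fh:
        model = pickle.load(fh)
    prediction = model.predict(new_image_features)   # assumed pre-extracted features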
0
false
1
5,907
2019-01-21 23:31:10.607
Is it possible to extract an SSRS report embedded in the body of an email and export to csv?
We currently are receiving reports via email (I believe they are SSRS reports) which are embedded in the email body rather than attached. The reports look like images or snapshots; however, when I copy and paste the "image" of a report into Excel, the column/row format is retained and it pastes into Excel perfectly, with the columns and rows getting pasted into distinct columns and rows accordingly. So it isn't truly an image, as there is a structure to the embedded report. Right now, someone has to manually copy and paste each report into excel (step 1), then import the report into a table in SQL Server (step 2). There are 8 such reports every day, so the manual copy/pasting from the email into excel is very time consuming. The question is: is there a way - any way - to automate step 1 so that we don't have to manually copy and paste each report into excel? Is there some way to use python or some other language to detect the format of the reports in the emails, and extract them into .csv or excel files? I have no code to show as this is more of a question of - is this even possible? And if so, any hints as to how to accomplish it would be greatly appreciated.
The most efficient solution is to have the SSRS administrator (or you, if you have permissions) set the subscription to send as CSV. To change this in SSRS, right-click the report and then click Manage. Select "Subscriptions" on the left and then click Edit next to the subscription you want to change. Scroll down to Delivery Options and select CSV in the Render Format dropdown. Voila, you receive your report in the correct format and don't have to do any weird extraction.
0
false
1
5,908
2019-01-22 05:44:57.673
How to install sympy package in python
I am a beginner to python, I wanted to symbolic computations. I came to know with sympy installation into our pc we can do symbolic computation. I have installed python 3.6 and I am using anaconda nagavitor, through which I am using spyder as an editor. now I want to install symbolic package sympy how to do that. I checked some post which says use 'conda install sympy'. but where to type this? I typed this in spyder editor and I am getting syntax error. thankyou
To use conda install, open the Anaconda Prompt and enter the conda install sympy command. Alternatively, navigate to the scripts sub-directory in the Anaconda directory, and run pip install sympy.
0
false
2
5,909
2019-01-22 05:44:57.673
How to install sympy package in python
I am a beginner to python, I wanted to symbolic computations. I came to know with sympy installation into our pc we can do symbolic computation. I have installed python 3.6 and I am using anaconda nagavitor, through which I am using spyder as an editor. now I want to install symbolic package sympy how to do that. I checked some post which says use 'conda install sympy'. but where to type this? I typed this in spyder editor and I am getting syntax error. thankyou
In Anaconda Navigator: Click Environments (on the left). Choose your environment (if you have more than one). In the middle, pick "All" from the dropdown ("Installed" by default). Type sympy in the search box on the right. Check the package that shows up. Click Apply.
0.135221
false
2
5,909
2019-01-22 18:26:43.977
tkinter.root.destroy and cv2.imshow - X Windows system error
I found this rather annoying bug and I couldn’t find anything other than a unanswered question on the opencv website, hopefully someone with more knowledge about the two libraries will be able to point me in the right direction. I won’t provide code because that would be beside the point of learning what causes the crash. If I draw a tkinter window and then root.destroy() it, trying to draw a cv2.imshow window will result in a X Window System error as soon as the cv2.waitKey delay is over. I’ve tried to replicate in different ways and it always gets to the error (error_code 3 request_code 15 minor_code 0). It is worth noting that a root.quit() command won’t cause the same issue (as it is my understanding this method will simply exit the main loop rather than destroying the widgets). Also, while any cv2.imshow call will fail, trying to draw a new tkinter window will work just fine. What resources are being shared among the two libraries? What does root.destroy() cause in the X environment to prevent any cv2 window to be drawn? Debian Jessie - Python 3.4 - OpenCV 3.2.0
When you destroy the root window, it destroys all child windows as well. If cv2 uses a tkinter window, or a child window of the root window, it will fail once you destroy the root window.
0
false
1
5,910
2019-01-22 23:09:52.430
How do I use Pyinstaller to make a Mac file on Windows?
I am on Windows and I am trying to figure how to use Pyinstaller to make a file (on Windows) for a Mac. I have no trouble with Windows I am just not sure how I would make a file for another OS on it. What I tried in cmd was: pyinstaller -F myfile.py and I am not sure what to change to make a Mac compatible file.
Not possible without using a virtual machine; PyInstaller does not cross-compile, so a macOS build has to be made on macOS.
0
false
1
5,911