Q_CreationDate | Title | Question | Answer | Score | Is_accepted | N_answers | Q_Id |
---|---|---|---|---|---|---|---|
2018-06-06 11:33:58.087
|
Optimizing RAM usage when training a learning model
|
I have been working on creating and training a deep learning model for the first time. I did not have any knowledge about the subject prior to the project, and therefore my knowledge is limited even now.
I used to run the model on my own laptop, but after implementing a well-working OHE and SMOTE I simply couldn't run it on my own device anymore due to a MemoryError (8GB of RAM). Therefore I am currently running the model on a 30GB RAM RDP, which allows me to do so much more, I thought.
My code seems to have some horrible inefficiencies, and I wonder if they can be solved. One example is that by using pandas.concat my model's RAM usage skyrockets from 3GB to 11GB, which seems very extreme. Afterwards I drop a few columns, making the RAM spike to 19GB, but it actually returns to 11GB after the computation is completed (unlike the concat). I have also forced myself to stop using SMOTE for now, just because its RAM usage goes up way too much.
At the end of the code, where the training happens, the model breathes its final breath while trying to fit. What can I do to optimize this?
I have thought about splitting the code into multiple parts (for example preprocessing and training), but to do so I would need to store massive datasets as pickles, which can only reach 4GB (correct me if I'm wrong). I have also given thought to using pre-trained models, but I truly did not understand how this process works and how to use one in Python.
P.S.: I would also like my SMOTE back, if possible.
Thank you all in advance!
|
Slightly orthogonal to your actual question: if your high RAM usage is caused by holding the entire dataset in memory for training, you can eliminate that memory footprint by reading and storing only one batch at a time: read a batch, train on this batch, read the next batch, and so on.
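As a rough illustration, a minimal sketch of that batch-wise approach with pandas and Keras; the file name, chunk size, feature layout and model are illustrative assumptions, not from the original answer:
import pandas as pd
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# Tiny stand-in model; replace with the real architecture.
model = Sequential([Dense(64, activation="relu", input_shape=(20,)),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Stream the CSV in chunks so the full dataset never sits in RAM.
for chunk in pd.read_csv("train.csv", chunksize=10_000):
    X = chunk.drop(columns=["label"]).values  # assumed feature columns
    y = chunk["label"].values                 # assumed target column
    model.train_on_batch(X, y)                # train on this chunk only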
| 0 | false | 1 | 5,552 |
2018-06-07 17:31:59.093
|
ARIMA Forecasting
|
I have time series data which looks something like this:
Loan_id Loan_amount Loan_drawn_date
id_001 2000000 2015-7-15
id_003 100 2014-7-8
id_009 78650 2012-12-23
id_990 100 2018-11-12
I am trying to build an ARIMA forecasting model on this data, which has roughly 550 observations. These are the steps I have followed:
Converted the time series data into daily data and replaced NA values with 0. The data looks something like this:
Loan_id Loan_amount Loan_drawn_date
id_001 2000000 2015-7-15
id_001 0 2015-7-16
id_001 0 2015-7-17
id_001 0 2015-7-18
id_001 0 2015-7-19
id_001 0 2015-7-20
....
id_003 100 2014-7-8
id_003 0 2014-7-9
id_003 0 2014-7-10
id_003 0 2014-7-11
id_003 0 2014-7-12
id_003 0 2014-7-13
....
id_009 78650 2012-12-23
id_009 0 2012-12-24
id_009 0 2012-12-25
id_009 0 2012-12-26
id_009 0 2012-12-27
id_009 0 2012-12-28
...
id_990 100 2018-11-12
id_990 0 2018-11-13
id_990 0 2018-11-14
id_990 0 2018-11-15
id_990 0 2018-11-16
id_990 0 2018-11-17
id_990 0 2018-11-18
id_990 0 2018-11-19
Can anyone please suggest how I should proceed with these 0 values now?
Seeing the variance in the loan amount numbers, I would take the log of the loan amount. I am trying to build an ARIMA model for the first time, and I have read about all the methods of imputation, but there is nothing I can find. Can anyone please tell me how to proceed with this data?
|
I don't know your specific domain problem exactly, but these things usually apply in general:
If the NA values represent 0 values for your domain-specific problem, then replace them with 0 and then fit the ARIMA model (this would, for example, be the case if you are looking at daily sales and on some days you have 0 sales).
If the NA values represent unknown values for your domain-specific problem, then do not replace them and fit your ARIMA model (this would be the case if, on a specific day, the employee forgot to write down the amount of sales and it could be any number).
I probably would not use imputation at all. There are methods to fit an ARIMA model on time series that have missing values. These algorithms are probably also implemented somewhere in Python (but I don't know for sure, since I mostly use R).
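For that third option, a minimal sketch with statsmodels (my assumption for the Python side; the answer's author mostly uses R): SARIMAX fits via a Kalman filter and tolerates NaN values in the series, so the missing observations need no imputation. The (1, 1, 1) order is an arbitrary illustrative choice.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# A toy series with unknown (NaN) observations left unimputed.
series = pd.Series([100.0, np.nan, 120.0, np.nan, 90.0, 110.0, np.nan, 95.0])
result = SARIMAX(series, order=(1, 1, 1)).fit(disp=False)
print(result.forecast(steps=3))  # forecast the next 3 periods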
| 1.2 | true | 1 | 5,553 |
2018-06-08 11:15:45.900
|
Randomizing lists with variables in Python 3
|
I'm looking for a way to randomize lists in Python (which I already know how to do), but then to make sure that two particular elements aren't next to each other. For example, suppose I were seating people and numbering the list 0, 1, 2, 3, 4, 5 based on tables, but 2 people couldn't sit next to each other. How would I organize the list in a way that prohibits those 2 people from sitting next to each other?
|
As you say that you already know how to shuffle a list, the only extra requirement is that the two elements are not next to each other.
A simple way is to:
shuffle the full list
if the two elements ended up adjacent, choose a random valid position for the second one
exchange the two elements
Maximum cost: one shuffle, one random choice, one exchange
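A minimal sketch of that procedure, assuming the two constrained people are known by name and the list is long enough for a non-adjacent slot to exist:
import random

def seat(people, a, b):
    random.shuffle(people)                      # one shuffle
    i, j = people.index(a), people.index(b)
    if abs(i - j) == 1:                         # they ended up adjacent
        # any position non-adjacent to a (j itself is adjacent, so excluded)
        k = random.choice([p for p in range(len(people)) if abs(p - i) > 1])
        people[j], people[k] = people[k], people[j]  # one exchange
    return people

print(seat(["Ann", "Bob", "Cid", "Dee", "Eve"], "Ann", "Bob"))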
| 1.2 | true | 1 | 5,554 |
2018-06-09 00:49:48.297
|
How to check the SD card size before it is mounted, without requiring root
|
I want to check the SD card size in bash or Python. Right now I know df can check it when the SD card is mounted, or fdisk -l if root is available.
But how can I check the SD card size without mounting the card to the file system and without root permission? For example, if the SD card is not mounted and I issue df -h /dev/sdc, this returns a wrong size. In Python, the os.statvfs function returns the same content as well. I searched on Stack Overflow but did not find a solution yet.
|
Well, I found that lsblk -l can do the job. It tells the total size of the partitions.
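A minimal sketch of calling it from Python; the device name is an assumption (-b prints bytes, -n drops the header, -d shows the disk itself rather than its partitions):
import subprocess

out = subprocess.check_output(
    ["lsblk", "-b", "-n", "-d", "-o", "SIZE", "/dev/sdc"])
print(int(out.strip()), "bytes")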
| 0 | false | 1 | 5,555 |
2018-06-09 15:59:07.447
|
How to write a python program that 'scrapes' the results from a website for all possible combinations chosen from the given drop down menus?
|
There is a website that claims to predict the approximate salary of an individual on the basis of the following criteria, each presented as an individual drop-down:
Age : 5 options
Education : 3 Options
Sex : 3 Options
Work Experience : 4 Options
Nationality: 12 Options
On clicking the Submit button, the website outputs a bunch of text on a new page with an estimate of the salary in numerals.
So there are technically 5*3*3*4*12 = 2160 data points. I want to get those and arrange them in an Excel sheet. Then I would run a regression algorithm to guess the function this website has used. That is what I am hoping to achieve through this exercise; it is entirely for learning purposes, since I'm keen on learning these tools.
But I don't know how to go about it. Any relevant tutorial, documentation, or guide would help! I am programming in Python and I'd love to use it to achieve this task!
Thanks!
|
If you are uncomfortable asking them for the database as roganjosh suggested :) use Selenium. Write a Python script that controls the web driver and repeatedly sends requests for all possible combinations. The script is pretty simple: just a nested loop over each type of parameter/drop-down.
If you are sure that the values of each type do not depend on each other, check what request is sent to the server. If it is simply URL-encoded, like age=...&sex=...&..., then Selenium is not needed. Just generate such URLs for all possible combinations and call the server.
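A minimal sketch of that second suggestion; the URL, parameter names and option values are illustrative assumptions:
import itertools
import requests

ages = ["18-25", "26-35", "36-45", "46-55", "56+"]
education = ["highschool", "bachelor", "master"]
sex = ["male", "female", "other"]
experience = ["0-2", "3-5", "6-10", "10+"]
nationality = [f"country_{i}" for i in range(12)]

rows = []
for combo in itertools.product(ages, education, sex, experience, nationality):
    params = dict(zip(["age", "edu", "sex", "exp", "nat"], combo))
    resp = requests.get("https://example.com/salary", params=params)
    rows.append({**params, "salary_page": resp.text})  # 2160 requests total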
| 1.2 | true | 1 | 5,556 |
2018-06-09 16:51:02.213
|
Rasa-core, dealing with dates
|
I have a problem with Rasa Core. Let's suppose that I have a Rasa NLU model able to detect time;
e.g. "let's start tomorrow" would get the entity time: 2018-06-10:T18:39:155Z
Ok, now I want the next branches, or decisions, to be conditioned by whether:
time is in the past
time is before one month from now
time is beyond 1 month
I do not know how to do that. I do not know how to convert it into a slot able to influence the dialog. My only idea would be to have an action that converts the date to a categorical slot right after detecting time, but I see two problems with that approach:
first, it would already be too late, meaning that if I do it with a posterior action, rasa-core has already decided what decision to take without using the date;
and secondly, I do not know how to save it, because if I have a stories.md that compares a detected date like the one in the example with the current time, maybe at the time of the example it was beyond one month but now it is in the past, so the rest of that story would be wrong.
I am pretty lost and I do not know how to deal with this, thanks a lot!!!
|
I think you could have a validation step in a custom form,
where it performs validation on the time and chooses the next action based on the decision about the time.
Your stories will have to be trained to handle the different action paths.
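A minimal sketch of such a conversion action, assuming the rasa_sdk API; the question's timestamp string is garbled, so the parse format here is an assumption too:
from datetime import datetime, timedelta

from rasa_sdk import Action, Tracker
from rasa_sdk.events import SlotSet
from rasa_sdk.executor import CollectingDispatcher

class ActionCategorizeTime(Action):
    def name(self):
        return "action_categorize_time"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker, domain):
        raw = tracker.get_slot("time")          # e.g. "2018-06-10T18:39:15Z"
        t = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%SZ")
        now = datetime.utcnow()
        if t < now:
            category = "past"
        elif t < now + timedelta(days=30):
            category = "within_month"
        else:
            category = "beyond_month"
        return [SlotSet("time_category", category)]  # categorical slot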
| 0 | false | 1 | 5,557 |
2018-06-10 13:57:31.837
|
Multi-criteria alternative ranking based on mixed data types
|
I am building a recommender system which does multi-criteria ranking of car alternatives. I just need to rank the alternatives in a meaningful way. I have ways of asking the user questions via a form.
Each car will be judged on the following criteria: price, size, electric/non-electric, distance, etc. As you can see, it's a mix of various data types, including ordinal, cardinal (count) and quantitative data.
My question is as follows:
Which technique should I use for combining all the criteria into a single score which I can rank? I looked at the normalized weighted sum model, but I have a hard time assigning weights to ordinal (ranked) data. I tried using the SMARTER approach for assigning numerical weights to ordinal data, but I'm not sure if it is appropriate. Please help!
After someone helps me figure out the best ranking method: what if the best-ranked alternative isn't good enough on an absolute scale? How do I check that, so that I can enlarge the alternative set further?
3. Since the criteria mentioned above (price, etc.) are all in different units, is there a good method to normalize mixed data types belonging to different scales? Does it even make sense to do so, given that the data belongs to many different types?
any help on these problems will be greatly appreciated! Thank you!
|
I am happy to see that you are willing to use a multiple-criteria decision-making tool. You can use the Analytic Hierarchy Process (AHP), Analytic Network Process (ANP), TOPSIS, VIKOR, etc. Please refer to the relevant papers. You can also refer to my papers.
Krishnendu Mukherjee
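As a concrete illustration of one of the methods named above, here is a minimal TOPSIS sketch (my addition, not from the answer's author), assuming every criterion has already been made numeric with higher-is-better, and with illustrative weights:
import numpy as np

def topsis(matrix, weights):
    m = matrix / np.linalg.norm(matrix, axis=0)  # vector-normalize columns
    m = m * weights                              # apply criterion weights
    ideal, anti = m.max(axis=0), m.min(axis=0)   # best/worst per criterion
    d_pos = np.linalg.norm(m - ideal, axis=1)
    d_neg = np.linalg.norm(m - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness: higher = better

cars = np.array([[20000, 4.5, 1], [35000, 4.8, 0], [28000, 4.2, 1]], float)
cars[:, 0] = cars[:, 0].max() - cars[:, 0]       # flip price: higher = better
print(topsis(cars, np.array([0.5, 0.3, 0.2])))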
| -0.386912 | false | 1 | 5,558 |
2018-06-11 22:00:14.173
|
Security of SFTP packages in Python
|
There is plenty of info on how to use what seem to be third-party packages that allow you to access your SFTP server by inputting your credentials into these packages.
My dilemma is this: how do I know that these third-party packages are not sharing my credentials with developers, etc.?
Thank you in advance for your input.
|
Thanks everyone for the comments.
To distill it: unless you do a code review yourself, or you get the SFTP package from a verified vendor (i.e., packages made by Amazon for AWS), you cannot assume that these packages are "safe" and won't post your info to a third-party site.
| 1.2 | true | 1 | 5,559 |
2018-06-11 22:56:02.750
|
How to sync 2 streams from separate sources
|
Can someone point me in the right direction to where I can sync up a live video and audio stream?
I know it sounds simple, but here is my issue:
We have 2 computers streaming to a single computer across multiple networks (which can be up to hundreds of miles away).
All three computers have their system clocks synchronized using NTP
Video computer gathers video and streams UDP to the Display computer
Audio computer gathers audio and also streams to the Display computer
There is an application which accepts the audio stream. This application does two things: it plays the audio over the speakers, and it sends network-delay information to my application. I am not privy to the method by which they stream the audio.
My application displays the video and must do two other tasks (which I haven't been able to figure out how to do yet):
- I need to be able to determine the network delay on the video stream (ideally, it would be great to have a timestamp on the video stream from the Video computer, tied to that system clock, so I can compare that timestamp to my own system clock).
- I also need to delay the video display to allow it to be synced up with the audio.
Everything I have found assumes either that the audio and video are being streamed from the same computer, or that the audio stream is done by gstreamer, so I could use some sync function. I am not privy to the actual audio stream; I am only given the amount of time the audio was delayed getting there (network delay).
So, intermittently, I am given a number as the network delay for the audio (example: 250 ms). I need to be able to determine my own network delay for the video (which I don't know how to do yet). Then I need to compare, to see if the audio delay is more than the video network delay. Say the video is 100 ms; then I would need to delay the video display by 150 ms (which I also don't know how to do).
ANY HELP is appreciated. I am trying to pick up where someone else left off in this design, so it hasn't been easy for me to figure this out and move forward. Also, this is being done in Python, which further limits the information I have been able to find. Thanks.
Scott
|
A typical way to sync audio and video tracks or streams is to have a timestamp for each frame or packet, relative to the start of the streams.
This way you know that, no matter how long it took to get to you, the correct audio to match the video frame which is 20001999 (for example) milliseconds from the start is the audio which is also timestamped as 20001999 milliseconds from the start.
Trying to sync audio and video based on an estimate of the network delay will be extremely hard, as the delay is very unlikely to be constant, especially on any kind of IP network.
If you really have no timestamp information available, then you may have to investigate more complex approaches, such as 'markers' in the stream metadata, or even some intelligent analysis of the audio and video streams to sync on an event in the streams themselves.
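A minimal sketch of the timestamp approach: buffer video frames and release each one only once audio with the same (or a later) timestamp has been played. All names and the callback structure are assumptions:
import collections

video_buf = collections.deque()   # (timestamp_ms, frame) pairs
last_audio_ts = 0                 # most recent audio timestamp played

def on_video_packet(ts_ms, frame):
    video_buf.append((ts_ms, frame))

def on_audio_played(ts_ms):
    """Called by the audio side; releases video the audio has caught up to."""
    global last_audio_ts
    last_audio_ts = ts_ms
    while video_buf and video_buf[0][0] <= last_audio_ts:
        ts, frame = video_buf.popleft()
        print("display frame stamped", ts)  # replace with the real display call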
| 0 | false | 1 | 5,560 |
2018-06-12 08:22:14.127
|
Python script as service has not access to asoundrc configuration file
|
I have a Python script that records audio from an I2S MEMS microphone connected to a Raspberry Pi 3.
This script runs as expected when launched from the terminal. The problem appears when I run it as a service in the background.
From what I have seen, the problem is that the script, as a service, has no access to a software_volume device I have configured in asoundrc. The strange thing is that I can see this "device" in the list of devices using the get_device_info_by_index() function.
For audio capturing I use the pyaudio library, and for making the script a service I have used the supervisor utility.
Any ideas what the problem might be, and how I can make my script access asoundrc when it runs as a service?
|
The ~/.asoundrc file is looked for in the home directory of the current user (this is what ~ means).
Put it into the home directory of the user as which the service runs, or put the definitions into the global ALSA configuration file /etc/asound.conf.
| 1.2 | true | 1 | 5,561 |
2018-06-12 14:34:32.823
|
Odoo 10 mass mailing configure bounces
|
I'm using the Odoo 10 mass mailing module to send newsletters. I have configured it, but I don't know how to configure bounced emails. It is registering correctly sent emails, received emails (except that it is registering bounced ones as received), opens and clicks.
Can anyone please help me?
Regards
|
I managed to solve this problem. I just configured the 'bounce' system parameter to an email with the same name.
Example:
I created an email [email protected]. Also remember to configure the alias domain in your general settings to 'example.com'.
After configuring your email to register bounces, you need to configure an incoming mail server for this email (I configured it as IMAP, so I think that should do, although you can also configure it as POP). That would be it.
Hope this info serves you.
| 1.2 | true | 1 | 5,562 |
2018-06-14 15:07:58.413
|
How to predict word using trained skipgram model?
|
I'm using Google's Word2vec and I'm wondering how to get the top words that are predicted by a skipgram model that is trained using hierarchical softmax, given an input word?
For instance, when using negative sampling, one can simply multiply an input word's embedding (from the input matrix) with each of the vectors in the output matrix and take the one with the top value. However, in hierarchical softmax, there are multiple output vectors that correspond to each input word, due to the use of the Huffman tree.
How do we compute the likelihood value/probability of an output word given an input word in this case?
|
I haven't seen any way to do this, and given the way hierarchical-softmax (HS) outputs work, there's no obviously correct way to turn the output nodes' activation levels into a precise per-word likelihood estimation. Note that:
the predict_output_word() method that (sort-of) simulates a negative-sampling prediction doesn't even try to handle HS mode
during training, neither HS nor negative-sampling modes make exact predictions – they just nudge the outputs to be more like the current training example would require
To the extent you could calculate all output node activations for a given context, then check each word's unique HS code-point node values for how close they are to "being predicted", you could potentially synthesize relative scores for each word - some measure of how far the values are from a "certain" output of that word. But whether and how each node's deviation should contribute to that score, and how that score might be indicative of an interpretable likelihood, is unclear.
There could also be issues because of the way HS codes are assigned strictly by word-frequency - so 'neighbor' words sharing mostly-the-same-encoding may be very different semantically. (There were some hints in the original word2vec.c code that it could potentially be beneficial to assign HS-encodings by clustering related words to have similar codings, rather than by strict frequency, but I've seen little practice of that since.)
I would suggest sticking to negative-sampling if interpretable predictions are important. (But also remember, word2vec isn't mainly used for predictions; it just uses the training-attempts-at-prediction to bootstrap a vector-arrangement that turns out to be useful for other tasks.)
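A minimal sketch with gensim (the version and toy corpus are assumptions): training in negative-sampling mode makes predict_output_word available, which is exactly what HS mode lacks:
from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"], ["the", "lazy", "dog"]]
model = Word2Vec(sentences, vector_size=50, min_count=1,
                 sg=1, negative=5, hs=0)  # skip-gram + negative sampling
print(model.predict_output_word(["quick", "fox"], topn=3))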
| 0 | false | 1 | 5,563 |
2018-06-15 06:29:25.043
|
ImportError: cannot import name _remove_dead_weakref python 2.7
|
I use Windows 7 and Python 2.7.
When I used py2exe to make an .exe file, I got the error:
Traceback (most recent call last):
File "mainpy", line 17, in
File "main.py", line 17, in
File "zipextimporter.pyc", line 82, in load_module
File "zipextimporter.pyc", line 82, in load_module
File "logging_init_.pyc", line 26, in
File "zipextimporter.pyc", line 82, in load_module
File "weakref.pyc", line 14, in
ImportError: cannot import name _remove_dead_weakref
The same code could be used to make an .exe file on another computer, so there is nothing wrong with the code in main.py. Some minor environmental difference may cause this problem. I used PyCharm, Python 2.7.10 and py2exe 0.6.9. On the other computer all other configs are the same, except it uses Sublime Text instead of PyCharm.
Could anyone please tell me how to fix that?
Another tricky thing is that
|
It is possible that the library does not exist on the other computer. Please check whether the library exists or not.
| 0 | false | 1 | 5,564 |
2018-06-15 08:06:21.200
|
finding length of linked list in constant time python
|
I'm trying to write a function which finds the length of a linked list in O(1).
I know how to implement it in O(n) but I can't figure out how to do it in constant time... is that even possible?
|
It's not possible with a plain traversal, because you have to pass through the entire linked list at least once, and that takes O(n).
Otherwise, you have to use a variable which counts elements as they are inserted into the linked list.
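A minimal sketch of the counter approach: maintain the size on every mutation so asking for the length becomes O(1):
class Node:
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

class LinkedList:
    def __init__(self):
        self.head = None
        self._size = 0                    # updated on every mutation

    def push_front(self, value):
        self.head = Node(value, self.head)
        self._size += 1

    def __len__(self):                    # O(1): just return the counter
        return self._size

lst = LinkedList()
for v in (1, 2, 3):
    lst.push_front(v)
print(len(lst))  # 3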
| 0 | false | 1 | 5,565 |
2018-06-15 21:13:27.137
|
Accessing Hidden Tabs, Web Scraping With Python 3.6
|
I'm using bs4 and urllib.request in Python 3.6 to web-scrape. I have to open tabs / be able to toggle an "aria-expanded" attribute on button tabs in order to access the div tabs I need.
The button tag when the tab is closed is as follows (angle brackets omitted):
button id="0-accordion-tab-0" type="button" class="accordion-panel-title u-padding-ver-s u-text-left text-l js-accordion-panel-title" aria-controls="0-accordion-panel-0" aria-expanded="false"
When opened, aria-expanded="true" and the div tab appears underneath.
Any idea on how to do this?
Help would be super appreciated.
|
BeautifulSoup is used to parse HTML/XML content; you can't click around on a webpage with it.
I recommend you look through the document to make sure it isn't just moving the content from one place to the other. If the content is loaded through AJAX when the button is clicked, then you will have to use something like Selenium to trigger the click.
An easier option could be to check what URL the content is fetched from when you click the button, and make a similar call in your script if possible.
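A minimal Selenium sketch using the ids from the question's markup (the page URL is an assumption; find_element_by_id is the Selenium 3-era API):
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/page")                  # assumed target page
driver.find_element_by_id("0-accordion-tab-0").click()  # aria-expanded -> true
panel = driver.find_element_by_id("0-accordion-panel-0")
print(panel.text)
driver.quit()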
| 0 | false | 1 | 5,566 |
2018-06-16 19:30:32.583
|
How do I close down a Python server built using Flask
|
When I run this simple code:
from flask import Flask,render_template
app = Flask(__name__)
@app.route('/')
def index():
return 'this is the homepage'
if __name__ == "__main__":
app.run(debug=True, host="0.0.0.0",port=8080)
It works fine, but when I close it using Ctrl+Z in the terminal and try to run it again, I get OSError: [Errno 98] Address already in use.
So I tried changing the port and re-running it, which works for some of the port numbers I enter. But I want to know a graceful way to clear the address being used by the previous program so that it is free for the current one.
Also, what is the proper way to shut down a server and free the port address?
Kindly tell me a simple way to do so, OR explain the method fully, because I read solutions to similar problems but didn't understand any of them.
When I run
netstat -tulpn
The output is :
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3689 0.0.0.0:* LISTEN 4361/rhythmbox
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 :::3689 :::* LISTEN 4361/rhythmbox
udp 0 0 0.0.0.0:5353 0.0.0.0:* 3891/chrome
udp 0 0 0.0.0.0:5353 0.0.0.0:* -
udp 0 0 0.0.0.0:39223 0.0.0.0:* -
udp 0 0 127.0.1.1:53 0.0.0.0:* -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 0.0.0.0:631 0.0.0.0:* -
udp 0 0 0.0.0.0:58140 0.0.0.0:* -
udp6 0 0 :::5353 :::* 3891/chrome
udp6 0 0 :::5353 :::* -
udp6 0 0 :::41938 :::* -
I'm not sure how to interpret it.
the output of ps aux | grep 8080
is :
shreyash 22402 0.0 0.0 14224 928 pts/2 S+ 01:20 0:00 grep --color=auto 8080
I don't know how to interpret it.
Which one is the process name, and what is its ID?
|
It stays alive because you're not closing it. Ctrl+Z suspends the process and detaches it from the current terminal without killing it, so it keeps holding the port.
To stop the execution, use Ctrl+C.
| 0.201295 | false | 2 | 5,567 |
2018-06-16 19:30:32.583
|
How do I close down a Python server built using Flask
|
When I run this simple code:
from flask import Flask,render_template
app = Flask(__name__)
@app.route('/')
def index():
return 'this is the homepage'
if __name__ == "__main__":
app.run(debug=True, host="0.0.0.0",port=8080)
It works fine, but when I close it using Ctrl+Z in the terminal and try to run it again, I get OSError: [Errno 98] Address already in use.
So I tried changing the port and re-running it, which works for some of the port numbers I enter. But I want to know a graceful way to clear the address being used by the previous program so that it is free for the current one.
Also, what is the proper way to shut down a server and free the port address?
Kindly tell me a simple way to do so, OR explain the method fully, because I read solutions to similar problems but didn't understand any of them.
When I run
netstat -tulpn
The output is :
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.1.1:53 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3689 0.0.0.0:* LISTEN 4361/rhythmbox
tcp6 0 0 ::1:631 :::* LISTEN -
tcp6 0 0 :::3689 :::* LISTEN 4361/rhythmbox
udp 0 0 0.0.0.0:5353 0.0.0.0:* 3891/chrome
udp 0 0 0.0.0.0:5353 0.0.0.0:* -
udp 0 0 0.0.0.0:39223 0.0.0.0:* -
udp 0 0 127.0.1.1:53 0.0.0.0:* -
udp 0 0 0.0.0.0:68 0.0.0.0:* -
udp 0 0 0.0.0.0:631 0.0.0.0:* -
udp 0 0 0.0.0.0:58140 0.0.0.0:* -
udp6 0 0 :::5353 :::* 3891/chrome
udp6 0 0 :::5353 :::* -
udp6 0 0 :::41938 :::* -
I'm not sure how to interpret it.
the output of ps aux | grep 8080
is :
shreyash 22402 0.0 0.0 14224 928 pts/2 S+ 01:20 0:00 grep --color=auto 8080
I don't know how to interpret it.
Which one is the process name, and what is its ID?
|
You will have another process listening on port 8080. You can check what it is and kill it; you can find processes listening on ports with netstat -tulpn. Before you do that, check to make sure you don't have another terminal window open with the running instance.
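A minimal sketch of finding and killing whatever holds the port, using psutil (an assumption; lsof -i :8080 is a shell equivalent). Note that net_connections may need elevated rights for processes you don't own:
import psutil

for conn in psutil.net_connections(kind="tcp"):
    if conn.laddr.port == 8080 and conn.pid:
        psutil.Process(conn.pid).terminate()  # free the port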
| -0.101688 | false | 2 | 5,567 |
2018-06-18 05:46:38.073
|
How to print all recieved post request include headers in python
|
I am a Python newbie, and I have a controller that receives POST requests.
I try to print the received requests to a log file. I am able to print the body, but how can I extract the whole request, including the headers?
I am using request.POST.get() to get the body/data from the request.
Thanks
|
request.POST should give you the POST body; if it is a GET request, use request.GET.
If the request body is JSON, use request.data.
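A minimal sketch for a Django-style view (the framework is an assumption, inferred from request.POST): the headers live in request.META under HTTP_* keys, and the raw payload in request.body:
import logging

logger = logging.getLogger(__name__)

def log_request(request):
    for key, value in request.META.items():
        if key.startswith("HTTP_"):            # the header lines
            logger.info("%s: %s", key, value)
    logger.info("body: %s", request.body)      # raw request payload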
| -0.201295 | false | 1 | 5,568 |
2018-06-18 09:08:56.737
|
Add conda to my environment variables or path?
|
I am having trouble adding conda to my environment variables on Windows. I installed Anaconda 3, but I didn't install Python separately, so neither pip nor pip3 works in my prompt. I viewed a few posts online, but I didn't find anything regarding how to add conda to my environment variables.
I tried to create a PYTHONPATH variable which contained every single folder in Anaconda 3, but it didn't work.
My Anaconda prompt isn't working either. :(
So... how do I add conda and pip to my environment variables or PATH?
|
Thanks guys for helping me out. I solved the problem by reinstalling Anaconda (several times :[ ), cleaning every log, and resetting the path variables via set path= in Windows PowerShell (since I got some problems reinstalling Anaconda when adding the folder to PATH, specifically an "unable to load menus" error or something like that).
| 0 | false | 1 | 5,569 |
2018-06-18 16:13:18.567
|
getting "invalid environment marker" when trying to install my python project
|
I'm trying to set up a beta environment on Heroku for my Django-based project, but when I install I am getting:
error in cryptography setup command: Invalid environment marker: python_version < '3'
I've done some googling, and it is suggested that I upgrade setuptools, but I can't figure out how to do that. (Putting setuptools in requirements.txt gives me a different error message.)
Sadly, I'm still on Python 2.7, if that matters.
|
The problem ended up being the Heroku "buildpack" I was using. I had been using the one from "thenovices" for a long time so that I could use numpy, scipy, etc.
Sadly, that buildpack specifies old versions of setuptools and Python, and those versions do not understand some of the newer instructions (python_version) in the current setup files for cryptography.
If you're facing this problem, Heroku's advice is to move to Docker-based Heroku, rather than "traditional" Heroku.
| 1.2 | true | 1 | 5,570 |
2018-06-19 10:47:23.783
|
how to use the Werkzeug debugger in postman?
|
I am building a Flask REST API, and I am using Postman to make HTTP POST requests to my API. I want to use the Werkzeug debugger, but Postman won't let me enter the debugging PIN and debug the code from Postman. What can I do?
|
I have never needed any debugger for Postman; it is not the tool for stepping through the long stretch of code behind one endpoint.
It does give a good option: the console. I have never experienced any trouble that this simple element didn't help me with so far.
| 0 | false | 1 | 5,571 |
2018-06-19 13:14:35.270
|
Importing Numpy into Sublime Text 3
|
I'm new to coding, and I have been learning it on Jupyter. I have Anaconda, Sublime Text 3, and the numpy package installed on my Mac.
On Jupyter, we would import numpy by simply typing
import numpy as np
However, this doesn't seem to work in Sublime, as I get the error ModuleNotFoundError: No module named 'numpy'.
I would appreciate it if someone could guide me on how to get this working. Thanks!
|
If you have Anaconda, install Spyder.
If you continue to have this problem, you could check all the libs installed from Anaconda.
I suggest you install numpy from Anaconda (e.g. conda install numpy).
| 0.386912 | false | 1 | 5,572 |
2018-06-19 18:36:14.277
|
dataframe from underlying script not updating
|
I have a script called "RiskTemplate.py" which generates a pandas dataframe consisting of 156 columns. I created two additional columns, which gives me a total count of 158 columns. However, when I run this "RiskTemplate.py" script from another script using the code below, the dataframe only pulls the original 156 columns I had before the two additional columns were added.
exec(open("RiskTemplate.py").read())
How can I get the reference script to pull in the revised dataframe from the underlying script "RiskTemplate.py"?
Here are the lines creating the two additional dataframe columns; they work as intended when I run them directly in the "RiskTemplate.py" script. The original dataframe is pulled from SQL via df = pd.read_sql(query, connection).
df['LMV % of NAV'] = df['longmv']/df['End of Month NAV']*100
df['SMV % of NAV'] = df['shortmv']/df['End of Month NAV']*100
|
I figured it out; sorry for the confusion. I had not saved the RiskTemplate.py I updated into the same folder the reference script was reading from! Newbie!
| 0.386912 | false | 1 | 5,573 |
2018-06-20 01:59:58.440
|
Python regex to match words not having dot
|
I want to accept only those strings having the pattern 'wild.flower', 'pink.flower', ... i.e., any word preceding '.flower', but the word should not contain a dot. For example, "pink.blue.flower" is unacceptable. Can anyone help with how to do this in Python using regex?
|
Your case of pink.blue.flower is unclear. There are 2 possibilities:
Match only blue (cut off the preceding dot and what came before).
Reject this case altogether (you want to match a word preceding .flower only if it is not preceded by a dot).
In the first case, accept the other answers.
But if you want the second behavior, use: \b(?<!\.)[a-z]+(?=\.flower).
Description:
\b - Start from a word boundary (but this alone allows the "after a dot" case).
(?<!\.) - Negative lookbehind - excludes the "after a dot" case.
[a-z]+ - Match a sequence of letters.
(?=\.flower) - Positive lookahead for .flower.
I assumed that you have only lower-case letters; if that is not the case, add the i (case-insensitive) option.
Another remark: other answers include \w, which also matches digits and _, or even [^\.] - any char other than a dot (including e.g. \n). Are you happy with that? If you aren't, change it to [a-z] (again, maybe with the i option).
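A minimal sketch exercising that pattern on the question's examples:
import re

pattern = re.compile(r"\b(?<!\.)[a-z]+(?=\.flower)")
for s in ["wild.flower", "pink.flower", "pink.blue.flower"]:
    print(s, "->", pattern.findall(s))
# wild.flower -> ['wild'], pink.flower -> ['pink'], pink.blue.flower -> []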
| 0 | false | 2 | 5,574 |
2018-06-20 01:59:58.440
|
Python regex to match words not having dot
|
I want to accept only those strings having the pattern 'wild.flower', 'pink.flower', ... i.e., any word preceding '.flower', but the word should not contain a dot. For example, "pink.blue.flower" is unacceptable. Can anyone help with how to do this in Python using regex?
|
You are looking for "^\w+\.flower$".
| 0.16183 | false | 2 | 5,574 |
2018-06-20 07:09:48.600
|
Conda package unavailable for my OS
|
I am trying to reproduce the results of someone else's Python code for a project. I have the entire setup: conda on my machine, the virtual environment .yml file, the relevant packages, and the data.
However, the code relies on one package from the conda repo that is only available for Linux, and not macOS. I'm confused, and I'm looking for any way I could still use this package on my Mac. Googling doesn't seem to help. The package does have a GitLab page with the code given there, but I don't know how to use it. Any advice/help would be appreciated!
|
I suggest you try to install it using pip / pip3.
Create your conda environment; here's a random example:
conda create -n ENVIRONMENTNAME python=3.6 numpy pandas
and then...
If you are using Python 2:
pip install tfbio
If you are using Python 3:
pip3 install tfbio
| 1.2 | true | 1 | 5,575 |
2018-06-20 13:27:49.297
|
Unable to import sikuli library in RIDE
|
I have to write automation scripts using Python and Robot Framework. I have installed Python, Robot Framework, RIDE and wxPython. I have installed the Sikuli library, but when I import it in my project, the library is not imported. I have tried 'Import Library Spec XML'. My question is: where do I import this .xml from, or how do I create it?
|
First check whether Sikuli is installed in the Python directory's \Lib\site-packages.
The Robot test should contain something like this:
*** Settings ***
Documentation    Sikuli Library Demo
Library          SikuliLibrary    mode=NEW
*** Test Cases ***
Sample_Sikuli_Test
    blabh blabh etc
| 0 | false | 1 | 5,576 |
2018-06-21 01:45:25.967
|
Python: Reading Excel and automatically turning a string into a Date object?
|
I'm using the openpyxl library in Python, and I'm trying to read in the value of a cell. The cell's value is a date in the format MM/DD/YYYY. I would like the value to be read into my script simply as a string (i.e. "8/6/2014"), but instead Python is somehow automatically reading it as a date object (the result is "2014-08-06 00:00:00"). I don't know if this is something I need to fix in Excel or Python, but how do I get the string I'm looking for?
|
I would suggest changing it in your Excel file if you want to preserve what is read in by openpyxl. That said, when a cell has been formatted as a date in Excel, the data is altered to fit the specified format, so you've lost the initial string format in either case.
For example, let's say that the user enters the date 1/1/2018 into a cell that is formatted MM/DD/YYYY; Excel will change the data to 01/01/2018, and you will lose the original string that was entered.
If you only care to see data of the form MM/DD/YYYY, an alternate solution would be to cast the date with date_cell.strftime("%m/%d/%Y"), where date_cell is the datetime value read from the cell.
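A minimal openpyxl sketch (the file and cell are assumptions):
from openpyxl import load_workbook

wb = load_workbook("data.xlsx")
cell = wb.active["A1"]               # a cell Excel formatted as a date
value = cell.value                   # openpyxl hands back a datetime
print(value.strftime("%m/%d/%Y"))    # back to an "08/06/2014"-style string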
| 0.386912 | false | 1 | 5,577 |
2018-06-21 13:37:54.770
|
How to handle exceptions in Robot Framework?
|
I want to understand how exception handling is done in Robot Framework. I want to handle exceptions from multiple test cases in some generic way.
|
You can use the below two keywords for that:
Run Keyword And Continue On Failure
Run Keyword And Ignore Error
based on your requirement; however, I suggest going with the 2nd one, as you'll be able to store the output and status.
| 0 | false | 1 | 5,578 |
2018-06-22 13:28:57.437
|
How to send ACK with data payload using python
|
I am trying to send an ACK with a data payload using the socket library but I cannot understand how to do it.
Is this supported and if not, what are the alternatives?
|
You can't do this. The socket library exposes high-level APIs like:
bind()
listen()
accept() ... etc.
The ACKs are handled for you by the TCP stack.
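As for the alternatives the question asks about, crafting the raw segment yourself is one route. A minimal sketch with scapy (requires root; the addresses, ports, and sequence numbers are illustrative assumptions):
from scapy.all import IP, TCP, Raw, send

pkt = (IP(dst="192.0.2.10")
       / TCP(sport=5555, dport=80, flags="PA",   # PSH+ACK flags set
             seq=1000, ack=2000)
       / Raw(load=b"hello"))                     # the data payload
send(pkt)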
| 0 | false | 1 | 5,579 |
2018-06-22 17:43:54.400
|
module can't be installed in Django virtual environment
|
I used pip install django-celery and pip3 install django-celery in PyCharm.
After that I used import djcelery, but PyCharm tells me there is no module named djcelery.
Then I ran pip list, and I can see django-celery 3.2.2 in the list.
But when I went to the virtual environment path myenv/lib/site-packages, where I can see all the modules and apps I have installed (such as django-pure-pagination), I can't find django-celery there.
Does any friend have any idea how to fix it?
|
Seems like you've installed django-celery in another environment. Try to install it with PyCharm:
File > Settings > Project > Project Interpreter.
| 1.2 | true | 1 | 5,580 |
2018-06-22 18:41:42.363
|
Getting IDs from t-SNE plot?
|
Quite simple,
If I perform t-SNE in Python for high-dimensional data then I get 2 or 3 coordinates that reflect each new point.
But how do I map these to the original IDs?
One way that I can think of is if the indices are kept fixed the entire time, then I can do:
Pick a point in t-SNE
See what row it was in t-SNE (e.g. index 7)
Go to original data and pick out row/index 7.
However, I don't know how to check if this actually works. My data is super high-dimensional and it is very hard to make sense of it with a normal "sanity check".
Thanks a lot!
Best,
|
If you are using sklearn's t-SNE, then your assumption is correct. The ordering of the inputs matches the ordering of the outputs. So if you do y = TSNE(n_components=n).fit_transform(x), then y and x will be in the same order, and y[7] will be the embedding of x[7]. You can trust scikit-learn that this will be the case.
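A minimal sketch confirming the row ordering with scikit-learn on random data:
import numpy as np
from sklearn.manifold import TSNE

x = np.random.rand(100, 50)            # 100 samples, 50 dimensions
y = TSNE(n_components=2).fit_transform(x)
print(x.shape, y.shape)                # (100, 50) (100, 2): same row order
print("embedding of sample 7:", y[7])  # corresponds to x[7]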
| 0.386912 | false | 1 | 5,581 |
2018-06-22 19:56:07.460
|
how to print the first lines of a large XML?
|
I have a large XML file on my drive. The file is too large to be opened with Sublime Text or other text editors.
It is also too large to be loaded into memory by the regular XML parsers.
Therefore, I don't even know what's inside it!
Is it possible to just "print" a few rows of the XML file (as if it were some sort of text document) so that I have an idea of the nodes/content?
I am surprised not to find an easy solution to this issue.
Thanks!
|
This is one of the few things I ever do on the command line: the "more" command is your friend. Just type
more big.xml
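The same idea in Python, as a minimal sketch that streams the first lines without loading the whole file:
from itertools import islice

with open("big.xml", encoding="utf-8", errors="replace") as f:
    for line in islice(f, 20):     # only the first 20 lines are read
        print(line.rstrip())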
| 0.135221 | false | 1 | 5,582 |
2018-06-25 05:26:40.297
|
Two python3 interpreters on win10 cause misunderstanding
|
I use Windows 10. When I installed Visual Studio 2017, I configured the Python 3 environment. Then, after half a year, I installed Anaconda (Python 3) in another directory. Now I have two interpreters in different directories.
Now, no matter which IDE I write the code in, after I save a Python file and double-click it in the directory, the file is run by the interpreter configured by VS2017.
How do I know that? I use sys.path to find out. But when I use VS2017 to run the code, it shows no mistake. A realistic example: I pip install requests in cmd, then import it in a Python file. Only when I double-click the file does the traceback say I don't have this module. In other cases it works well.
So, how do I change the default Python interpreter used by cmd.exe?
|
Just changing the order of the Python interpreters in PATH is enough.
If you want to use Python further, I suggest you use virtual environment tools like pipenv to control your Python interpreters and modules.
| 0 | false | 1 | 5,583 |
2018-06-25 07:13:06.080
|
How can I update the Python version when working with jGRASP on macOS?
|
When I installed the new version of Python, 3.6.5, jGRASP kept using the previous version. How can I make jGRASP use the new version?
|
By default, jGRASP will use the first "python" on the system path.
The new version probably only exists as "python3". If that is the case, install jGRASP 2.0.5 Beta if you are using 2.0.4 or a 2.0.5 Alpha. Then, go to "Settings" > "Compiler Settings" > "Workspace", select language "Python" if not already selected, select environment "Python 3 (python 3) - generic", hit "Use" button, and "OK" the dialog.
| 0 | false | 1 | 5,584 |
2018-06-25 13:30:47.293
|
Passing command line parameters to python script from html page
|
I have an HTML page with a text box and a submit button. When somebody enters data in the text box and clicks submit, I have to pass that value to a Python script which does some operation and prints output. Can someone let me know how to achieve this? I did some research on Stack Overflow/Google, but nothing conclusive. I have Python 2.7, Windows 10 and Apache Tomcat. Any help would be greatly appreciated.
Thanks,
Jagadeesh.K
|
Short answer: you can't just run a Python script in the client's browser. It doesn't work that way.
If you want to execute some Python when the user does something, you will have to run a web app, like the other answer suggested.
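A minimal Flask sketch of that web-app approach (the route and the field name, matching a hypothetical input named "textbox", are assumptions):
from flask import Flask, request

app = Flask(__name__)

@app.route("/submit", methods=["POST"])
def submit():
    value = request.form["textbox"]   # value typed into the HTML text box
    return "You sent: " + value       # call your processing code here instead

if __name__ == "__main__":
    app.run()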
| 0 | false | 1 | 5,585 |
2018-06-26 09:53:17.980
|
How to uninstall (mini)conda entirely on Windows
|
I was surprised to be unable to find any information anywhere on the web on how to do this properly, but I suppose my surprise ought to be mitigated by the fact that normally this can be done via Microsoft's 'Add or Remove Programs' in the Control Panel.
That option is not available to me at this time, since I had installed Python again elsewhere (without having uninstalled it), then uninstalled that installation the standard way. Now, despite there being no option for uninstalling conda via the Control Panel, conda persists in my command line.
The goal now is to remove every trace of it, to end up in a state as though conda never existed on my machine, before I reinstall it to the necessary location.
I have a bad feeling that if I simply delete the files and then reinstall, this will cause problems. Does anyone have any guidance on how to achieve the above?
|
Open the folder where you installed Miniconda, then search for uninstall.exe. Run it and it will erase Miniconda for you.
| 0.995055 | false | 1 | 5,586 |
2018-06-27 02:35:38.367
|
protobuf, and tensorflow installation, which version to choose
|
I have already installed Python 3.5.2 and TensorFlow (with Python 3.5.2).
I want to install protobuf now. However, protobuf supports Python 3.5.0, 3.5.1 and 3.6.0.
I wonder which version I should install.
My question is: should I upgrade Python 3.5.2 to Python 3.6, or downgrade it to Python 3.5.1?
I have seen some people trying to downgrade Python 3.6 to 3.5.
I googled how to change Python 3.5.2 to 3.5.1, but found no valuable information. I guess this is not a usual option.
|
So it is a version problem.
One Google post says to change the Python version to a more general version.
I am not sure how to change Python 3.5.2 to 3.5.1, so I just installed protobuf 3.6.
I hope it works.
| 0 | false | 1 | 5,587 |
2018-06-27 06:09:44.330
|
How to Resume Python Script After System Reboot?
|
I'm still new to writing scripts with Python and would really appreciate some guidance.
I'm wondering how to continue executing my Python script from where it left off after a system restart.
The script essentially alternates between restarting and executing a task, for example: restart the system, open an application and execute a task, restart the system, open another application and execute another task, etc.
But the issue is that once the system restarts and logs back in, all applications shut down, including the terminal, so the script stops running and never executes the following task. The program shuts down early without an error, so the logs are not really of much use. Is there any way to reopen the script and continue from where it left off, or to prevent applications from being closed during a reboot? Any guidance on the issue would be appreciated.
Thanks!
Also, I'm using a Mac running High Sierra for reference.
|
You could write your current progress to a file just before you reboot and read that file on program start.
As for the automatic restart of the script after reboot: you could have the script put itself in the autostart of your system and, after everything is done, remove itself from it.
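A minimal sketch of the progress-file idea (the path and state shape are assumptions):
import json
import os

STATE = os.path.expanduser("~/.task_state.json")

def load_step():
    if os.path.exists(STATE):
        with open(STATE) as f:
            return json.load(f)["step"]
    return 0

def save_step(step):
    with open(STATE, "w") as f:
        json.dump({"step": step}, f)

step = load_step()
# ... run the remaining tasks from this index, calling save_step(i + 1)
# just before each reboot ...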
| 0 | false | 1 | 5,588 |
2018-06-29 09:49:04.483
|
Incorrect UTC date in MongoDB Compass
|
I package my Python (Flask) application with Docker. Within my app I generate UTC dates with the datetime library, using datetime.utcnow().
Unfortunately, when I inspect the saved data with MongoDB Compass, the UTC date is offset by two hours (to my local time zone). All my Docker containers have the time zone set to Etc/UTC. Moreover, the mongoengine connection to MongoDB uses tz_aware=False and tzinfo=None, which prevents on-the-fly date conversions.
Where does the offset come from, and how do I fix it?
|
Finally, after trying to prove myself wrong and losing some hair, I found the cause of and solution to my problem.
We live in a world of illusion, and what you see is not what you get! I decided to inspect my data over the mongo shell client rather than the MongoDB Compass GUI. I figured out that the data that arrived in the database contained the correct UTC date. This eliminated all my previous assumptions that there had to be something wrong with my Python application or the environment the application lives in. What was left was MongoDB Compass itself.
After changing the time zone on my machine to a random time zone and refreshing the collection within MongoDB Compass, the displayed UTC date changed to a date that fits the random time zone.
Be aware that MongoDB Compass displays whatever is saved in the database's Date field, shifted by your machine's time zone. For example, if you saved a UTC time equivalent to 8:00 am and your machine's time zone is Europe/Warsaw, then MongoDB Compass will display 10:00 am.
| 1.2 | true | 1 | 5,589 |
2018-07-01 07:10:49.220
|
How to replace a string in all columns using pandas?
|
In pandas, how do I replace '&amp;' with '&' in all columns, where '&amp;' could be at any position in a string?
For example, in the column Title, if there is a value 'Good &amp; bad', how do I replace it with 'Good & bad'?
|
Try this:
df['Title'] = df['Title'].str.replace("&amp;", "&")
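To cover every column at once, as the title asks, a minimal sketch using DataFrame.replace with regex=True, which substitutes inside strings rather than only on exact whole-cell matches:
import pandas as pd

df = pd.DataFrame({"Title": ["Good &amp; bad"], "Note": ["up &amp; down"]})
df = df.replace("&amp;", "&", regex=True)  # every column, any position
print(df)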
| 0 | false | 1 | 5,590 |
2018-07-01 23:33:29.923
|
Binance API: how to get the USD as the quote asset
|
I'm wondering what the symbol is or if I am even able to get historical price data on BTC, ETH, etc. denominated in United States Dollars.
Right now, if I make a call to the client such as:
Client.get_symbol_info('BTCUSD')
it returns nothing.
Does anyone have any idea how to get this info? Thanks!
|
You cannot make trades on Binance with dollars, but you can with Tether (USDT), a cryptocurrency that is backed 1-to-1 with the dollar.
To solve this, use BTCUSDT:
change BTCUSD to BTCUSDT.
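A minimal sketch with the python-binance client (the keys are placeholders):
from binance.client import Client

client = Client("YOUR_API_KEY", "YOUR_API_SECRET")
info = client.get_symbol_info("BTCUSDT")   # returns data, unlike "BTCUSD"
print(info["status"], info["baseAsset"], info["quoteAsset"])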
| 0.995055 | false | 1 | 5,591 |
2018-07-02 10:22:40.247
|
How can I measure the thickness of a character in an image using Python OpenCV?
|
I created one task, where I have white background and black digits.
I need to take the largest by thickness digit. I have made my picture bw, recognized all symbols, but I don't understand, how to scale thickness. I have tried arcLength(contours), but it gave me the largest by size. I have tried morphological operations, but as I undestood, it helps to remove noises and another mistakes in picture, right? And I had a thought to check the distance between neighbour points of contours, but then I thought that it would be hard because of not exact and clear form of symbols(I draw tnem on paint). So, that's all Ideas, that I had. Can you help me in this question by telling names of themes in Comp. vision and OpenCV, that could help me to solve this task? I don't need exact algorithm of solution, only themes. And if that's not OpenCV task, so which is? What library? Should I learn some pack of themes and basics before the solution of my task?
|
One possible solution that I can think of is to alternate erosion and finding contours until you have only one contour left (that should be the thickest). This could work if the difference in thickness is large enough, but I can also foresee many particular cases that could prevent a correct identification, so it depends very much on your original image.
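A minimal sketch of that erode-until-one-survivor idea (the file name and threshold are assumptions; the two-value findContours return is the OpenCV 4 signature):
import cv2
import numpy as np

img = cv2.imread("digits.png", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)  # digits white
kernel = np.ones((3, 3), np.uint8)

work = bw.copy()
while True:
    contours, _ = cv2.findContours(work, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if len(contours) <= 1:
        break                        # the last survivor is the thickest digit
    work = cv2.erode(work, kernel)   # thin every stroke by one pixel layer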
| 0.201295 | false | 1 | 5,592 |
2018-07-02 13:55:01.080
|
django inspectdb: how to write multiple table names during inspection
|
When I first execute this command, it creates a model in my models.py. But when I call it a second time for another table in the same models.py file, the second table's model replaces the first. Can anyone tell me the reason behind this? I am not able to find a proper solution.
$ python manage.py inspectdb tablename > v1/projectname/models.py
Executing this command a second time for another table replaces the first table's model:
$ python manage.py inspectdb tablename2 > v1/projectname/models.py
|
python manage.py inspectdb table1 table2 table3... > app_name/models.py
Apply this command to inspect multiple tables of one database in Django. (The > redirection overwrites the output file on every run, which is why your second invocation replaced the first model; passing all the tables in one command avoids that.)
| 0 | false | 1 | 5,593 |
2018-07-02 17:04:29.297
|
Count Specific Values in Dataframe
|
If I have a column in a dataframe, and that column contains two possible categorical values, how do I count how many times each value appears?
So, e.g., how do I count how many of the participants in the study were male or female?
I've tried value_counts, groupby, len, etc., but I seem to be getting it wrong.
Thanks
|
You could use len([x for x in df["Sex"] if x == "Male"]). This iterates through the Sex column of your dataframe and determines whether each element is "Male" or not. If it is, it is appended to a list via the list comprehension. The length of that list is the number of males in your dataframe.
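A minimal sketch of that approach next to the idiomatic pandas one (the column name and values are assumptions):
import pandas as pd

df = pd.DataFrame({"Sex": ["Male", "Female", "Male", "Male"]})
print(len([x for x in df["Sex"] if x == "Male"]))  # 3
print(df["Sex"].value_counts())                    # count for every category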
| 0 | false | 1 | 5,594 |
2018-07-03 17:27:42.043
|
Which newline character is in my CSV?
|
We receive a .tar.gz file from a client every day, and I am rewriting our import process using SSIS. One of the first steps in my process is to unzip the .tar.gz file, which I achieve via a Python script.
After unzipping, we are left with a number of CSV files which I then import into SQL Server. As an aside, I am loading using the CozyRoc DataFlow Task Plus.
Most of my CSV files load without issue, but I have five files which fail. By reading the log I can see that the process is reading the header and first line as though there is no header-row delimiter (i.e. it is trying to import the column header as ColumnHeader1ColumnValue1).
I took one of these CSVs, copied the top 5 rows into Excel, used Text-To-Columns to delimit the data, then saved that as a new CSV file.
This version imported successfully.
That makes me think that somehow the original CSV isn't using {CR}{LF} as the row delimiter, but I don't know how to check. Any suggestions?
|
Seeing that you have EmEditor, you can use EmEditor to find the EOL character in two ways:
Use View > Character Code Value... at the end of a line to display a dialog box showing information about the character at the current position.
Go to View > Marks and turn on Newline Characters and CR and LF with Different Marks to show the EOL while editing. LF is displayed with a down arrow, while CRLF is a right angle.
Some other things you could check for are: the file encoding, a wrong type of data for a field, and an inconsistent number of columns.
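An alternative check in Python, as a minimal sketch: read the file in binary mode and count the line-ending styles directly (the file name is an assumption):
with open("suspect.csv", "rb") as f:
    data = f.read()
crlf = data.count(b"\r\n")
lf_only = data.count(b"\n") - crlf   # bare LF endings
cr_only = data.count(b"\r") - crlf   # bare CR endings
print(f"CRLF: {crlf}, bare LF: {lf_only}, bare CR: {cr_only}")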
| 0 | false | 1 | 5,595 |
2018-07-03 18:21:44.653
|
Calling custom C subroutines in a Python application
|
I have two custom-written C routines that I would like to use as a part of a large Python application. I would prefer not to rewrite the C code in pure Python (or Cython, etc.), especially to maintain speed.
What is the cleanest, easiest way that I can use my C code from my Python code? Or, what is the cleanest, easiest way for me to wrap my C code for use in my Python source?
I know "cleanest" and "easiest" will attract opinions, but I really just need some good options for using custom pre-written code, versus many of the other answers/tutorials which describe how to use full-on C libraries as CPython extensions.
EDIT:
Cython and ctypes have both been suggested. Which is the better choice in my case? Each of the two routines I mentioned originally is very computationally intensive. They are used for image calculations and reconstructions, so my plan is to build a Python application around their use (with other functionality in mind that I already have in Python), with the C code run as needed for processing.
|
Use Cython to wrap your C code. In other words, create a CPython extension using Cython that calls your C code.
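A minimal sketch of such a wrapper, assuming a C routine declared in mylib.h (all names illustrative); compiling the .pyx with cythonize yields an importable extension module:
# file: fastimg.pyx
cdef extern from "mylib.h":
    double reconstruct(double* data, int n)

def py_reconstruct(double[::1] data):
    # hand the C routine a pointer to the buffer's first element
    return reconstruct(&data[0], <int>data.shape[0])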
| 1.2 | true | 1 | 5,596 |
2018-07-04 00:03:40.780
|
kubernetes architecture for microservices application - suggestions
|
I have been asked to create a system which has different functionalities. Assume service 1, service 2 and service 3. I need to run these services every hour to do something.
To build the system of those services I need: a database, a web interface for seeing the results of the process, caching, etc.
This is what I have thought about so far:
I need Kubernetes to orchestrate my services, which are packaged as Docker containers. I will deploy MySQL to save my data, and I can use a Redis cache for caching.
My services are written as Python scripts and Java and need to interact with each other through APIs.
I think I can use AWS EKS for my Kubernetes cluster.
This is what I need to know:
how to deploy Python or Java applications, connect them to each other, and connect them to a database service;
I also need to know how to schedule the applications to run every hour, so I can see the results in the web interface.
Please shoot any ideas or questions you have.
Any help would be appreciated.
|
For the Python/Java applications, create Docker images for both. If these applications run forever to serve traffic, deploy them as Deployments. If you need only cron-like functionality, deploy them as a Job (or CronJob) in Kubernetes.
To make the applications accessible, create Services as selectors for them, so these Services can route traffic to specific applications.
The database or cache should be exposed as Service endpoints, so your applications are environment-independent.
| 0.386912 | false | 1 | 5,597 |
2018-07-04 12:45:42.993
|
search_s search_ext_s search_s methods of python-ldap library doesn't return any Success response code
|
I am using the search_ext_s() method of python-ldap to search for results on the basis of a filter_query. Upon completion of the search I get a msg_id, which I pass to the result function like this: ldap_object.result(msg_id). This returns a tuple like (100, attribute values), which is correct (I also tried the result2, result3 and result4 methods of the LDAP object). But how can I get the response code for the LDAP search request? Also, if there are no results for the given filter criteria, I get an empty list, whereas in the case of an exception I get a proper message like this:
ldap.SERVER_DOWN: {u'info': 'Transport endpoint is not connected', 'errno': 107, 'desc': u"Can't contact LDAP server"}
Can somebody please help me: is there any attribute which gives the result code for a successful LDAP search operation?
Thanks,
Radhika
|
An LDAP server may simply not return any results, even if there was nothing wrong with the search operation sent by the client. With python-ldap you get an empty result list. Most times this is due to access control hiding directory content. In general, the LDAP server won't tell you why it did not return results.
(There are some special cases where ldap.INSUFFICIENT_ACCESS is raised, but you should expect the behaviour to differ between LDAP servers.)
In python-ldap, if the search operation did not raise an exception, the LDAP result code was success(0). So your application has to deal with an empty search result in some application-specific way, e.g. by raising a custom exception handled by upper layers.
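A minimal sketch of handling that in the application (the server and base DN are assumptions): no exception means success(0), so an empty list just means nothing matched:
import ldap

conn = ldap.initialize("ldap://ldap.example.com")
try:
    results = conn.search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE,
                            "(uid=someuser)")
    if not results:                      # result code was still success(0)
        raise LookupError("no entries matched the filter")
except ldap.LDAPError as e:
    print("server reported an error:", e)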
| 1.2 | true | 1 | 5,598 |
2018-07-06 07:29:16.617
|
How to find dot product of two very large matrices to avoid memory error?
|
I am trying to learn ML using Kaggle datasets. In one of the problems (using Logistic regression) inputs and parameters matrices are of size (1110001, 8) & (2122640, 8) respectively.
I am getting memory error while doing it in python. This would be same for any language I guess since it's too big. My question is how do they multiply matrices in real life ML implementations (since it would usually be this big)?
Things bugging me :
Some people on SO have suggested calculating the dot product in parts and then combining them. But even then the matrix would still be too big for RAM (9.42 TB? in this case).
And if I write it to a file, wouldn't it be too slow for optimization algorithms to read from the file and minimize the function?
Even if I do write it to a file, how would fmin_bfgs (or any opt. function) read from the file?
Also Kaggle notebook shows only 1GB of storage available. I don't think anyone would allow TBs of storage space.
In my input matrix many rows have similar values for some columns. Can I use this to my advantage to save space? (like a sparse matrix does for zeros)
Can anyone point me to any real life sample implementation of such cases. Thanks!
|
I have tried many things. I will mention them here in case anyone needs them in the future:
I had already cleaned up the data, e.g. removing duplicates and irrelevant records depending on the given problem.
I stored the large matrices, which hold mostly 0s, as sparse matrices.
I implemented gradient descent using the mini-batch method instead of the plain old batch method (theta.T dot X).
Now everything is working fine.
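A minimal sketch combining both ideas for logistic regression; the shapes, density and hyper-parameters are placeholder assumptions:

import numpy as np
from scipy import sparse

X = sparse.random(1110001, 8, density=0.1, format='csr')  # mostly-zero data stays sparse
y = np.random.randint(0, 2, X.shape[0])
theta = np.zeros(X.shape[1])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

batch_size, lr = 1024, 0.1
for start in range(0, X.shape[0], batch_size):
    Xb = X[start:start + batch_size]  # only one small batch is materialised at a time
    yb = y[start:start + batch_size]
    grad = Xb.T.dot(sigmoid(Xb.dot(theta)) - yb) / Xb.shape[0]
    theta -= lr * grad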
| 1.2 | true | 1 | 5,599 |
2018-07-06 17:58:05.770
|
Python Unit test debugging in VS code
|
I use VS code for my Python projects and we have unit tests written using Python's unittest module. I am facing a weird issue with debugging unit tests.
VSCode Version: May 2018 (1.24)
OS Version: Windows 10
Let's say I have 20 unit tests in a particular project.
I run the tests by right clicking on a unit test file and click 'Run all unit tests'
After the run is complete, the results bar displays how many tests are passed and how many are failed. (e.g. 15 passed, 5 failed).
And I can run/debug individual test because there is a small link on every unit test function for that.
If I re-run the tests from the same file, then the results bar displays twice the number of tests. (e.g. 30 passed, 10 failed)
Also the links against individual test functions disappear. So I cannot run individual tests.
The only way to be able to run/debug individual tests after this is by re-launching the VS code.
Any suggestions on how to fix this?
|
This was a bug in Python extension for VS code and it is fixed now.
| 1.2 | true | 1 | 5,600 |
2018-07-08 23:33:21.993
|
Wondering how I can delete all of my python related files on Mac
|
So I was trying to install kivy, which lead me to install pip, and I went down a rabbit hole of altering directories. I am using PyCharm for the record.
I would like to remove everything python related (including all libraries like pip) from my computer, and start fresh with empty directories, so when I download pycharm again, there will be no issues.
I am using a Mac, so if any of you could let me know how to do that on a Mac, it would be greatly appreciated.
Could I just open finder, search python, and delete all of the files (there are tons) or would that be too destructive?
I hope I am making my situation clear enough, please comment any questions to clarify things.
Thanks!
|
If you are familiar with the Terminal app, you can use command lines to uninstall Python from your Mac. For this, follow these steps:
Move Python to Trash.
Open the Terminal app and type the following command in the window: sudo rm -rf /Applications/Python\ 3.6/
It will require you to enter your administrator password to confirm the deletion.
And for the PyCharm:
Just remove the ~/Library/Caches/PyCharm20 and
~/Library/Preferences/PyCharm20 directories.
Or if that won't be enough:
Go to Applications > right click PyCharm > move to trash
open a terminal and run the following: find ~/Library/ -iname "*pycharm*"
verify that all of the results are in fact related to PyCharm and not something else important you need to keep. Then, remove them all
using the command: find ~/Library -iname "*pycharm*" -exec rm -r "{}" \;
| 0.386912 | false | 1 | 5,601 |
2018-07-10 09:58:49.683
|
Lost artwork while converting .m4a to .mp3 (Python)
|
I'm trying to convert an m4a audio file with artwork (cover) to mp3. I'm using ffmpeg to convert the audio.
Once converted, the artwork is lost. I'm not quite sure how to retain the cover. I found some references to the mutagen library, but again I'm not sure how to use it to copy the artwork.
Any help would be great.
ffmpeg -i source/file -acodec libmp3lame -ab 128k destination.mp3
Update:
I'm reading the artwork from the m4a to be able to attach it back.
I can get the artwork by using
artwork = audio.tags['covr']
Now my problem is how do I save the artwork as image in a new file?
I tried the Following:
with open(path/to/write, 'wb') as img:
img.write(artwork)
This gives me the error:
'list' does not support the buffer interface
Any suggestion on how I can save the extracted covr data as an image?
|
If anyone is having the same issue:
I ended up reading the artwork from the original file and attaching it back to the MP3.
audio = MP4(m4a_path)  # open the source .m4a with mutagen
if 'covr' in audio.tags:  # check whether the file actually has cover art
    cover = audio.tags['covr'][0]  # 'covr' holds a list of images; take the first
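A fuller sketch of the round trip, assuming mutagen is installed; the file paths are placeholders. The covr entry is raw image bytes, so it can be written to disk or attached to the MP3 as an ID3 APIC frame:

from mutagen.mp4 import MP4, MP4Cover
from mutagen.id3 import ID3, APIC

audio = MP4('source.m4a')
cover = audio.tags['covr'][0]  # an MP4Cover (a bytes subclass), not a list
mime = ('image/png' if cover.imageformat == MP4Cover.FORMAT_PNG
        else 'image/jpeg')

with open('cover.jpg', 'wb') as img:  # write the cover itself, not the list
    img.write(bytes(cover))

tags = ID3('destination.mp3')  # assumes the MP3 already carries an ID3 header
tags.add(APIC(encoding=3, mime=mime, type=3, desc='Cover', data=bytes(cover)))
tags.save()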
| 0 | false | 1 | 5,602 |
2018-07-10 15:26:20.883
|
Use proxy sentences from cleaned data
|
Gensim's Word2Vec model takes as an input a list of lists with the inner list containing individual tokens/words of a sentence. As I understand Word2Vec is used to "quantify" the context of words within a text using vectors.
I am currently dealing with a corpus of text that has already been split into individual tokens and no longer contains an obvious sentence format (punctuation has been removed). I was wondering how should I input this into the Word2Vec model?
Say if I simply split the corpus into "sentences" of uniform length (10 tokens per sentence for example), would this be a good way of inputting the data into the model?
Essentially, I am wondering how the format of the input sentences (list of lists) affects the output of Word2Vec?
|
That sounds like a reasonable solution. If you have access to data that is similar to your cleaned data you could get average sentence length from that data set. Otherwise, you could find other data in the language you are working with (from wikipedia or another source) and get average sentence length from there.
Of course your output vectors will not be as reliable as if you had the correct sentence boundaries, but it sounds like word order was preserved so there shouldn't be too much noise from incorrect sentence boundaries.
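A minimal sketch of the chunking approach; the corpus path and the chunk length of 10 are assumptions to tune (the parameter is vector_size in gensim 4.x, size in older releases):

from gensim.models import Word2Vec

tokens = open('corpus.txt').read().split()  # assumed pre-tokenized token stream
chunk_len = 10
sentences = [tokens[i:i + chunk_len] for i in range(0, len(tokens), chunk_len)]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)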
| 0.201295 | false | 1 | 5,603 |
2018-07-10 19:19:58.840
|
Python: ContextualVersionConflict: pandas 0.22.0; Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'})
|
I have this issue:
ContextualVersionConflict: (pandas 0.22.0 (...),
Requirement.parse('pandas<0.22,>=0.19'), {'scikit-survival'})
I have even tried to uninstall pandas and install scikit-survival + dependencies via anaconda. But it still does not work....
Anyone with a suggestion on how to fix?
Thanks!
|
Restarting the Jupyter notebook fixed it. I am unsure why, but most likely the old pandas (and pkg_resources' cached view of the installed distributions) was still loaded in the running kernel, so the version change only became visible after a restart.
| 0.999909 | false | 1 | 5,604 |
2018-07-11 15:01:09.260
|
How do I calculate the percentage of difference between two images using Python and OpenCV?
|
I am trying to write a program in Python (with OpenCV) that compares 2 images, shows the difference between them, and then informs the user of the percentage of difference between the images. I have already made it so it generates a .jpg showing the difference, but I can't figure out how to make it calculate a percentage. Does anyone know how to do this?
Thanks in advance.
|
You will need to calculate this on your own. You need the count of differing pixels and the size of your original image, then it is simple math: (differentPixelsCount / (mainImage.width * mainImage.height)) * 100
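A minimal sketch of that formula with OpenCV and NumPy, assuming both images have the same dimensions; the file names are placeholders:

import cv2
import numpy as np

img1 = cv2.imread('a.jpg')
img2 = cv2.imread('b.jpg')

diff = cv2.absdiff(img1, img2)            # per-pixel absolute difference
mask = np.any(diff != 0, axis=2)          # True wherever any channel differs
percentage = 100.0 * np.count_nonzero(mask) / mask.size
print('%.2f%% of pixels differ' % percentage)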
| 0 | false | 1 | 5,605 |
2018-07-11 21:22:40.900
|
How to import 'cluster' and 'pylab' into Pycharm
|
I would like to use PyCharm to write some data science code; currently I am using Visual Studio Code and running scripts from the terminal. But I would like to know if I could do it in PyCharm. I could not find some modules, such as cluster and pylab, in PyCharm. Does anyone know how I could import these modules into PyCharm?
|
Go to the Preferences Tab -> Project Interpreter, there's a + symbol that allows you to view and download packages. From there you should be able to find cluster and pylab and install them to PyCharm's interpreter. After that you can import them and run them in your scripts.
Alternatively, you may switch the project's interpreter to an interpreter that has the packages installed already. This can be done from that same menu.
| 0.135221 | false | 1 | 5,606 |
2018-07-14 17:06:41.383
|
Multiple Inputs for CNN: images and parameters, how to merge
|
I use Keras for a CNN and have two types of Inputs: Images of objects, and one or two more parameters describing the object (e.g. weight). How can I train my network with both data sources? Concatenation doesn't seem to work because the inputs have different dimensions. My idea was to concatenate the output of the image analysis and the parameters somehow, before sending it into the dense layers, but I'm not sure how. Or is it possible to merge two classifications in Keras, i.e. classifying the image and the parameter and then merging the classification somehow?
|
You can use Keras's Concatenate layer to merge the two inputs. First flatten the convolutional output so that both branches are plain vectors (the vectors may have different lengths; Concatenate only needs matching ranks). Adding an extra Dense layer to the smaller input can also bring the two branches to comparable sizes before concatenation.
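A sketch with the Keras functional API, assuming 64x64 RGB images plus two scalar parameters (e.g. weight); the layer sizes are illustrative:

from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Concatenate
from tensorflow.keras.models import Model

img_in = Input(shape=(64, 64, 3))
x = Conv2D(32, (3, 3), activation='relu')(img_in)
x = MaxPooling2D()(x)
x = Flatten()(x)                            # image branch ends as a flat vector

param_in = Input(shape=(2,))
p = Dense(16, activation='relu')(param_in)  # lift the parameters to a comparable size

merged = Concatenate()([x, p])              # both branches are now rank-2 tensors
h = Dense(64, activation='relu')(merged)
out = Dense(1, activation='sigmoid')(h)

model = Model(inputs=[img_in, param_in], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')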
| 1.2 | true | 1 | 5,607 |
2018-07-14 20:27:44.470
|
How to analyse the integrity of clustering with no ground truth labels?
|
I'm clustering data (trying out multiple algorithms) and trying to evaluate the coherence/integrity of the resulting clusters from each algorithm. I do not have any ground truth labels, which rules out quite a few metrics for analysing the performance.
So far, I've been using Silhouette score as well as calinski harabaz score (from sklearn). With these scores, however, I can only compare the integrity of the clustering if my labels produced from an algorithm propose there to be at minimum, 2 clusters - but some of my algorithms propose that one cluster is the most reliable.
Thus, if you don't have any ground truth labels, how do you assess whether the proposed clustering by an algorithm is better than if all of the data was assigned in just one cluster?
|
Don't just rely on some heuristic that someone proposed for a very different problem.
Key to clustering is to carefully consider the problem you are working on. What is the proper way of preparing the data? How to scale (or not scale)? How to measure the similarity of two records so that it quantifies something meaningful for your domain?
It is not about choosing the right algorithm; your task is to do the math that relates your domain problem to what the algorithm does. Don't treat it as a black box. Choosing the approach based on the evaluation step does not work: it is already too late; you probably did some bad decisions already in the preprocessing, used the wrong distance, scaling, and other parameters.
| 0 | false | 1 | 5,608 |
2018-07-15 06:08:43.183
|
how to run python code in atom in a terminal?
|
I'm a beginner in programming and Atom, so when I try to run my Python code written in Atom from a terminal I don't know how. I tried installing packages like run-in-terminal and platformio-ide-terminal, but I don't know how to use them.
|
I would not try to do it using extensions. I would use platformio-ide-terminal and just do it from the command line.
Just type python script_name.py and it should run fine. Be sure you are in the same directory as your Python script.
| 0.135221 | false | 3 | 5,609 |
2018-07-15 06:08:43.183
|
how to run python code in atom in a terminal?
|
I'm a beginner in programming and Atom, so when I try to run my Python code written in Atom from a terminal I don't know how. I tried installing packages like run-in-terminal and platformio-ide-terminal, but I don't know how to use them.
|
Save your Script as a .py file in a directory.
Open the terminal and navigate to the directory containing your script using cd command.
Run python <filename>.py if you are using python2
Run python3 <filename.py> if you are using python3
| 0.135221 | false | 3 | 5,609 |
2018-07-15 06:08:43.183
|
how to run python code in atom in a terminal?
|
I'm a beginner in programming and Atom, so when I try to run my Python code written in Atom from a terminal I don't know how. I tried installing packages like run-in-terminal and platformio-ide-terminal, but I don't know how to use them.
|
"python filename.py" should run your python code. If you wish to specifically run the program using python 3.6 then it would be "python3.6 filename.py".
| 0 | false | 3 | 5,609 |
2018-07-16 08:18:12.017
|
How to measure latency in paho-mqtt network
|
I'm trying to measure the latency from my publisher to my subscriber in an MQTT network. I was hoping to use the on_message() callback to measure how long this trip takes, but it's not clear to me whether this callback fires after the broker receives the message or after the subscriber receives it.
Also, does anyone have any other suggestion on how to measure latency across the network?
|
I was involved in similar work, measuring latency in wireless sensor networks. There are different ways to measure latencies.
If the subscriber and client are synchronized:
Fill the payload with a timestamp at the client and transmit the packet to the subscriber. At the subscriber, take the timestamp again and compute the difference between it and the timestamp carried in the packet. This gives the one-way time taken for the packet to travel from client to subscriber.
If the subscriber and client are not synchronized:
In this case measuring latency is a little tricky. Assuming the network is symmetrical: start a timer at the client before sending the packet to the subscriber, configure the subscriber to echo the message back, stop the timer at the client when the echo arrives, and take the difference in clock ticks. This is the round-trip time; divide it by two to get the one-way latency.
| 0.545705 | false | 2 | 5,610 |
2018-07-16 08:18:12.017
|
How to measure latency in paho-mqtt network
|
I'm trying to measure the latency from my publisher to my subscriber in an MQTT network. I was hoping to use the on_message() callback to measure how long this trip takes, but it's not clear to me whether this callback fires after the broker receives the message or after the subscriber receives it.
Also, does anyone have any other suggestion on how to measure latency across the network?
|
on_message() is called on the subscriber when the message reaches the subscriber.
One way to measure latency is to do a loop-back publish in the same client (a sketch follows below), e.g.:
Setup a client
Subscribe to a given topic
Publish a message to the topic and record the current (high resolution) timestamp.
When on_message() is called record the time again
It is worth pointing out that this sort of test assumes that both publisher/subscriber will be on similar networks (e.g. not cellular vs gigabit fibre).
Also latency will be influenced by the load on the broker and the number of subscribers to a given topic.
The other option is to measure latency passively by monitoring the network assuming you can see all the traffic from one location as synchronising clocks across monitoring point is very difficult.
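A minimal loop-back sketch of that idea with paho-mqtt (1.x API); the broker host and topic are placeholder assumptions, and the send time travels inside the payload:

import time
import paho.mqtt.client as mqtt

TOPIC = 'latency/test'

def on_message(client, userdata, msg):
    sent = float(msg.payload)                    # timestamp placed in the payload
    rtt = time.perf_counter() - sent
    print('round trip: %.2f ms (one way ~ %.2f ms)' % (rtt * 1000, rtt * 500))

client = mqtt.Client()
client.on_message = on_message
client.connect('broker.example.com')
client.subscribe(TOPIC)
client.loop_start()

client.publish(TOPIC, str(time.perf_counter()))  # record the send time in the message
time.sleep(2)                                    # wait for the echo
client.loop_stop()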
| 0.386912 | false | 2 | 5,610 |
2018-07-16 13:07:02.643
|
Brief explanation on tensorflow object detection working mechanism
|
I've searched Google for the working mechanism of TensorFlow object detection, and for how TensorFlow trains models with a dataset. The results give me suggestions about how to implement it rather than how it works.
Can anyone explain how a dataset is used to train/fit a model?
|
You can't "simply" understand how Tensorflow works without a good background on Artificial Intelligence and Machine Learning.
I suggest you start working on those topics. Tensorflow will get much easier to understand and to handle after that.
| 0 | false | 1 | 5,611 |
2018-07-16 16:38:23.357
|
fetch data from 3rd party API - Single Responsibility Principle in Django
|
What's the most elegant way to fetch data from an external API if I want to be faithful to the Single Responsibility Principle? Where/when exactly should it be made?
Assuming I've got a POST /foo endpoint which after being called should somehow trigger a call to the external API and fetch/save some data from it in my local DB.
Should I add the call in the view? Or the Model?
|
I usually put any external API calls into a dedicated services.py module (at the same level as the models.py you're planning to save results into, or in a common app if none of the existing ones is logically related).
Inside that module you can define a class called something like MyExternalService and add all the methods needed for fetching, posting, removing etc., just as you would with a DRF API view.
Also remember to handle exceptions properly (timeouts, connection errors, error response codes) by defining custom error exception classes.
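A sketch of such a services.py; the endpoint URL, class name and exception name are illustrative assumptions. The view behind POST /foo then just calls FooService.fetch_foo(...) and hands the result to the model layer, so each piece keeps a single responsibility.

import requests

class ExternalServiceError(Exception):
    """Raised when the third-party API misbehaves (timeout, bad status, ...)."""

class FooService:
    BASE_URL = 'https://api.example.com'

    @classmethod
    def fetch_foo(cls, foo_id):
        try:
            resp = requests.get('%s/foo/%s' % (cls.BASE_URL, foo_id), timeout=5)
            resp.raise_for_status()
        except requests.RequestException as exc:
            raise ExternalServiceError(str(exc)) from exc
        return resp.json()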
| 0 | false | 1 | 5,612 |
2018-07-16 18:35:21.250
|
What is the window length of moving average trend in seasonal.seasonal_decompose package?
|
I am using seasonal.seasonal_decompose in python.
What is the window length of moving average trend in seasonal.seasonal_decompose package?
Based on my results, I think it is 25. But how can I be sure? And how can I change this window length?
|
I found the answer: the freq argument defines the window of the moving average. When it is not declared, statsmodels infers it from the frequency of the series' index (and raises an error if it cannot).
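A minimal sketch with dummy data (the argument is freq= in older statsmodels releases and period= in newer ones):

import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

series = pd.Series(range(100), index=pd.date_range('2018-01-01', periods=100))
result = seasonal_decompose(series, model='additive', period=25)  # 25-point centered MA
print(result.trend.head(15))  # the first period // 2 values are NaN from the centered window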
| 0 | false | 1 | 5,613 |
2018-07-17 10:48:39.477
|
How to retrain model in graph (.pb)?
|
I have a model saved as a graph (.pb file). The model is now inaccurate and I would like to improve it. I have pictures as additional training data, but I don't know whether this is possible or how to do it. The result should be a modified .pb graph trained on the new data.
|
It's a good question, and it would be nice if someone could explain how to do this. What I can add is that naively continuing training on only the new data leads to "catastrophic forgetting", so it wouldn't work out; you would have to train on all your data again.
But anyway, I would also like to know this, especially for SSD, just for testing purposes.
| 0.545705 | false | 1 | 5,614 |
2018-07-17 10:52:00.203
|
Django - how to send mail 5 days before event?
|
I'm a junior Django dev on my first project. It's going quite well, but the senior dev who teaches me went on vacation...
I have a task at my company to create a function that reminds all people in a specific group, 5 days before an event, by sending mail.
There is a TournamentModel that contains a tournament_start_date, for instance '10.08.2018'.
A player can join a tournament; when he does, he joins the Django group "Registered".
I have to create a function (job?) that checks tournament_start_date and, if the tournament begins in 5 days, sends emails to all people in the "Registered" group... automatically.
How can I do this? What should I use? How do I run it so it checks automatically? I've been learning Python/Django for a few months... but I've met jobs for the first time ;/
I will appreciate any help.
|
You can set this mail-sending function up as a cron job, scheduled with crontab, or with Celery (beat) if your team already uses it.
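A sketch of the job as a Django management command; the Tournament model, field names and app path follow the question and are assumptions to adapt. Run it daily with cron (e.g. 0 8 * * * python manage.py send_tournament_reminders) or wrap the same logic in a Celery beat task.

from datetime import timedelta

from django.contrib.auth.models import Group
from django.core.mail import send_mail
from django.core.management.base import BaseCommand
from django.utils import timezone

from tournaments.models import Tournament  # assumed app/model names

class Command(BaseCommand):
    help = 'Mail the Registered group 5 days before a tournament starts'

    def handle(self, *args, **options):
        target = timezone.now().date() + timedelta(days=5)
        recipients = [u.email for u in
                      Group.objects.get(name='Registered').user_set.all() if u.email]
        for t in Tournament.objects.filter(tournament_start_date=target):
            send_mail('Tournament reminder',
                      'The tournament starts on %s.' % t.tournament_start_date,
                      '[email protected]', recipients)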
| 0.201295 | false | 1 | 5,615 |
2018-07-19 12:11:04.380
|
how to change vs code python extension's language?
|
My computer's system language is zh_CN, so the VS Code Python extension set its default language to Chinese, but I want to change the language to English.
I can't find a reference in the docs or on the internet. Does anyone know how to do it? Thanks for the help.
PS: VS Code's locale is already set to English.
|
You probably installed other Python extensions for VS Code. The official Microsoft Python extension follows the locale setting in the user/workspace settings.
Try uninstalling the other Python extensions; you may see it change to English.
| 0 | false | 2 | 5,616 |
2018-07-19 12:11:04.380
|
how to change vs code python extension's language?
|
My computer's system language is zh_cn, so the vs code python extension set the default language to chinese. But i want to change the language to english.
I can't find the reference in the doc or on the internet. Anyone konws how to do it? Thank's for help
PS: vs code's locale is alreay set to english.
|
When VS Code is open, go to the View menu and select Command Palette. Once the command palette is open, type "display" in the box; this should show "Configure Display Language". Open that and you will be in a locale.json file. The "locale" variable should be set to "en" for English.
| 0 | false | 2 | 5,616 |
2018-07-19 19:12:26.090
|
Python3 remove multiple hyphenations from a german string
|
I'm currently working on a neural network that evaluates students' answers to exam questions. Therefore, preprocessing the corpora for a Word2Vec network is needed. Hyphenation in german texts is quite common. There are mainly two different types of hyphenation:
1) End of line:
The text reaches the end of the line so the last word is sepa-
rated.
2) Short form of enumeration:
in case of two "elements":
Geistes- und Sozialwissenschaften
more "elements":
Wirtschafts-, Geistes- und Sozialwissenschaften
The de-hyphenated form of these enumerations should be:
Geisteswissenschaften und Sozialwissenschaften
Wirtschaftswissenschaften, Geisteswissenschaften und Sozialwissenschaften
I need to remove all hyphenations and put the words back together. I already found several solutions for the first problem.
But I have absolutely no clue how to get the second part (in the example above, "wissenschaften") of the words in the enumeration problem. I don't even know if it is possible at all.
I hope that I have pointed out my problem properly.
So has anyone an idea how to solve this problem?
Thank you very much in advance!
|
It's surely possible, as the pattern seems fairly regular. (Something vaguely analogous is sometimes seen in English. For example: The new requirements applied to under-, over-, and average-performing employees.)
The rule seems to be roughly, "when you see word-fragments with a trailing hyphen, and then an und, look for known words that begin with the word-fragments, and end the same as the terminal-word-after-und – and replace the word-fragments with the longer words".
Not being a German speaker and without language-specific knowledge, it wouldn't be possible to know exactly where breaks are appropriate. That is, in your Geistes- und Sozialwissenschaften example, without language-specific knowledge, it's unclear whether the first fragment should become Geisteszialwissenschaften or Geisteswissenschaften or Geistesenschaften or Geiestesaften or any other shared-suffix with Sozialwissenschaften. But if you've got a dictionary of word-fragments, or word-frequency info from other text that uses the same full-length word(s) without this particular enumeration-hyphenation, that could help choose.
(If there's more than one plausible suffix based on known words, this might even be a possible application of word2vec: the best suffix to choose might well be the one that creates a known-word that is closest to the terminal-word in word-vector-space.)
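For illustration, a heuristic sketch of that rule: complete each trailing-hyphen fragment with the longest suffix of the terminal word that yields a word in an assumed vocabulary (a wordlist, or full words seen elsewhere in the corpus):

def expand_enumeration(tokens, vocabulary):
    out = []
    for i, tok in enumerate(tokens):
        if tok.endswith('-'):
            # terminal word: next token that is neither a fragment nor a connector
            terminal = next((t for t in tokens[i + 1:]
                             if not t.endswith('-') and t not in ('und', 'oder')), None)
            stem = tok.rstrip('-')
            if terminal:
                for cut in range(1, len(terminal)):  # longest shared suffix first
                    candidate = stem + terminal[cut:]
                    if candidate in vocabulary:
                        tok = candidate
                        break
        out.append(tok)
    return out

known = {'Geisteswissenschaften', 'Sozialwissenschaften', 'Wirtschaftswissenschaften'}
print(expand_enumeration(['Geistes-', 'und', 'Sozialwissenschaften'], known))
# -> ['Geisteswissenschaften', 'und', 'Sozialwissenschaften']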
Since this seems a very German-specific issue, I'd try asking in forums specific to German natural-language-processing, or to libraries with specific German support. (Maybe, NLTK or Spacy?)
But also, knowing word2vec, this sort of patch-up may not actually be that important to your end-goals. Training without this logical-reassembly of the intended full words may still let the fragments achieve useful vectors, and the corresponding full words may achieve useful vectors from other usages. The fragments may wind up close enough to the full compound words that they're "good enough" for whatever your next regression/classifier step does. So if this seems a blocker, don't be afraid to just try ignoring it as a non-problem. (Then if you later find an adequate de-hyphenation approach, you can test whether it really helped or not.)
| 0.386912 | false | 1 | 5,617 |
2018-07-20 10:26:17.870
|
Can't install tensorflow with pip or anaconda
|
Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.
|
Tensorflow or Tensorflow-gpu is supported only for 3.5.X versions of Python. Try installing with any Python 3.5.X version. This should fix your problem.
| 1.2 | true | 5 | 5,618 |
2018-07-20 10:26:17.870
|
Can't install tensorflow with pip or anaconda
|
Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.
|
You mentioned Anaconda. Do you run your python through there?
If so check in Anaconda Navigator --> Environments, if your current environment have got tensorflow installed.
If not, install tensorflow and run from that environment.
Should work.
| 0 | false | 5 | 5,618 |
2018-07-20 10:26:17.870
|
Can't install tensorflow with pip or anaconda
|
Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.
|
As of July 2019, I have installed it on python 3.7.3 using py -3 -m pip install tensorflow-gpu
py -3 in my installation selects the version 3.7.3.
The installation can also fail if the python installation is not 64 bit. Install a 64 bit version first.
| 0 | false | 5 | 5,618 |
2018-07-20 10:26:17.870
|
Can't install tensorflow with pip or anaconda
|
Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.
|
Actually the easiest way to install TensorFlow is:
install Python 3.5 (not 3.6 or 3.7); you can check which version you have by typing "python" in the cmd.
When you install it, check in the options that pip is installed with it and that Python is added to the environment variables.
When it's done, just go into the cmd and type "pip install tensorflow";
it will download TensorFlow automatically.
If you want to check that it's been installed, type "python" in the cmd; a ">>>" prompt will appear; then write "import tensorflow", and if there's no error, you've done it!
| 0 | false | 5 | 5,618 |
2018-07-20 10:26:17.870
|
Can't install tensorflow with pip or anaconda
|
Does anyone know how to properly install tensorflow on Windows?
I'm currently using Python 3.7 (also tried with 3.6) and every time I get the same "Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
No matching distribution found for tensorflow-gpu" error
I tried installing using pip and anaconda, both don't work for me.
Found a solution, seems like Tensorflow doesn't support versions of python after 3.6.4. This is the version I'm currently using and it works.
|
Not enabling long paths can be the problem. To solve that, the steps are:
Open the Registry Editor on the Windows machine.
Find the key "HKEY_LOCAL_MACHINE" -> "SYSTEM" -> "CurrentControlSet" ->
"Control" -> "FileSystem" -> "LongPathsEnabled", then double-click that value and change it from 0 to 1.
Now try to install TensorFlow; it should work.
| 0 | false | 5 | 5,618 |
2018-07-21 10:12:24.710
|
Chatterbot dynamic training
|
I'm using ChatterBot to implement a chat bot. I want ChatterBot to train on the data set dynamically.
Whenever I run my code it should train itself from the beginning, because I require new data for every person who chats with my bot.
So how can I achieve this in Python 3 on the Windows platform?
What I want to achieve, and the problem I'm facing:
I have a Python program which creates a text file, student_record.txt, generated from a database and almost always new when a different student signs up or logs in. I trained the bot by giving it this file name, but it still replies from the previously trained data.
|
I found the solution: I delete the database at the beginning of the program, so a new database is created during the execution of the program.
I used the following command to delete the database:
import os
os.remove("database_name")
in my case
import os
os.remove("db.sqlite3")
thank you
| 0 | false | 1 | 5,619 |
2018-07-21 11:51:55.627
|
How do I use Google Cloud API's via Anaconda Spyder?
|
I am pretty new to Python in general and recently started messing with the Google Cloud environment, specifically with the Natural Language API.
One thing that I just can't grasp is how to make use of this environment and run scripts that use this API (or any API) from my local PC, in this case from my Anaconda Spyder environment.
I have my project set up, but from there I am not exactly sure which steps are necessary. Do I have to include the authentication somehow in the script inside Spyder?
Some insights would be really helpful.
|
First install the client library with pip install or conda install (run from the Scripts directory of Anaconda if pip isn't on your PATH), then simply import it in your code and start coding.
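For the authentication question specifically, the usual pattern with Google Cloud client libraries is to point the GOOGLE_APPLICATION_CREDENTIALS environment variable at a service-account key file before creating a client. A hedged sketch for the Natural Language API (pip install google-cloud-language; the key path is a placeholder):

import os
from google.cloud import language_v1

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/key.json'  # assumed key file

client = language_v1.LanguageServiceClient()
doc = language_v1.Document(content='I love this library!',
                           type_=language_v1.Document.Type.PLAIN_TEXT)
sentiment = client.analyze_sentiment(request={'document': doc}).document_sentiment
print(sentiment.score, sentiment.magnitude)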
| -0.201295 | false | 1 | 5,620 |
2018-07-21 16:20:50.893
|
How to open/create images in Python without using external modules
|
I have a python script which opens an image file (.png or .ppm) using OpenCV, then loads all the RGB values into a multidimensional Python array (or list), performs some pixel by pixel calculations solely on the Python array (OpenCV is not used at all for this stage), then uses the newly created array (containing new RGB values) to write a new image file (.png here) using OpenCV again. Numpy is not used at all in this script. The program works fine.
The question is how to do this without using any external libraries, regardless whether they are for image processing or not (e.g. OpenCV, Numpy, Scipy, Pillow etc.). To summarize, I need to use bare bones Python's internal modules to: 1. open image and read the RGB values and 2. write a new image from pre-calculated RGB values. I will use Pypy instead of CPython for this purpose, to speed things up.
Note: I use Windows 10, if that matters.
|
Working with bare-bones .ppm files is trivial: you have three lines of text (P6, "width height", 255), and then you have the 3*width*height bytes of RGB. As long as you don't need more complicated variants of the .ppm format, you can write a loader and a saver in 5 lines of code each.
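A stdlib-only sketch for that simplest P6 variant (no comment lines, maxval fixed at 255); pixels[y][x] is an [r, g, b] list:

def read_ppm(path):
    with open(path, 'rb') as f:
        assert f.readline().strip() == b'P6'
        width, height = map(int, f.readline().split())
        assert f.readline().strip() == b'255'
        raw = f.read(3 * width * height)
    pixels = [[list(raw[3 * (y * width + x): 3 * (y * width + x) + 3])
               for x in range(width)] for y in range(height)]
    return pixels, width, height

def write_ppm(path, pixels, width, height):
    with open(path, 'wb') as f:
        f.write(b'P6\n%d %d\n255\n' % (width, height))
        f.write(bytes(c for row in pixels for px in row for c in px))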
| 0.101688 | false | 1 | 5,621 |
2018-07-22 01:51:12.200
|
How run my code in spyder as i used to run it in linux terminal
|
Apologies if my question is stupid.
I am a newbie is all aspects.
I used to run my Python code straight from the terminal in Linux Ubuntu;
e.g. I just open the terminal, go to my folder, and run my command in the Linux terminal:
CUDA_VISIBLE_DEVICES=0 python trainval_net.py --dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda
now im trying to use Spyder.
So for the same project i have a folder with bunch of functions/folders/stuff inside it.
So i just open that main folder as a new project, then i have noo idea how i can run my code...
There is a console in the right side of spyder which looks like Ipython and i can do stuff in there, but i cannot run the code that i run in terminal there.
In iphython or jupyther i used to usee ! at the begining of the command but here when i do it (e.g. !CUDA_VISIBLE_DEVICES=0 python trainval_net.py --dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda) it does not even know the modules and throw errors (e.g. ImportError: No module named numpy`)
Can anyone tell me how should i run my code here in Spyder
Thank you in advance! :)
|
Okay, I figured it out.
I need to go to Run -> Configuration per file and put the arguments (--dataset pascal_voc --net resnet101 --epochs 7 --bs 1 --nw 4 --lr 1e-3 --lr_decay_step 5 --cuda) in the command line options field.
| 0 | false | 1 | 5,622 |
2018-07-22 04:44:09.413
|
How to use Midiutil to add multiple notes in one timespot (or how to add chords)
|
I am using MIDIUtil to recreate a modified Bach contrapuntal melody and I am having difficulty finding a method for creating chords with MIDIUtil in Python. Does anyone know a way to create chords using MIDIUtil?
|
A chord consists of multiple notes.
Just add multiple notes with the same timestamp.
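A minimal MIDIUtil sketch: a C-major chord is just three addNote() calls that share the same start time and duration (pitches and tempo are illustrative):

from midiutil import MIDIFile

mf = MIDIFile(1)                 # one track
track, channel, volume = 0, 0, 100
mf.addTempo(track, 0, 120)

for pitch in (60, 64, 67):       # C4, E4, G4 at the same time -> a chord
    mf.addNote(track, channel, pitch, time=0, duration=2, volume=volume)

with open('chord.mid', 'wb') as f:
    mf.writeFile(f)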
| 1.2 | true | 1 | 5,623 |
2018-07-22 16:11:22.640
|
PyCharm, stop the console from clearing every time you run the program
|
So I have just switched over from Spyder to PyCharm. In Spyder, each time you run the program, the console just gets added to, not cleared. This was very useful because I could look through the console to see how my changes to the code were changing the outputs of the program (obviously the console had a maximum length so stuff would get cleared eventually)
However in PyCharm each time I run the program the console is cleared. Surely there must be a way to change this, but I can't find the setting. Thanks.
|
In Spyder the output accumulates because you are running IPython.
In PyCharm you can get the same by going to View -> Scientific Mode.
Then every time you run, you see the new output together with the history.
| 0.386912 | false | 1 | 5,624 |
2018-07-23 00:44:09.343
|
dateutil 2.5.0 is the minimum required version
|
I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions:
Canopy version 2.1.3.3542 (64 bit)
jupyter version 1.0.0-25
pandas version 0.23.1-1
python_dateutil version 2.6.0-1
I'm not getting this complaint when I run with the Canopy Editor so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.
|
The issue is with the pandas lib; downgrade using the command below:
pip install pandas==0.22.0
| 0 | false | 3 | 5,625 |
2018-07-23 00:44:09.343
|
dateutil 2.5.0 is the minimum required version
|
I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions:
Canopy version 2.1.3.3542 (64 bit)
jupyter version 1.0.0-25
pandas version 0.23.1-1
python_dateutil version 2.6.0-1
I'm not getting this complaint when I run with the Canopy Editor so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.
|
I had this same issue using the newest pandas version - downgrading to pandas 0.22.0 fixes the problem.
pip install pandas==0.22.0
| 0.240117 | false | 3 | 5,625 |
2018-07-23 00:44:09.343
|
dateutil 2.5.0 is the minimum required version
|
I'm running the jupyter notebook (Enthought Canopy python distribution 2.7) on Mac OSX (v 10.13.6). When I try to import pandas (import pandas as pd), I am getting the complaint: ImportError: dateutil 2.5.0 is the minimum required version. I have these package versions:
Canopy version 2.1.3.3542 (64 bit)
jupyter version 1.0.0-25
pandas version 0.23.1-1
python_dateutil version 2.6.0-1
I'm not getting this complaint when I run with the Canopy Editor so it must be some jupyter compatibility problem. Does anyone have a solution on how to fix this? All was well a few months ago until I recently (and mindlessly) allowed an update of my packages.
|
Installed Canopy version 2.1.9. The downloaded version worked without updating any of the packages called out by the Canopy Package Manager. Updated all the packages, but then the "import pandas as pd" failed when using the jupyter notebook. Downgraded the notebook package from 4.4.1-5 to 4.4.1-4 which cascaded to 35 additional package downgrades. Retested the import of pandas and the issue seems to have disappeared.
| 0 | false | 3 | 5,625 |
2018-07-23 17:57:30.150
|
CNN image extraction to predict a continuous value
|
I have images of vehicles. I need to predict the price of a vehicle based on features extracted from its image.
What I have learnt is that I can use a CNN to extract the image features, but what I am not able to work out is how to predict the prices of the vehicles.
I know that I need to train my CNN model before it can predict the price.
I don't know how to train the model with images along with prices.
In the end, what I expect is: I will input a vehicle image and get the price of the vehicle.
Can anyone provide an approach for this?
|
I would use the CNN to predict the model of the car; then, using a list of car prices, it's easy enough to look up the price. Or, if you don't care about the car model, just use the prices directly as labels.
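If you use the prices directly as labels, "training with images along with prices" just means fitting a regression CNN: the network ends in a single linear unit and is trained on (image, price) pairs. A sketch with assumed data (images: floats of shape (n, 128, 128, 3), prices: floats of shape (n,)):

from tensorflow.keras import layers, models

# images, prices: your training arrays (assumed loaded elsewhere)
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1)                      # linear output = predicted price
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(images, prices, epochs=10, validation_split=0.2)
# later: model.predict(new_image[None]) returns the estimated price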
| 0 | false | 1 | 5,626 |
2018-07-24 11:59:30.057
|
How can I handle Pepper robot shutdown event?
|
I need to handle the event when the shutdown process is started(for example with long press the robot's chest button or when the battery is critically low). The problem is that I didn't find a way to handle the shutdown/poweroff event. Do you have any idea how this can be done in some convenient way?
|
Unfortunately this won't be possible: when you trigger a shutdown, naoqi exits as well and destroys your service.
If you were coding in C++ you could use a destructor, but there is no proper equivalent in Python...
An alternative is to execute some code whenever your script exits, whatever the reason. For this, start your script as a service and wait for "the end" using qiApplication.run(). This method simply blocks until naoqi asks your service to exit.
Note: in case of shutdown, all services are being killed, so you cannot run any command from the robot API (as they are probably not available anymore!)
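A sketch of that service pattern using the qi Python bindings plus the stdlib atexit hook for best-effort local cleanup; note that robot APIs may already be unavailable by the time it runs:

import atexit
import sys
import qi

app = qi.Application(sys.argv)
app.start()

def on_exit():
    # local cleanup only -- robot services are probably gone during a shutdown
    print('service is being stopped (possibly a shutdown)')

atexit.register(on_exit)
app.run()  # blocks until naoqi asks the service to exit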
| 1.2 | true | 1 | 5,627 |
2018-07-24 16:25:19.637
|
Python - pandas / openpyxl: Tips on Automating Reports (Moving Away from VBA).
|
I currently have macros set up to automate all my reports. However, some of my macros can take up to 5-10 minutes due to the size of my data.
I have been moving away from Excel/VBA to Python/pandas for data analysis and manipulation. I still use excel for data visualization (i.e., pivot tables).
I would like to know how other people use python to automate their reports? What do you guys do? Any tips on how I can start the process?
Majority of my macros do the following actions -
Import text file(s)
Paste the raw data into a table that's linked to pivot tables / charts.
Refresh workbook
Save as new
|
When using Python to automate reports, I fully converted the report from Excel to pandas. I use pd.read_csv or pd.read_excel to read in the data, and export the fully formatted pivot tables to Excel for viewing. Doing the "paste into a table and refresh" step is not handled well by Python in my experience and will likely still need macros to handle properly, i.e. export a CSV with the formatted data from Python, then run a short macro to copy and paste.
If you have any more specific questions please ask; I have done a decent bit of this.
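A sketch of that text-file-to-workbook flow; the paths and column names are placeholder assumptions:

import pandas as pd

raw = pd.read_csv('raw_data.txt', sep='\t')

pivot = raw.pivot_table(index='region', columns='month',
                        values='sales', aggfunc='sum')

with pd.ExcelWriter('report.xlsx') as writer:
    raw.to_excel(writer, sheet_name='raw', index=False)
    pivot.to_excel(writer, sheet_name='pivot')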
| 0 | false | 1 | 5,628 |
2018-07-24 19:41:53.300
|
How to make RNN time-forecast multiple days using Keras?
|
I am currently working on a program that would take the previous 4000 days of stock data about a particular stock and predict the next 90 days of performance.
The way I've elected to do this is with an RNN that makes use of LSTM layers to use the previous 90 days to predict the next day's performance (when training, the previous 90 days are the x-values and the next day is used as the y-value). What I would like to do however, is use the previous 90-180 days to predict all the values for the next 90 days. However, I am unsure of how to implement this in Keras as all the examples I have seen only predict the next day and then they may loop that prediction into the next day's 90 day x-values.
Is there any ways to just use the previous 180 days to predict the next 90? Or is the LSTM restricted to only predicting the next day?
|
I don't have the rep to comment, but I'll say here that I've toyed with a similar task. One could use a sliding window approach for 90 days (I used 30, since 90 is pushing LSTM limits), then predict the price appreciation for next month (so your prediction is for a single value). @Digital-Thinking is generally right though, you shouldn't expect great performance.
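One common way around the one-step limitation is a direct multi-output model: feed in the last 180 days and let a final Dense layer emit all 90 future values at once. A Keras sketch with assumed shapes (X: (samples, 180, 1) windows of history, y: (samples, 90) the next 90 days):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(64, input_shape=(180, 1)),
    layers.Dense(90)                 # one output per future day
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y, epochs=20, validation_split=0.1)  # X, y assumed prepared elsewhere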
| 0 | false | 1 | 5,629 |
2018-07-24 21:28:16.190
|
How do you setup script RELOAD/RESTART upon file changes using bash?
|
I have a Python Kafka worker run by a bash script in a Docker image inside a docker-compose setup that I need to reload and restart whenever a file in its directory changes, as I edit the code. Does anyone know how to accomplish this for a bash script?
Please don't merge this with the several answers about running a script whenever a file in a directory changes. I've seen other answers regarding this, but I can't find a way to run a script once, and then stop, reload and re-run it if any files change.
Thanks!
|
My suggestion is to let docker start a wrapper script that simply starts the real script in the background.
Then, in an infinite loop:
the wrapper waits for the appropriate change using inotifywait,
then kills/stops/reloads the child process,
and starts a new one in the background again.
| 1.2 | true | 1 | 5,630 |
2018-07-25 09:28:59.487
|
Creating an exe file for windows using mac for my Kivy app
|
I've created a kivy app that works perfectly as I desire. It's got a few files in a particular folder that it uses. For the life of me, I don't understand how to create an exe on mac. I know I can use pyinstaller but how do I create an exe from mac.
Please help!
|
This is easy with Pyinstaller. I've used it recently.
Install pyinstaller:
pip install pyinstaller
Run the following command in a terminal, where file.py is the path to your main file:
pyinstaller -w -F file.py
Your exe will be created inside a folder named dist.
NOTE: verified on Windows, not on Mac.
| -0.386912 | false | 2 | 5,631 |
2018-07-25 09:28:59.487
|
Creating an exe file for windows using mac for my Kivy app
|
I've created a kivy app that works perfectly as I desire. It's got a few files in a particular folder that it uses. For the life of me, I don't understand how to create an exe on mac. I know I can use pyinstaller but how do I create an exe from mac.
Please help!
|
For pyinstaller, they have stated that packaging Windows binaries while running under OS X is NOT supported, and recommended to use Wine for this.
Can I package Windows binaries while running under Linux?
No, this is not supported. Please use Wine for this, PyInstaller runs
fine in Wine. You may also want to have a look at this thread in the
mailinglist. In version 1.4 we had build in some support for this, but
it showed to work only half. It would require some Windows system on
another partition and would only work for pure Python programs. As
soon as you want a decent GUI (gtk, qt, wx), you would need to install
Windows libraries anyhow. So it's much easier to just use Wine.
Can I package Windows binaries while running under OS X?
No, this is not supported. Please try Wine for this.
Can I package OS X binaries while running under Linux?
This is currently not possible at all. Sorry! If you want to help out,
you are very welcome.
| 0.201295 | false | 2 | 5,631 |
2018-07-25 12:50:07.533
|
Python Redis on Heroku reached max clients
|
I am writing a server with multiple gunicorn workers and want to let them all have access to a specific variable. I'm using Redis to do this(it's in RAM, so it's fast, right?) but every GET or SET request adds another client. I'm performing maybe ~150 requests per second, so it quickly reaches the 25 connection limit that Heroku has. To access the database, I'm using db = redis.from_url(os.environ.get("REDIS_URL")) and then db.set() and db.get(). Is there a way to lower that number? For instance, by using the same connection over and over again for each worker? But how would I do that? The 3 gunicorn workers I have are performing around 50 queries each per second.
If using redis is a bad idea(which it probably is), it would be great if you could suggest alternatives, but also please include a way to fix my current problem as most of my code is based off of it and I don't have enough time to rewrite the whole thing yet.
Note: The three pieces of code are the only times redis and db are called. I didn't do any configuration or anything. Maybe that info will help.
|
Most likely, your script creates a new connection for each request,
but each worker should create one once and reuse it forever.
Which framework are you using?
It should have some documentation about how to configure Redis for your web app.
P.S. Redis is a good choice to handle that :)
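A sketch of that per-worker reuse: create the client once at module import time and import it everywhere else. redis-py keeps an internal connection pool, so the max_connections value below (an assumed cap) bounds connections per worker:

import os
import redis

# redis_client.py -- imported by the rest of the app
db = redis.from_url(os.environ.get('REDIS_URL'), max_connections=5)

# elsewhere: from redis_client import db; db.set('key', value); db.get('key')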
| 0 | false | 1 | 5,632 |
2018-07-25 18:37:23.550
|
Async HTTP server with scrapy and mongodb in python
|
I am basically trying to start an HTTP server which will respond with content from a website which I can crawl using Scrapy. In order to start crawling the website I need to login to it and to do so I need to access a DB with credentials and such. The main issue here is that I need everything to be fully asynchronous and so far I am struggling to find a combination that will make everything work properly without many sloppy implementations.
I already got Klein + Scrapy working but when I get to implementing DB accesses I get all messed up in my head. Is there any way to make PyMongo asynchronous with twisted or something (yes, I have seen TxMongo but the documentation is quite bad and I would like to avoid it. I have also found an implementation with adbapi but I would like something more similar to PyMongo).
Trying to think things through the other way around I'm sure aiohttp has many more options to implement async db accesses and stuff but then I find myself at an impasse with Scrapy integration.
I have seen things like scrapa, scrapyd and ScrapyRT but those don't really work for me. Are there any other options?
Finally, if nothing works, I'll just use aiohttp, and instead of Scrapy I'll make the requests to the website manually and use BeautifulSoup or something like that to get the info I need from the responses. Any advice on how to proceed down that road?
Thanks for your attention, I'm quite a noob in this area so I don't know if I'm making complete sense. Regardless, any help will be appreciated :)
|
Is there any way to make pymongo asynchronous with twisted
No. pymongo is designed as a synchronous library, and there is no way you can make it asynchronous without basically rewriting it (you could use threads or processes, but that is not what you asked, also you can run into issues with thread-safeness of the code).
Trying to think things through the other way around I'm sure aiohttp has many more options to implement async db accesses and stuff
It doesn't. aiohttp is a http library - it can do http asynchronously and that is all, it has nothing to help you access databases. You'd have to basically rewrite pymongo on top of it.
Finally, if nothing works, I'll just use aiohttp and instead of scrapy I'll do the requests to the websito to scrap manually and use beautifulsoup or something like that to get the info I need from the response.
That means lots of work for not using scrapy, and it won't help you with the pymongo issue - you still have to rewrite pymongo!
My suggestion is - learn txmongo! If you can't and want to rewrite it, use twisted.web to write it instead of aiohttp since then you can continue using scrapy!
| 1.2 | true | 1 | 5,633 |
2018-07-25 21:15:26.713
|
Python: How to plot an array of y values for one x value in python
|
I am trying to plot an array of temperatures for different location during one day in python and want it to be graphed in the format (time, temperature_array). I am using matplotlib and currently only know how to graph 1 y value for an x value.
The temperature code looks like this:
Temperatures = [[Temp_array0] [Temp_array1] [Temp_array2]...], where each numbered array corresponds to that time and the temperature values in the array are at different latitudes and longitudes.
|
You can simply repeat the x value that is shared by the y values.
For example, plot
[x, x, x, x] against [y1, y2, y3, y4].
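A minimal matplotlib sketch of that idea; the times and per-time temperature lists are placeholder data:

import matplotlib.pyplot as plt

times = [0, 1, 2]
temperatures = [[15.2, 16.1, 14.8], [17.0, 18.3], [16.5, 16.9, 17.2, 15.9]]

for t, temps in zip(times, temperatures):
    plt.plot([t] * len(temps), temps, 'o')  # repeat the x value once per y value

plt.xlabel('time')
plt.ylabel('temperature')
plt.show()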
| 0 | false | 1 | 5,634 |
2018-07-26 21:21:24.690
|
Triggering email out of Spotfire based on conditions
|
Does anyone have experience with triggering an email from Spotfire based on a condition? Say, a sales figure falls below a certain threshold and an email gets sent to the appropriate distribution list. I want to know how involved this would be to do. I know that it can be done using an iron python script, but I'm curious if it can be done based on conditions rather than me hitting "run"?
|
We actually have a product that does exactly this, called the Spotfire Alerting Tool. It functions off of Automation Services and allows you to configure various thresholds for any metrics in the analysis, and it can then notify users via email or even SMS.
Of course there is the possibility of coding this yourself (the tool is simply an extension developed using the Spotfire SDK), but I can't comment on how to code it.
The best way to get this tool is probably to check with your TIBCO sales rep. If you'd like, I can try to reach him on your behalf, but I'll need a bit more info from you. Please contact me at [email protected].
I hope this kind of answer is okay on SO. I don't have a way to reach you privately and this is the best answer I know how to give :)
| 0.386912 | false | 1 | 5,635 |
2018-07-27 00:49:39.630
|
Scipy interp2d function produces z = f(x,y), I would like to solve for x
|
I am using the 2d interpolation function in scipy to smooth a 2d image. As I understand it, interpolate will return z = f(x,y). What I want to do is find x given known values of y and z. I tried something like this:
f = interp2d(x,y,z)
index = (np.abs(f(:,y) - z)).argmin()
However the interp2d object does not work that way. Any ideas on how to do this?
|
I was able to figure this out. yvalue, zvalue, xmin, and xmax are known values. By creating a linspace over the possible values x can take on, a list can be created with all of the corresponding function values. Then using argmin() we can find the closest value in the list to the known z value.
from scipy.interpolate import interp2d
import numpy

f = interp2d(x, y, z)
xnew = numpy.linspace(xmin, xmax)             # candidate x values
fnew = f(xnew, yvalue)                        # z along the line y = yvalue
xindex = (numpy.abs(fnew - zvalue)).argmin()  # closest match to the known z
xvalue = xnew[xindex]                         # note: indexing with [], not calling with ()
| 0 | false | 1 | 5,636 |
2018-07-27 04:42:13.823
|
How to set an start solution in Gurobi, when only objective function is known?
|
I have a minimization problem, that is modeled to be solved in Gurobi, via python.
Besides, I can calculate a "good" initial solution for the problem separately, that can be used as an upper bound for the problem.
What I want to do is to set Gurobi use this upper bound, to enhance its efficiency. I mean, if this upper bound can help Gurobi for its search. The point is that I just have the objective value, but not a complete solution.
Can anybody help me how to set this upper bound in the Gurobi?
Thanks.
|
I think that if you can calculate a good solution, you can also know some bound for your variable even you dont have the solution exactly ?
| 0 | false | 1 | 5,637 |
2018-07-28 15:56:50.503
|
Many to many relationship SQLite (studio or sql)
|
Hello. It seems to me that I just don't understand something quite obvious about databases.
So, we have authors who write books, and the books themselves. One author can write many books, and one book can be written by many authors.
Thus, we have two tables 'Books' and 'Authors'.
In 'Authors' I have an 'ID'(Primary key) and 'Name', for example:
1 - L.Carrol
2 - D.Brown
In 'Books' - 'ID' (pr.key), 'Name' and 'Authors' (and this column is foreign key to the 'Authors' table ID)
1 - Some_name - 2 (D.Brown)
2 - Another_name - 2,1 (D.Brown, L.Carol)
And here is my stumbling block: I don't understand how to provide the possibility of choosing several values from the 'Authors' table for one column in the 'Books' table. But this must be simple, mustn't it?
I've read about many-to-many relationships and seen many examples with an extra table added to implement them, but I still don't understand how to store multiple values from one table in the other table's column. Please explain the logic: how should I do something like this? I use SQLiteStudio but plain SQL is appropriate too. Help ^(
|
You should have a third, intermediate (junction) table with the following columns:
id (primary)
author id (from Authors table)
book id (from Books table)
This way you will be able to create a record that maps one author to one book. So you can have the following records:
1 ... Author1ID ... Book1ID
2 ... Author1ID ... Book2ID
3 ... Author2ID ... Book2ID
AuthorXID and BookXID - foreign keys from corresponding tables.
So Book2 has 2 authors, Author1 has 2 books.
Also, the separate Books and Authors tables don't need to contain any info about each other.
Authors .. 1---Many .. BOOKSFORAUTHORS .. Many---1 .. Books
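A runnable sqlite3 sketch of this schema, using the names from the question:

import sqlite3

con = sqlite3.connect('library.db')
cur = con.cursor()
cur.executescript('''
CREATE TABLE IF NOT EXISTS authors (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE IF NOT EXISTS books   (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE IF NOT EXISTS books_authors (
    id        INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES authors(id),
    book_id   INTEGER REFERENCES books(id));
''')
cur.executemany('INSERT INTO authors(id, name) VALUES (?, ?)',
                [(1, 'L.Carrol'), (2, 'D.Brown')])
cur.executemany('INSERT INTO books(id, name) VALUES (?, ?)',
                [(1, 'Some_name'), (2, 'Another_name')])
cur.executemany('INSERT INTO books_authors(author_id, book_id) VALUES (?, ?)',
                [(2, 1), (1, 2), (2, 2)])   # book 2 has two authors -> two junction rows
con.commit()

cur.execute('''SELECT a.name FROM authors a
               JOIN books_authors ba ON ba.author_id = a.id
               WHERE ba.book_id = ?''', (2,))
print(cur.fetchall())   # the authors of Another_name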
| 1.2 | true | 1 | 5,638 |