Dataset schema:
Q_CreationDate: string (length 23)
Title: string (11 to 149 chars)
Question: string (25 to 6.53k chars)
Answer: string (15 to 5.1k chars)
Score: float64 (-1 to 1.2)
Is_accepted: bool (2 classes)
N_answers: int64 (1 to 17)
Q_Id: int64 (0 to 6.76k)
2018-07-28 23:43:19.713
Screen up time in desktop
I might sound like a noob asking this question, but I really want to know how I can get the time since my screen turned on. Not the system uptime, but the screen-on time. I want to use this time in a Python app, so please tell me if there is any way to get it. Thanks in advance. Edit: I want the time measured from the moment the display wakes, i.e. the display was black due to inactivity, then a mouse move or key press brings the screen back and the user is able to read and/or edit a document or play games. OS is Windows.
On macOS, ioreg might have the information you're looking for: ioreg -n IODisplayWrangler -r IODisplayWrangler -w 0 | grep IOPowerManagement
0
false
1
5,639
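A minimal sketch of the macOS route the answer suggests (note: this is for the macOS command shown, not the Windows case the asker mentions). The IOPowerManagement key names are taken from typical ioreg output and may differ between machines:

```python
import re
import subprocess

def parse_display_power_state(ioreg_output: str):
    """Extract CurrentPowerState from an IOPowerManagement line, if present."""
    match = re.search(r'"CurrentPowerState"\s*=\s*(\d+)', ioreg_output)
    return int(match.group(1)) if match else None

def display_power_state():
    """Run ioreg (macOS only) and return the display's current power state."""
    out = subprocess.run(
        ["ioreg", "-n", "IODisplayWrangler", "-r", "IODisplayWrangler", "-w", "0"],
        capture_output=True, text=True,
    ).stdout
    return parse_display_power_state(out)
```

Polling display_power_state() and recording the timestamp of the last wake transition would give the screen-on duration.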
2018-07-29 11:14:44.810
Django Queryset find data between date
I don't know what the title should be; I just got stuck and need to ask. I have a model called shift, and imagine the db_table like this:
#table shift
+------------+------------+------------+------------+-------+---------+
| start      | end        | off_start  | off_end    | time  | user_id |
+------------+------------+------------+------------+-------+---------+
| 2018-01-01 | 2018-01-05 | 2018-01-06 | 2018-01-07 | 07:00 | 1       |
| 2018-01-08 | 2018-01-14 | 2018-01-15 | Null       | 12:00 | 1       |
| 2018-01-16 | 2018-01-20 | 2018-01-21 | 2018-01-22 | 18:00 | 1       |
| 2018-01-23 | 2018-01-27 | 2018-01-28 | 2018-01-31 | 24:00 | 1       |
| ....       | ....       | ....       | ....       | ....  | ....    |
+------------+------------+------------+------------+-------+---------+
If I use a queryset with a filter like start=2018-01-01, the result will be 07:00, but how do I get the result 12:00 if I input 2018-01-10? Thank you!
The question isn't too clear, but maybe you're after something like start__lte=2018-01-10, end__gte=2018-01-10?
1.2
true
1
5,640
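The suggested lookup just means "start <= day <= end". A small plain-Python sketch of the same condition, using hypothetical stand-ins for the shift rows from the question (the real query would be Shift.objects.filter(start__lte=day, end__gte=day)):

```python
from datetime import date

# Hypothetical stand-ins for the shift table rows in the question.
shifts = [
    {"start": date(2018, 1, 1), "end": date(2018, 1, 5), "time": "07:00"},
    {"start": date(2018, 1, 8), "end": date(2018, 1, 14), "time": "12:00"},
    {"start": date(2018, 1, 16), "end": date(2018, 1, 20), "time": "18:00"},
]

def find_time(day):
    """Same condition as Shift.objects.filter(start__lte=day, end__gte=day)."""
    for row in shifts:
        if row["start"] <= day <= row["end"]:
            return row["time"]
    return None
```

find_time(date(2018, 1, 10)) falls inside the second shift and returns "12:00".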
2018-07-31 16:24:41.370
cannot run jupyter notebook from anaconda but able to run it from python
After installing Anaconda to C:\ I cannot open Jupyter Notebook, either from the Anaconda Prompt with jupyter notebook or from the Navigator. I just can't make it work: no output appears when I type jupyter notebook in the prompt, and the Navigator doesn't work either. I then reinstalled Anaconda, which didn't help. But when I installed Jupyter independently using python -m pip install jupyter and then ran python -m jupyter notebook, it worked and connected to localhost:8888. So my question is: how can I make Jupyter work from Anaconda? Also note that my Anaconda is not in the environment variables (%PATH%), and I have tried reinstalling pyzmq, which didn't solve the problem. I'm using Python 3.7 and 3.6.5 in Anaconda. Moreover, Spyder works perfectly.
You need to activate the Anaconda environment first. In a terminal: source activate environment_name (or activate environment_name on Windows?), then run jupyter notebook. If you don't know the env name, run conda env list. To restore the default Python environment: source deactivate
1.2
true
1
5,641
2018-07-31 16:30:46.247
Handling Error for Continuous Features in a Content-Based Filtering Recommender System
I've got a content-based recommender that works... fine. I was fairly certain it was the right approach to take for this problem (matching established "users" with "items" that are virtually always new, but contain known features similar to existing items). As I was researching, I found that virtually all examples of content-based filtering use articles/movies as an example and look exclusively at using encoded tf-idf features from blocks of text. That wasn't exactly what I was dealing with, but most of my features were boolean features, so making a similar vector and looking at cosine distance was not particularly difficult. I also had one continuous feature, which I scaled and included in the vector. As I said, it seemed to work, but was pretty iffy, and I think I know part of the reason why... The continuous feature that I'm using is a rating (let's call this "deliciousness"), where, in virtually all cases, a better score would indicate an item more favorable for the user. It's continuous, but it also has a clear "direction" (not sure if this is the correct terminology). Error in one direction is not the same as error in another. I have cases where some users have given high ratings to items with mediocre "deliciousness" scores, but logically they would still prefer something that was more delicious. That user's vector might have an average deliciousness of 2.3. My understanding of cosine distance is that in my model, if that user encountered two new items that were exactly the same except that one had a deliciousness of 1.0 and the other had a deliciousness of 4.5, it would actually favor the former because it's a shorter distance between vectors. How do I modify or incorporate some other kind of distance measure here that takes into account that deliciousness error/distance in one direction is not the same as error/distance in the other direction? (As a secondary question, how do I decide how to best scale this continuous feature next to my boolean features?)
There are two basic approaches to solve this:
(1) Write your own distance function. The obvious approach is to remove the deliciousness element from each vector and evaluate that difference independently; use cosine similarity on the rest of the vector, then combine that figure with the taste differential as desired.
(2) Transform your deliciousness data such that the resulting metric is linear. This will allow a "normal" distance metric to do its job as expected.
1.2
true
1
5,642
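A minimal sketch of approach (1) above: cosine distance on the boolean features plus a one-sided penalty on deliciousness, so error in one direction is punished and error in the other is free. The penalty weight alpha is an arbitrary assumption you would tune:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity for two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def directional_distance(user_vec, item_vec, user_taste, item_taste, alpha=1.0):
    """Cosine distance on the boolean features, plus a one-sided penalty
    that only punishes items LESS delicious than the user's average."""
    taste_penalty = max(0.0, user_taste - item_taste)
    return cosine_distance(user_vec, item_vec) + alpha * taste_penalty
```

With identical feature vectors and a user average of 2.3, an item with deliciousness 4.5 now scores a smaller distance than one with 1.0, which is the behaviour the question asks for.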
2018-07-31 22:16:11.853
How do I get Mac 10.13 to install modules into a 3.x install instead of 2.7
I'm trying to learn Python practically. I installed pip via easy_install, and then I wanted to play with some mp3 files, so I installed eyed3 via pip while in the project directory. The issue is that it installed the module into Python 2.7, which comes standard with Mac. I found this out because scripts keep failing due to missing libraries like libmagic, and no matter what I do, any libraries I install go into 2.7 and thus are not found when running python3. My question is: how do I get my system to pretty much ignore the 2.7 install and use the 3.7 install which I have? I keep thinking I am doing something wrong, as heaps of tutorials breeze over this, and only one has so far mentioned that you get clashes between the versions. I really want to learn Python and would appreciate some help getting past this blockage.
Have you tried pip3 install [module-name]? Then you should be able to check which modules you've installed using pip3 freeze.
0
false
1
5,643
2018-08-01 06:16:42.720
Any way to save format when importing an excel file in Python?
I'm doing some work on the data in an Excel sheet using Python pandas. When I write and save the data, it seems that pandas only saves and cares about the raw data from the import, meaning a lot of things I really want to keep, such as cell colouring, font size, borders, etc., get lost. Does anyone know of a way to make pandas save such things? From what I've read so far, it doesn't appear to be possible. The best solution I've found so far is to use xlsxwriter to format the file in my code before exporting. This seems like a very tedious task that will involve a lot of testing to figure out how to achieve the various formats and aesthetic changes I need. I haven't found anything, but would said writer be able to preserve the sheet's formatting on import in any way? Alternatively, what would you suggest I do to solve the problem I have described?
Separate data from formatting. Have a sheet that contains only the data – that's the one you will be reading/writing to – and another that has formatting and reads the data from the first sheet.
0
false
1
5,644
2018-08-01 10:39:07.337
How backing file works in qcow2?
qcow2 is an image format for QEMU, and it's good for emulation. I know how to write data in the qcow2 format, but I don't know how backing files in qcow2 work, and I found no tutorial that explains this. Can anyone give me tips?
A backing file is an external snapshot for qcow2, and QEMU writes the COW data to the new image. For example: you have images A and B, and A is the backing file of B. When you mount B on /dev/nbd and check its data, you'll find that you can see the data of A. That's because if there's no data in a given range of B, QEMU reads the same range of A. An important note: if QEMU can't find A, you won't be able to mount B on /dev/nbd.
0.386912
false
1
5,645
2018-08-02 13:30:37.763
how to download many pdf files from google at once using python?
I want to download approximately 50 pdf files from the Internet using a python script. Can Google APIs help me anyhow?
I am going to assume that you are downloading from Google Drive. You can only download one file at a time; you can't batch-download the actual files themselves. You could look into some kind of multithreading system and download the files at the same time that way, but you may run into quota issues.
0
false
1
5,646
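If the PDFs are reachable over plain HTTP(S) rather than a Drive API, the multithreading idea from the answer can be sketched with only the standard library. The URLs here are hypothetical placeholders:

```python
import os
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse
from urllib.request import urlretrieve

def filename_from_url(url: str) -> str:
    """Derive a local file name from the URL path."""
    name = os.path.basename(urlparse(url).path)
    return name or "download.pdf"

def download_all(urls, dest_dir=".", workers=8):
    """Fetch several PDFs concurrently; a few threads is plenty for ~50 files."""
    def fetch(url):
        target = os.path.join(dest_dir, filename_from_url(url))
        urlretrieve(url, target)  # blocking download of one file
        return target
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, urls))
```

Usage would be download_all(["https://example.com/docs/a.pdf", ...], dest_dir="pdfs"), after creating the destination directory.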
2018-08-03 12:50:35.807
how to use coverage run --source = {dir_name}
I have certain files in a directory named benchmarks and I want to get code coverage by running these source files. I have tried using source flag in the following ways but it doesn't work. coverage3 run --source=benchmarks coverage3 run --source=benchmarks/ On running, I always get Nothing to do. Thanks
coverage run is like python. If you would run a file with python myprog.py, then you can use coverage run myprog.py.
1.2
true
1
5,647
2018-08-04 18:15:06.493
Discord.py get message embed
How can I get the embed of a message into a variable using the ID of the message in discord.py? I get the message with uzenet = await client.get_message(channel, id), but I don't know how to get its embed.
To get the first embed of your message (which, as you said, is a dict): embedFromMessage = uzenet.embeds[0]. To turn the dict into a discord.Embed object: embed = discord.Embed.from_data(embedFromMessage)
1.2
true
1
5,648
2018-08-04 22:59:50.310
How to use Windows credentials to connect remote desktop
In my Python script I want to connect to a remote server every time, so how can I use my Windows credentials to connect to the server without typing a user ID and password? By default it should read the user ID/password from the local system and connect to the remote server. I tried getuser() and getpass(), but I have to enter the password every time. I don't want to enter the password; it should be taken automatically from the local system. Any suggestions?
I am sorry, this is not exactly an answer, but I have looked on the web and I do not think you can write code that automatically opens Remote Desktop without you having to enter the credentials. Could you please edit the question to include your code?
0
false
1
5,649
2018-08-07 05:02:59.393
On project task created do not send email
By default, subscribers get email messages once a new task in a project is created. How can this be tailored so that the project does not send e-mails on new tasks unless a "Send e-mail on new task" checkbox is checked? I know how to add a custom field to the project.project model, but I don't know the next step: what do I override so that no email is sent when a new task is created and "Send e-mail on new task" is not checked for the project?
I found that if the project has the notification option "Visible by following customers" enabled, then one can configure the subscription for each follower. To stop receiving e-mails when a new task is added to the project, unmark the checkbox "Task opened" in the "Edit subscription of User" form.
1.2
true
1
5,650
2018-08-08 05:01:22.287
How can I pack python into my project?
I am making a program that will call Python. I would like to bundle Python with my project so users don't have to download Python in order to use it; it will also be better to use the Python that my program ships, so users don't have to download any dependencies. My program is going to be written in C++ (but could be any language), and I guess I have to call the Python that is in the same path as my project? Let's say the system where the user is running already has Python and he/she calls pip: I want the program to call the pip provided by my program's Python and install into the program directory instead of the system's Python. Is that possible? If it is, how can I do it? Real examples: there are programs that offer a terminal where you can execute Python to do things in the program, like Maya by Autodesk, Nuke by The Foundry, and Houdini by Side Effects. Note: it has to be a cross-platform solution.
In order to run Python code, the runtime is sufficient. Under Windows, you can use py2exe to pack your program code together with the Python runtime and all necessary dependencies. But pip cannot be used, and it makes no sense anyway, as you don't want to develop but only use the Python part. To distribute a complete Python installation, like Panda3D does, you'll have to include it in the chosen installer software.
0.135221
false
1
5,651
2018-08-08 06:15:54.700
Python app to organise ideas by tags
Please give me a hint about how best to code a Python application which helps to organise ideas by tags. Add a new idea: input 1 is the idea, input 2 the corresponding tags. Search for an idea: input 1 is one or multiple tags. As far as I understood, it's necessary to create an array with ideas and an array with tags. But how to connect them? For example, idea number 3 corresponds to tags number 1 and 2. So the question is: how to link these two arrays in the most simple and elegant way?
Have two dictionaries: Idea -> Set of Tags Tag -> Set of Ideas When you add a new idea, add it to the first dictionary, and then update all the sets of the tags it uses in the second dictionary. This way you get easy lookup by both tag and idea.
0
false
1
5,652
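The two-dictionary design from the answer can be sketched in a few lines. Class and method names here are arbitrary choices:

```python
from collections import defaultdict

class IdeaStore:
    """Two dictionaries, as the answer describes:
    idea -> set of tags, and tag -> set of ideas."""

    def __init__(self):
        self.tags_of = defaultdict(set)
        self.ideas_of = defaultdict(set)

    def add(self, idea, tags):
        """Store a new idea and update the reverse index for each tag."""
        self.tags_of[idea].update(tags)
        for tag in tags:
            self.ideas_of[tag].add(idea)

    def search(self, *tags):
        """Return the ideas carrying ALL of the given tags (set intersection)."""
        sets = [self.ideas_of.get(t, set()) for t in tags]
        return set.intersection(*sets) if sets else set()
```

This gives constant-time lookup by tag and by idea, at the cost of keeping both indexes in sync on every add.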
2018-08-08 13:54:35.003
Does ImageDataGenerator add more images to my dataset?
I'm trying to do image classification with the Inception V3 model. Does ImageDataGenerator from Keras create new images which are added onto my dataset? If I have 1000 images, will using this function double it to 2000 images which are used for training? Is there a way to know how many images were created and now fed into the model?
Let me try to explain in the easiest way possible with an example. Say you have a set of 500 images and you applied ImageDataGenerator to the dataset with batch_size = 25. Now you run your model for, let's say, 5 epochs with steps_per_epoch = total_samples / batch_size, so steps_per_epoch will be equal to 20. Your model will then run on all 500 images (randomly transformed according to the instructions provided to ImageDataGenerator) in each epoch.
0
false
2
5,653
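The arithmetic in the answer above can be written out directly, with the values from the example:

```python
total_samples = 500  # images in the original dataset
batch_size = 25

# ImageDataGenerator does not add images to the dataset: each epoch it yields
# the same 500 images, each randomly transformed on the fly.
steps_per_epoch = total_samples // batch_size   # batches drawn per epoch
images_seen_per_epoch = steps_per_epoch * batch_size
```

So 1000 images do not become 2000; the model simply never sees the exact same pixels twice.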
2018-08-08 13:54:35.003
Does ImageDataGenerator add more images to my dataset?
I'm trying to do image classification with the Inception V3 model. Does ImageDataGenerator from Keras create new images which are added onto my dataset? If I have 1000 images, will using this function double it to 2000 images which are used for training? Is there a way to know how many images were created and now fed into the model?
Also note that these augmented images are not stored in memory. They are generated on the fly during training and are lost afterwards; you can't read those augmented images again. Not storing them is a good idea, because we'd run out of memory very quickly storing a huge number of images.
0.116092
false
2
5,653
2018-08-09 09:03:04.903
Can I use JetBrains MPS in a web application?
I am developing a small web application with Flask. This application needs a DSL which can express the content of .pdf files. I have developed a DSL with JetBrains MPS, but now I'm not sure how to use it in my web application. Is it possible? Or should I consider switching to another DSL, or writing my DSL directly in Python?
If you want to use MPS in the web frontend, the simple answer is: no. Since MPS is a projectional editor, it needs a projection engine so that the user can interact with the program/model. The projection engine of MPS is built in Java for desktop applications. There have been some efforts to put MPS on the web and build a JavaScript/HTML projection engine, but none of that work is complete. So unless you build something like that yourself, there is no way to use MPS in the frontend. If your DSL is textual anyway and doesn't leverage the projectional nature of MPS, I would go down the text-DSL road with specialised tooling for that, e.g. Python as you suggested, or Xtext.
1.2
true
1
5,654
2018-08-09 10:03:54.250
How to solve error Expected singleton: purchase.order.line (57, 58, 59, 60, 61, 62, 63, 64)
I'm using Odoo version 9 and I've created a module to customize the reports of purchase orders. Among the fields that I want displayed in the reports is the supplier reference for the article, but when I add the code that displays this field, <span> <t t-esc="', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])"/>, it displays an error when I try to print the report: QWebException: "Expected singleton: purchase.order.line(57, 58, 59, 60, 61, 62, 63, 64)" while evaluating "', '.join([str(x.product_code) for x in o.order_line.product_id.product_tmpl_id.seller_ids])" PS: I didn't change anything in the purchase module. I don't know how to fix this problem; any ideas, please?
It is because your purchase order has several order lines, and you are expecting the order to have only one. o.order_line.product_id.product_tmpl_id.seller_ids will work only if there is exactly one order line; otherwise you have to loop through each order line. Here o.order_line holds multiple order lines, so you can't read product_id from it directly. If you try o.order_line[0].product_id.product_tmpl_id.seller_ids it will work, but you will get only the first order line's details. In order to get all the order line details, you need to loop through them.
1.2
true
1
5,655
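One way to do the loop the answer describes is in the QWeb template itself. This is an untested sketch; the field names come from the question, and the loop variable name line is arbitrary:

```xml
<t t-foreach="o.order_line" t-as="line">
    <span>
        <t t-esc="', '.join([str(x.product_code) for x in line.product_id.product_tmpl_id.seller_ids])"/>
    </span>
</t>
```

Inside the t-foreach, line is a single purchase.order.line record, so the singleton constraint is satisfied.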
2018-08-10 09:07:10.620
how to convert tensorflow .meta .data .index to .ckpt file?
As we know, when using TensorFlow to save a checkpoint, we get 3 files, e.g.: model.ckpt.data-00000-of-00001, model.ckpt.index, model.ckpt.meta. I checked the Faster R-CNN repo and found that they have an evaluation.py script which helps evaluate a pre-trained model, but the script only accepts a .ckpt file (as with the pre-trained models they provide). I have run some fine-tuning from their pre-trained model, and now I wonder if there's a way to convert the .data-00000-of-00001, .index and .meta files into one single .ckpt file so I can run the evaluate.py script on my checkpoint. (I also notice that the pre-trained models provided in the repo have only one .ckpt file; how can they do that when the save-checkpoint function generates 3 files?)
These three files (model.ckpt.data-00000-of-00001, model.ckpt.index, model.ckpt.meta) are the more recent checkpoint format, while model.ckpt is the previous checkpoint format. Converting them would be like converting a Nintendo Switch to an NES, or a 3-disc CD bundle to a single ROM cartridge. Note, though, that TensorFlow's restore functions accept the common prefix model.ckpt and locate the three files from it, so you may not need to convert at all.
0
false
1
5,656
2018-08-10 17:54:31.013
How do I write a script that configures an applications settings for me?
I need help writing a script that configures an application's (VLC's) settings to my needs without having to do it manually myself. The reason is that I will eventually need to start this application on boot with the correct settings already configured. Steps I need done in the script: 1) open the application; 2) open the "Open Network Stream…" tab (can be done with Ctrl+N); 3) type a string of characters; 4) press Enter twice. I've checked various websites across the internet and could not find any information on this. I am sure it's possible, but I am new to writing scripts and not very experienced. Are commands like the steps above possible in a script? Note: using a Linux-based OS (Raspbian). Thank you.
Make whichever changes you want manually once on an arbitrary system, then make a copy of the application's configuration files (in this case ~/.config/vlc). When you want to replicate the settings on a different machine, simply copy the settings to the same location.
1.2
true
1
5,657
2018-08-10 22:27:20.097
Python/Tkinter - Making The Background of a Textbox an Image?
Since Text(Tk(), image="somepicture.png") is not an option on text boxes, I was wondering how I could make bg= a .png image. Or any other method of allowing a text box to stay a text box, with an image in the background so it can blend into a its surroundings.
You cannot use an image as a background in a text widget. The best you can do is to create a canvas, place an image on the canvas, and then create a text item on top of that. Text items are editable, but you would have to write a lot of bindings, and you wouldn't have nearly as many features as the text widget. In short, it would be a lot of work.
1.2
true
1
5,658
2018-08-11 06:44:26.587
how to uninstall pyenv(installed by homebrew) on Mac
I used to use pyenv, installed via Homebrew, to manage versions of Python, but now I want to use Anaconda, and I don't know how to uninstall pyenv. Please tell me how.
None of these worked for me (pyenv installed under brew) on macOS Catalina; they give a warning about missing files under .pyenv. After I removed the bash_profile lines and also ran rm -rf ~/.pyenv, I just installed the macOS version of Python from python.org and it seems OK; my IDLE works now.
0.386912
false
2
5,659
2018-08-11 06:44:26.587
how to uninstall pyenv(installed by homebrew) on Mac
I used to use pyenv, installed via Homebrew, to manage versions of Python, but now I want to use Anaconda, and I don't know how to uninstall pyenv. Please tell me how.
Try removing it using the following command: brew remove pyenv
0.386912
false
2
5,659
2018-08-11 08:48:32.293
How to install pandas for sublimetext?
I cannot find a way to install pandas for Sublime Text. Do you happen to know how? There is something called the Panda theme in Package Control, but that was not what I needed; I need pandas for Python inside Sublime Text.
For me, pip install pandas was not working, so I used pip3 install pandas, which worked nicely. I would advise using either pip install pandas or pip3 install pandas; the package is installed for your Python interpreter, which Sublime Text then uses when it runs your scripts.
0
false
2
5,660
2018-08-11 08:48:32.293
How to install pandas for sublimetext?
I cannot find a way to install pandas for Sublime Text. Do you happen to know how? There is something called the Panda theme in Package Control, but that was not what I needed; I need pandas for Python inside Sublime Text.
You can install this awesome theme through Package Control:
1) Press Ctrl/Cmd+Shift+P to open the command palette.
2) Type "install package" and press Enter.
3) Search for "Panda Syntax Sublime".
Manual installation: download the latest release, extract it, and rename the directory to "Panda Syntax". Move the directory inside your Sublime Packages directory (Preferences > Browse Packages…).
Activate the theme: open your preferences (Preferences > Settings - User) and add this line: "color_scheme": "Packages/Panda Syntax Sublime/Panda/panda-syntax.tmTheme"
NOTE: Restart Sublime Text after activating the theme.
-0.201295
false
2
5,660
2018-08-11 14:25:40.620
Can I get a list of all urls on my site from the Google Analytics API?
I have a site www.domain.com and wanted to get all of the urls from my entire website and how many times they have been clicked on, from the Google Analytics API. I am especially interested in some of my external links (the ones that don't have www.mydomain.com). I will then match this against all of the links on my site (I somehow need to get these from somewhere so may scrape my own site). I am using Python and wanted to do this programmatically. Does anyone know how to do this?
"I have a site www.domain.com and wanted to get all of the urls from my entire website and how many times they have been clicked on": I guess you need the dimension Page and the metric Pageviews. "I am especially interested in some of my external links": you can get a list of external links if you track them as events. Also try using a crawler, for example Screaming Frog; it allows you to get internal and external links, and free use covers up to 500 pages.
1.2
true
1
5,661
2018-08-12 10:05:41.443
Data extraction from wrf output file
I have a WRF output netCDF file. The file has the variables temp and prec, and the dimension keys are time, south-north and west-east. So how do I select different lat/long values in a region? The problem is that south-north and west-east are not variables, so I have to find the index values for four lat/long values.
1) Change your Registry file (I think it is Registry.EM_COMMON) so that latitude and longitude are written to your wrfout_d01_time.nc files. 2) Go to your WRFV3 directory. 3) Clean, configure and recompile. 4) Run your model again the way you are used to.
0
false
1
5,662
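Once latitude and longitude are in the output (as the answer's recompile step provides), you still need to turn a target lat/long into (south_north, west_east) indices. A minimal NumPy sketch, using synthetic arrays as stand-ins for the 2-D XLAT/XLONG fields of a wrfout file:

```python
import numpy as np

def nearest_grid_index(lat2d, lon2d, target_lat, target_lon):
    """Return the (south_north, west_east) index of the grid cell whose
    lat/long is closest to the target point (squared degree distance)."""
    dist2 = (lat2d - target_lat) ** 2 + (lon2d - target_lon) ** 2
    return np.unravel_index(np.argmin(dist2), dist2.shape)

# Synthetic stand-ins for the XLAT/XLONG fields of a wrfout file.
lats = np.linspace(30.0, 40.0, 11)   # south-north direction
lons = np.linspace(70.0, 80.0, 11)   # west-east direction
lat2d, lon2d = np.meshgrid(lats, lons, indexing="ij")
```

Running this for each of the four corner points of the region gives the four index pairs needed to slice temp and prec.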
2018-08-12 19:39:13.970
Cosmic ray removal in spectra
Python developers, I am working on spectroscopy at a university. My experimental 1-D data sometimes shows "cosmic rays": 3-pixel, ultra-high-intensity spikes, which are not what I want to analyze. So I want to remove these weird peaks. Does anybody know how to fix this issue in Python 3? Thanks in advance!
The answer depends on what your data looks like:
1) If you have access to the two-dimensional CCD readouts that the one-dimensional spectra were created from, then you can use the lacosmic module to get rid of the cosmic rays there.
2) If you have only one-dimensional spectra, but multiple spectra from the same source, then a quick ad-hoc fix is to make a rough normalisation of the spectra and remove those pixels that are several times brighter than the corresponding pixels in the other spectra.
3) If you have only one one-dimensional spectrum from each source, then a less reliable option is to remove all pixels that are much brighter than their neighbours. (Depending on the shape of your cosmics, you may even want to remove the nearest 5 pixels or so, to catch the wings of the cosmic-ray peak as well.)
0
false
1
5,663
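Option 3 above ("much brighter than their neighbours") can be sketched as a median-filter despike. The window size and sigma threshold are arbitrary assumptions to tune against your data:

```python
import numpy as np

def remove_spikes(spectrum, window=5, threshold=5.0):
    """Replace pixels that sit more than `threshold` robust sigmas above
    the local median with that local median (simple despiking)."""
    spectrum = np.asarray(spectrum, dtype=float)
    cleaned = spectrum.copy()
    half = window // 2
    padded = np.pad(spectrum, half, mode="edge")
    # Local median of each pixel's neighbourhood (sliding window).
    medians = np.array(
        [np.median(padded[i:i + window]) for i in range(spectrum.size)]
    )
    resid = spectrum - medians
    sigma = 1.4826 * np.median(np.abs(resid)) + 1e-12  # MAD-based scale
    mask = resid > threshold * sigma  # one-sided: cosmics are positive spikes
    cleaned[mask] = medians[mask]
    return cleaned
```

For 3-pixel cosmics a window of 5 to 7 keeps the spike a minority inside each neighbourhood, so the local median stays near the true continuum.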
2018-08-13 21:59:31.640
PyCharm running Python file always opens a new console
I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. I would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?
To allow only one instance to run, go to "Run" in the top bar, then "Edit Configurations...". Finally, check "Single instance only" at the right side. This will run only one instance and restart every time you run.
0.067922
false
3
5,664
2018-08-13 21:59:31.640
PyCharm running Python file always opens a new console
I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. I would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?
You have the option to rerun the program. Simply open and navigate to the currently running app with Alt+4 (Windows) or ⌘+4 (Mac), and then rerun it with Ctrl+R (Windows) or ⌘+R (Mac). Another option: show the actions popup with Ctrl+Shift+A (Windows) or ⇧+⌘+A (Mac) and type "Rerun ..."; the IDE will then hint you with the desired action, and you can call it.
0
false
3
5,664
2018-08-13 21:59:31.640
PyCharm running Python file always opens a new console
I initially started learning Python in Spyder, but decided to switch to PyCharm recently, hence I'm learning PyCharm with a Spyder-like mentality. I'm interested in running a file in the Python console, but every time I rerun this file, it will run under a newly opened Python console. This can become annoying after a while, as there will be multiple Python consoles open which basically all do the same thing but with slight variations. I would prefer to just have one single Python console and run an entire file within that single console. Would anybody know how to change this? Perhaps the mindset I'm using isn't very PyCharmic?
One console is one instance of Python being run on your system. If you want to run different variations of code within the same Python kernel, you can highlight the code you want to run and then choose the run option (Alt+Shift+F10 default).
0
false
3
5,664
2018-08-14 03:28:58.627
What is Killed:9 and how to fix in macOS Terminal?
I have simple Python code for a machine learning project, and a relatively big database of spontaneous speech. I started to train my speech model and, since it's a huge database, let it work overnight. In the morning I woke up and saw a mysterious Killed: 9 line in my Terminal, and nothing else: no other error message or anything to work with. The code ran well for about 6 hours, which is 75% of the whole process, so I really don't understand what went wrong. What is Killed: 9 and how do I fix it? It's very frustrating to lose hours of computing time... I'm on the macOS Mojave beta, if it matters. Thank you in advance!
Try to change the node version. In my case, that helps.
-0.201295
false
1
5,665
2018-08-15 17:19:50.610
Identifying parameters in HTTP request
I am fairly proficient in Python and have started exploring the requests library to formulate simple HTTP requests. I have also taken a look at Sessions objects that allow me to login to a website and -using the session key- continue to interact with the website through my account. Here comes my problem: I am trying to build a simple API in Python to perform certain actions that I would be able to do via the website. However, I do not know how certain HTTP requests need to look like in order to implement them via the requests library. In general, when I know how to perform a task via the website, how can I identify: the type of HTTP request (GET or POST will suffice in my case) the URL, i.e where the resource is located on the server the body parameters that I need to specify for the request to be successful
This has nothing to do with Python, but you can use a network proxy to examine your requests:
1) Download a network proxy like Burp Suite.
2) Set up your browser to route all traffic through Burp Suite (the default is localhost:8080).
3) Deactivate packet interception (in the Proxy tab).
4) Browse to your target website normally.
5) Examine the request history in Burp Suite. You will find every piece of information you need.
1.2
true
1
5,666
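Once the proxy has shown you the method, URL, and body parameters, replaying the request from Python is mechanical. A stdlib-only sketch; the endpoint and field names here are hypothetical placeholders for whatever the proxy history shows:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical values read off the proxy's request history.
url = "https://example.com/api/login"
body = urlencode({"username": "alice", "password": "secret"}).encode()

req = Request(
    url,
    data=body,
    method="POST",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
# urllib.request.urlopen(req) would send it; here we only inspect
# what would go on the wire.
```

The requests library's Session object, which the question already mentions, takes the same three ingredients (method, URL, body) once they are identified.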
2018-08-16 03:16:46.443
Why there is binary type after writing to hive table
I read data from an Oracle database into a pandas dataframe, and some columns have the type 'object'. When I write the dataframe to a Hive table, these 'object' columns are converted to the 'binary' type. Does anyone know how to solve this problem?
When you read data from Oracle into a dataframe, the columns are created with object dtypes. You can ask the pandas dataframe to try to infer better dtypes (before saving to Hive) if it can: dataframe.infer_objects()
0
false
1
5,667
2018-08-16 04:22:51.340
What is the use of Jupyter Notebook cluster
Can you tell me what the use of a Jupyter cluster is? I created a Jupyter cluster and established its connection, but I'm still confused about how to use this cluster effectively. Thank you
With a Jupyter Notebook cluster, you can run the notebook on the cluster and connect to it from your local machine by setting the appropriate port number. Example:
1. Connect to the server using ssh username@ip_address.
2. On the remote terminal, start the notebook on a chosen port: jupyter notebook --no-browser --port=7800
3. On your local terminal, forward the port: ssh -N -f -L localhost:8001:localhost:7800 username@ip_address
4. Open a web browser on the local machine and go to http://localhost:8001/
1.2
true
1
5,668
2018-08-16 12:03:34.353
How to decompose affine matrix?
I have a series of points in two 3D systems. With them, I use np.linalg.lstsq to calculate the affine transformation matrix (4x4) between both. However, due to my project, I have to "disable" the shear in the transform. Is there a way to decompose the matrix into the base transformations? I have found out how to do so for Translation and Scaling but I don't know how to separate Rotation and Shear. If not, is there a way to calculate a transformation matrix from the points that doesn't include shear? I can only use numpy or tensorflow to solve this problem btw.
I'm not sure I understand what you're asking. Anyway, if you have two sets of 3D points P and Q, you can use the Kabsch algorithm to find a rotation matrix R and a translation vector T such that the sum of squared distances between (RP+T) and Q is minimized. You can of course combine R and T into a 4x4 matrix (of rotation and translation only, without shear or scale).
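A minimal NumPy sketch of the Kabsch step (variable names are illustrative; points are stored as columns of a 3xN array):

```python
import numpy as np

def kabsch(P, Q):
    """Find rotation R and translation T such that R @ P + T best matches Q.

    P, Q: (3, N) arrays of corresponding 3D points.
    """
    p_mean = P.mean(axis=1, keepdims=True)
    q_mean = Q.mean(axis=1, keepdims=True)
    # Cross-covariance of the centered point sets
    H = (Q - q_mean) @ (P - p_mean).T
    U, _, Vt = np.linalg.svd(H)
    # Reflection correction so det(R) == +1 (a proper rotation, no mirror)
    d = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    T = q_mean - R @ p_mean
    return R, T

# Sanity check: recover a known rotation about the z-axis plus a translation
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
P = np.array([[0., 1., 0., 0., 2.],
              [0., 0., 1., 0., 3.],
              [0., 0., 0., 1., 5.]])
Q = Rz @ P + np.array([[1.], [2.], [3.]])
R, T = kabsch(P, Q)
print(np.allclose(R, Rz), np.allclose(T, [[1.], [2.], [3.]]))  # True True
```

Since R is a pure rotation and T a translation, stacking them into a 4x4 homogeneous matrix gives a shear-free affine transform, as the answer suggests.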
1.2
true
1
5,669
2018-08-16 13:00:32.667
Jupyter notebook kernel does not want to interrupt
I was running a cell in a Jupyter Notebook for a while and decided to interrupt it. However, it still continues to run and I don't know how to get it interrupted... Thanks for the help
Sometimes this happens when you are on a GPU-accelerated machine, where the kernel is waiting for some GPU operation to finish. I noticed this even on AWS instances. The best thing you can do is just wait; in most cases it will recover and finish at some point. If it does not, at least it will tell you the kernel died after some minutes, and you don't have to copy-paste your notebook to back up your work. In rare cases, you have to kill your Python process manually.
1.2
true
1
5,670
2018-08-17 02:02:19.417
find token between two delimiters - discord emotes
I am trying to recognise Discord emotes. They are always between two : and don't contain spaces, e.g. :smile: I know how to split strings at delimiters, but how do I only extract tokens that are within exactly two : and contain no space? Thanks in advance!
Thanks to @G_M I found the following solution: regex = re.compile(r':[A-Za-z0-9]+:') followed by result = regex.findall(message.content) will give me a list of all the emotes within a message, independent of where they are in the message.
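Put together as a runnable sketch (the message text is just an example, standing in for message.content):

```python
import re

# Matches tokens wrapped in exactly two colons with no spaces inside
emote_pattern = re.compile(r":[A-Za-z0-9]+:")

message = "hello :smile: how are you :wave:"
emotes = emote_pattern.findall(message)
print(emotes)  # [':smile:', ':wave:']
```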
1.2
true
1
5,671
2018-08-17 14:49:24.567
Post file from one server to another
I have an Apache server A set up that currently hosts a webpage of a bar chart (using Chart.js). This data is currently pulled from a local SQLite database every couple seconds, and the web chart is updated. I now want to use a separate server B on a Raspberry Pi to send data to the server to be used for the chart, rather than using the database on server A. So one server sends a file to another server, which somehow realises this and accepts it and processes it. The data can either be sent and placed into the current SQLite database, or bypass the database and have the chart update directly from the Pi's sent information. I have come across HTTP Post requests, but not sure if that's what I need or quite how to implement it. I have managed to get the Pi to simply host a json file (viewable from the external ip address) and pull the data from that with a simple requests.get('ip_address/json_file') in Python, but this doesn't seem like the most robust or secure solution. Any help with what I should be using much appreciated, thanks!
Maybe I didn't quite understand your request, but this is the solution I imagined:
1. You create a frontend with WebSocket support that connects to server A.
2. Server B (the one running on the Raspberry Pi) sends a POST request with the JSON to server A.
3. Server A accepts the JSON and sends it to all clients connected via the WebSocket protocol.
Server B ----> Server A <----> Frontend
This way you do not expose your Raspberry Pi directly, and every request made by the frontend goes only to server A. To provide a better user experience you could also create a GET endpoint on server A to retrieve the latest received JSON, so that when the user loads the frontend for the first time it calls that endpoint, and even if the Raspberry Pi has yet to update the data, at least the user gets an insight into the latest available data.
0
false
1
5,672
2018-08-17 15:42:47.703
How to display a pandas Series in Python?
I have a variable target_test (for machine learning) and I'd like to display just one element of it. type(target_test) prints the following on the terminal: class 'pandas.core.series.Series'. If I do print(target_test) then the entire two vectors are displayed. But I'd like to print just the second element of the first column, for example. So do you have an idea how I could do that? I converted target_test to a frame and to an xarray, but it didn't change the error I get. When I write something like print(targets_test[0][0]) I get the following output: TypeError: 'instancemethod' object has no attribute '__getitem__'
For the first column, you can use targets_test.keys()[i], for the second one targets_test.values[i] where i is the row starting from 0.
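A small sketch with a made-up Series (the index plays the role of the "first column", the values the role of the "second"):

```python
import pandas as pd

targets_test = pd.Series([0.1, 0.7, 0.2], index=["a", "b", "c"])

# Index (first "column") entry at row i
print(targets_test.keys()[1])    # b
# Value (second "column") entry at row i
print(targets_test.values[1])    # 0.7

# targets_test[0][0] fails because a Series is one-dimensional;
# positional access goes through .iloc instead
print(targets_test.iloc[1])      # 0.7
```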
1.2
true
1
5,673
2018-08-18 22:38:40.803
django-storages boto3 accessing file url of a private file
I'm trying to get the generated URL of a file in a test model I've created. modelobject.file.url does give me the correct URL if the file is public; however, if the file is private it does not automatically generate a signed URL for me. How is this normally done with django-storages? Is the API supposed to automatically generate a signed URL for private files? I am currently getting the expected Access Denied page for unsigned URLs, and I need to get the signed (expiring) link to the file. Thanks in advance
I've figured out what I needed to do: in the private storage class, I had forgotten to set custom_domain = False. I originally left this line off because I did not think I needed it; however, you absolutely need it in order to generate signed URLs automatically.
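For reference, a sketch of such a storage class (a config fragment assuming django-storages with the S3/boto3 backend; the class name and the default_acl line are illustrative, only custom_domain = False comes from the answer):

```python
# settings-level sketch, assuming django-storages' S3Boto3Storage backend
from storages.backends.s3boto3 import S3Boto3Storage

class PrivateMediaStorage(S3Boto3Storage):
    default_acl = "private"
    custom_domain = False  # serve via query-string-signed URLs, not a plain domain
```

With a custom domain configured, URLs are plain links to that domain; disabling it makes the backend fall back to signed, expiring S3 URLs.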
0.999988
false
1
5,674
2018-08-19 22:55:22.463
Django - DRF (django-rest-framework-social-oauth2) and React creating a user
I'm using the DRF and ReactJS and I am trying to login with Patreon using django-rest-framework-social-oauth2. In React, I send a request to the back-end auth/login/patreon/ and I reach the Patreon OAuth screen where I say I want to login with PAtreon. Patreon then returns with a request to the back-end at accounts/profile. At this point a python-social-oauth user has also been created. At this point I'm confused. How do I make a request to Patreon to login, create a user in the back-end, and return the session information to the react front-end so that I can include the session information in all following requests from the front-end? I don't want the returned request to be at the backend/accounts/profile, do I? Update I now realize I can set the redirect url with LOGIN_REDIRECT_URL but still, how do I now retrieve the session id, pass it to the front-end, and include it with all requests?
Once you receive the user's profile email, unique id, and other details from Patreon, create a user at the database level. After creating the user at the database level, you have to log the user in, using Django's login function or any other login mechanism, before redirecting the user to the frontend with a session. The redirect URL for the home/landing page is provided by the frontend side, where they want to land the user after a successful login, with the session id set in cookies. From then on, the frontend can use the session id in cookies for other requests. Here is the flow: React JS -> auth/login/patreon/ -> redirected to Patreon -> redirected back to the backend with user information -> create user (DB level) -> log in user -> redirect back to frontend (React JS on a specific URL provided by the frontend)
0
false
1
5,675
2018-08-20 15:34:25.450
In Visual Studio Code, how do I load my python code to a read-print-eval loop?
I am teaching a class that uses VSCode. I am used to teaching using IDLE, and it is very nice for the students to be able to call their defined functions and run snippets of code in a Python terminal, for debugging purposes. In VSCode, I have been unable to do the same in a satisfactory way. Option 1: I can select all code, right click and run selection/line in terminal. This works for small snippets, but I cannot do it for the whole file (even after selecting the whole file with Ctrl-A). On Linux this works, but on Windows it does not, unfortunately (and my students use Windows). Option 2: I can use the debug console. This requires adding a breakpoint in one of the last lines of the file, and does not offer tab completion. It works, but is less convenient than IDLE. Option 3: I can also add the commands to run to the bottom of the file (which is the least preferred alternative, given that it forgoes the interactivity of the read-eval-print loop). Is there any better solution? Installing a VSCode extension would not be a problem.
VSCode is just a text editor, like your traditional Notepad. To run and debug any kind of program you need to install the particular extension for that programming language. In your case you are using Python, so you need to install the extension for it; the best one is "Python", which is developed by Microsoft itself. Go to your extensions manager and install this extension, then right click and choose "Run Python File in Terminal" and you are all set. This will run exactly as it runs from IDLE (the default IDE provided by Python itself), and you can enter arguments from the console itself. In my view this is the best way to run and debug Python programs in VSCode. Another option: VSCode shows which Python version is installed on your computer at the bottom left; click on it and your programs will use that interpreter. Out of all the ways listed here and many others, the simplest is to run the program in the terminal, which is recommended by Python itself and many other programmers: open up your command prompt, type the path where python.exe is installed, then type the path of your program as the argument and press enter. You are done! Example: C:\Python27\python.exe C:\Users\Username\Desktop\my_python_script.py You can also pass the arguments of your program in the command prompt itself. If you do not want to type all this, just use the extension-based solution mentioned above. Hope that your query is solved. Regards
0.995055
false
1
5,676
2018-08-20 22:16:15.047
Maximum files size for Pyspark RDD
I’m practicing Pyspark (standalone) in the Pyspark shell at work and it’s pretty new to me. Is there a rule of thumb regarding max file size and the RAM (or any other spec) on my machine? What about when using a cluster? The file I’m practicing with is about 1200 lines. But I’m curious to know how large of a file size can be read into an RDD in regards to machine specifications or cluster specifications.
There is no hard limit on the data size you can process; however, when your RDD (Resilient Distributed Dataset) size exceeds the size of your RAM, the data will be moved to disk. Even after the data is moved to disk, Spark remains equally capable of processing it. For example, if your data is 12GB and available memory is 8GB, Spark will spill the leftover data to disk and take care of all transformations/actions seamlessly. Having said that, you can process data roughly up to the size of your disk. There is of course a size limitation on a single block/partition, which is 2GB; in other words, the maximum size of a block will not exceed 2GB.
1.2
true
1
5,677
2018-08-22 12:17:01.487
Abaqus: parametric geometry/assembly in Inputfile or Python script?
I want to do something like a parametric study in Abaqus, where the parameter I am changing is part of the assembly/geometry. Imagine the following: a cube is hanging on 8 ropes. Each two of the 8 ropes line up in one corner of a room. The other ends of the ropes merge with the room diagonal of the cube. It's something like a cable-driven parallel robot/rope robot. Now, I want to calculate the forces in the ropes in different positions of the cube, while only 7 of the 8 ropes are actually used. That means I have 8 simulations for each position of my cube. I wrote a MATLAB script to generate the nodes and wires of the cube in different positions and angles of rotation so I can copy them into an input file for Abaqus. Since I'm new to Abaqus scripting etc., I wonder which is the best way to make this work. Would you generate 8 input files for one position of the cube and calculate them manually, or is there a way to let Abaqus somehow iterate over different assemblies? I guess I should write a Python script, but I don't know how to make the ropes the parameter that is changing. Any help is appreciated! Thanks, Tobi
In case someone is interested, I was able to do it the following way: I created a model in Abaqus up to the point where I could have started the job. Then I took the .jnl file (which is created automatically by Abaqus) and saved it as a .py file. I modified this script by defining every single point as a variable and every wire for the parts as tuples consisting of those variables. Then I made for loops and, for each of the 9 cases, unique wire definitions, which I called during the loop. During the loop the constraints were also changed and the jobs were started. I also made a field output request for the end nodes of the ropes (representing motors) for their coordinates and reaction forces (the same nodes carry the pinned boundary condition). Then I saved the field output in a simple txt file which I was able to analyse via MATLAB. Finally I wrote a MATLAB script which created the points, attached them to the Python script, copied it to a unique directory and even started the job. This way, I was able to do geometric parametric studies in Abaqus using MATLAB and Python. Code will be uploaded soon.
1.2
true
1
5,678
2018-08-22 12:57:46.077
Pandas DataFrame Display in Jupyter Notebook
I want to make my display tables bigger so users can see them better when they are used in conjunction with Jupyter RISE (slide shows). How do I do that? I don't need to show more columns; rather, I want the table to fill up the whole width of the Jupyter RISE slide. Any idea on how to do that? Thanks
If df is a pandas.DataFrame object. You can do: df.style.set_properties(**{'max-width': '200px', 'font-size': '15pt'})
0
false
1
5,679
2018-08-22 13:38:01.097
Will making a Django website public on github let others get the data in its database ? If so how to prevent it?
I have a locally made Django website and I hosted it on Heroku; at the same time I push changes to another GitHub repo. I am using the built-in database to store data. Will other users be able to get the data that has been entered in the database (like user details) from my repo? If so, how do I prevent that from happening? Solutions like adding files to .gitignore will also prevent pushing to Heroku.
The code itself wouldn't be enough to get access to the database. For that you need the db name and password, which shouldn't be in your git repo at all. On Heroku you use environment variables (which are set automatically by the postgres add-on) along with the dj_database_url library, which turns them into the relevant values in the Django DATABASES setting.
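The idea behind dj_database_url can be sketched with just the standard library (the URL below is a made-up example, not real credentials; on Heroku the postgres add-on sets DATABASE_URL for you, so nothing secret lives in the repo):

```python
import os
from urllib.parse import urlparse

# Made-up example value; in a real deployment the platform sets this variable
os.environ["DATABASE_URL"] = "postgres://user:s3cret@db.example.com:5432/appdb"

url = urlparse(os.environ["DATABASE_URL"])

# Roughly what dj_database_url produces for Django's DATABASES["default"]
default_db = {
    "ENGINE": "django.db.backends.postgresql",
    "NAME": url.path.lstrip("/"),
    "USER": url.username,
    "PASSWORD": url.password,
    "HOST": url.hostname,
    "PORT": url.port,
}
print(default_db["NAME"], default_db["HOST"])  # appdb db.example.com
```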
0
false
1
5,680
2018-08-22 15:24:11.663
Uploading an image to S3 and manipulating with Python in Lambda - best practice
I'm building my first web application and I've got a question around process and best practice; I'm hoping the expertise on this website might give me a bit of direction. Essentially, all the MVP is doing is writing an overlay onto an image and presenting it back to the user, as follows:
1. User uploads picture via web form (into AWS S3) - to do
2. Python script executes (in Lambda) and creates the image overlay, saves the new image back into S3 - complete
3. User is presented with the new image to download - to do
I've been running this locally as sort of a proof of concept and was planning on linking up with S3 today, but then suddenly realised: what happens when there are two concurrent users and two images being uploaded with different filenames, with two separate Lambda functions working? The only solution I could think of is having the image renamed upon upload with a record inserted into an RDS, then the Lambda function runs upon record insertion against the new image. That would resolve half of it, but then how would I get the correct image relayed back to the user? I'll be clear: I have next to no experience in web development. I want the front end to be as dumb as possible and run everything in Python (I'm a data scientist; I can write Python for data analysis but have no experience as a software dev!)
You don't really need an RDS, just invoke your Lambda synchronously from the browser. So:
1. Upload the file to S3, using a randomized file name.
2. Invoke your Lambda synchronously, passing it the file name.
3. Have your Lambda read the file, convert it, and respond with either the file itself (binary responses aren't trivial), or a path to the converted file in S3.
0
false
1
5,681
2018-08-23 12:03:16.460
How to install twilio via pip
How do I install twilio via pip? I tried to install the twilio Python module but couldn't; I get the following error: No module named twilio. When trying pip install twilio I get the following error: pyopenssl 18.0.0 has requirement six>=1.5.2, but you'll have six 1.4.1 which is incompatible. Cannot uninstall 'pyOpenSSL'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. I found a suggestion and ran pip install --ignore-installed twilio, but then I get the following error: Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/Library/Python/2.7/site-packages/pytz-2018.5.dist-info' Consider using the `--user` option or check the permissions. I have Anaconda installed; is this a problem?
Step 1: Download python-2.7.15.msi.
Step 2: Install it; if your system does not have Python added to your PATH, choose "add python.exe to path" while installing.
Step 3: Go to C:\Python27\Scripts on your system.
Step 4: In the command prompt run: C:\Python27\Scripts>pip install twilio
Step 5: After installation is done, in the Python command line run import twilio and then print(twilio.__version__)
Step 6: If you get the version, you are done.
-0.201295
false
1
5,682
2018-08-23 14:53:44.523
How to retrieve objects from a SoftLayer saved quote using the Python API
I'm trying to retrieve the objects/items (server name, host name, domain name, location, etc.) that are stored under a saved quote for a particular SoftLayer account. Can someone help with how to retrieve the objects within a quote? I could find a REST API (Python) to retrieve quote details (quote ID, status, etc.) but couldn't find a way to fetch the objects within a quote. Thanks! Best regards, Khelan Patel
Thanks Albert getRecalculatedOrderContainer is the thing I was looking for.
0
false
1
5,683
2018-08-23 23:45:21.277
Can I debug Flask applications in IntelliJ?
I know how to debug a Flask application in PyCharm. The question is whether this is also possible in IntelliJ. I have my Flask application debugging in PyCharm, but one thing I could do in IntelliJ was evaluate expressions inline by pressing Alt + left mouse click. This isn't available in PyCharm, so I wanted to run my Flask application in IntelliJ, but there isn't a Flask template. Is it possible to add a Flask template to the Run/Debug configuration? I tried looking for a plugin but couldn't find that either.
Yes, you can. Just set up the proper parameters for the run script in the PyCharm IDE. After that you can debug it as a usual Python script. In PyCharm you can evaluate any line in debug mode too.
0
false
1
5,684
2018-08-24 14:36:02.090
how to add the overall "precision" and "recall" metrics to "tensorboard" log file, after training is finished?
After training is finished and I have done prediction on my network, I want to calculate the "precision" and "recall" of my model, and then send them to the TensorBoard log file so they show up in the plots. While training, I pass the TensorBoard function as a callback to Keras, but after training is finished I don't know how to add more data to TensorBoard to be plotted. I use Keras for coding and TensorFlow as its backend.
I believe that you've already done that work: it's the same process as the validation (prediction and check) step you do after training. You simply tally the results of the four categories (true/false pos/neg) and plug those counts into the equations (ratios) for precision and recall.
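The tallying itself is a few lines of plain Python (the labels below are made up for illustration); the resulting scalars can then be written to the same log directory with a summary writer so they appear alongside the training curves:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example labels (made up): 3 true positives, 1 false positive, 1 false negative
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
p, r = precision_recall(y_true, y_pred)
print(p, r)  # 0.75 0.75
```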
0
false
1
5,685
2018-08-27 21:20:09.317
Convolutional neural network architectures with an arbitrary number of input channels (more than RGB)
I am very new to image recognition with CNNs and currently using several standard (pre-trained) architectures available within Keras (VGG and ResNet) for image classification tasks. I am wondering how one can generalise the number of input channels to more than 3 (instead of standard RGB). For example, I have an image which was taken through 5 different (optic) filters and I am thinking about passing these 5 images to the network. So, conceptually, I need to pass as an input (Height, Width, Depth) = (28, 28, 5), where 28x28 is the image size and 5 - the number of channels. Any easy way to do it with ResNet or VGG please?
If you retrain the models, that's not a problem. Only if you want to use a trained model, you have to keep the input the same.
1.2
true
1
5,686
2018-08-28 02:21:26.060
How to use Docker AND Conda in PyCharm
I want to run python in PyCharm by using a Docker image, but also with a Conda environment that is set up in the Docker image. I've been able to set up Docker and (locally) set up Conda in PyCharm independently, but I'm stumped as to how to make all three work together. The problem comes when I try to create a new project interpreter for the Conda environment inside the Docker image. When I try to enter the python interpreter path, it throws an error saying that the directory/path doesn't exist. In short, the question is the same as the title: how can I set up PyCharm to run on a Conda environment inside a Docker image?
I'm not sure if this is the most eloquent solution, but I do have a solution to this now!
1. Start up a container from your base image and attach to it.
2. Install the Conda env yaml file inside the Docker container.
3. From outside the Docker container (i.e. a new terminal window), commit the existing container (and its changes) to a new image: docker commit SOURCE_CONTAINER NEW_IMAGE (see docker commit --help for more options here).
4. Run the new image and start a container for it.
5. From PyCharm, in Preferences, go to Project > Project Interpreter.
6. Add a new Docker project interpreter, choosing your new image as the image name, and set the path to wherever you installed your Conda environment on the Docker image (ex: /usr/local/conda3/envs/my_env/bin/python).
And just like that, you're good to go!
1.2
true
1
5,687
2018-08-28 13:52:18.727
how to detect upside down face?
I would like to detect upright and upside-down faces; however, faces weren't recognized in upside-down images. I used the dlib library in Python with shape_predictor_68_face_landmarks.dat. Is there a library that can recognize upright and upside-down faces?
You could use the same library to detect upside-down faces. If the library is unable to detect the face initially, rotate the image 180° and check again. If it is recognized in this condition, you know it was an upside-down face.
1.2
true
1
5,688
2018-08-29 10:28:25.480
How to have cfiles in python code
I'm using the Geany IDE and I've written Python code that makes a GUI. I'm new to Python and I'm better with C. I've done research on the web, and it's too complicated because there's so much jargon involved. Behind each button I want C to be the backbone (so C executes when the button is clicked). So, how can I make a C file and link it to my code?
I too had a question like this, and I found a website that described how to do it step by step, but I can't seem to find it anymore. If you think about it, all these 'import' files are just code that's been made separately, and that's why you import them. So, in order to import your C file, do the following:
1. Create the file you want to write in C (e.g. bloop.c).
2. Open the terminal and, assuming you saved your file to the desktop, type cd Desktop. If you put it somewhere else, cd into that directory instead.
3. Compile it as a shared library: gcc -shared -Wl,-soname,adder -o adder.so -fPIC bloop.c
4. In your Python code, at the very top, type import ctypes or from ctypes import * to bring in the ctypes library.
5. Below that, type adder = CDLL('./adder.so').
6. To call a function from the library and keep its result, write something like ctest = adder.main(). Likewise, if you have a function in your C code called beans, you can call it as adder.beans().
1.2
true
1
5,689
2018-08-29 13:57:38.713
Cannot update svg file(s) for saleor framework + python + django
I would like to know how I should go about changing the static files used by the Saleor framework. I've tried to change logo.svg but failed to do so. I'm still learning Python programming while using the Saleor framework for e-commerce. Thank you.
Here is how it should be done: put your logo in the saleor/static/images folder, then change the reference to it in the base.html file, in the footer and navbar sections.
1.2
true
1
5,690
2018-08-29 20:22:17.757
Determining "SystemFaceButton" RBG Value At RunTime
I am using tkinter and PIL to make a basic photo viewer (mostly for learning purposes). I have the bg color of all of my widgets set to the default, which is "systemfacebutton", whatever that means. I am using the PIL.Image module to view and rotate my images. When an image is rotated you have to choose a fill color for the area behind the image. I want this fill color to be the same as the default system color, but I have no idea how to get the RGB value or a supported color name for it. It has to be calculated by Python at run time so that it is consistent on anyone's OS. Does anyone know how I can do this?
You can use w.winfo_rgb("systembuttonface") to turn any color name to a tuple of R, G, B. (w is any Tkinter widget, the root window perhaps. Note that you had the color name scrambled.) The values returned are 16-bit for some unknown reason, you'll likely need to shift them right by 8 bits to get the 0-255 values commonly used for specifying colors.
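The 16-bit-to-8-bit shift mentioned above as a tiny helper (the winfo_rgb call itself needs a running Tk display, so only the conversion is sketched here; the sample input value is illustrative):

```python
def rgb16_to_rgb8(rgb):
    """Convert tkinter's 16-bit-per-channel (R, G, B) tuple to the usual 0-255 range."""
    return tuple(value >> 8 for value in rgb)

# w.winfo_rgb("systembuttonface") might return e.g. (61680, 61680, 61680),
# i.e. 0xF0F0 per channel, which maps to the familiar (240, 240, 240)
print(rgb16_to_rgb8((61680, 61680, 61680)))  # (240, 240, 240)
```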
1.2
true
1
5,691
2018-08-30 01:29:02.027
In tf.layers.conv2d, with use_bias=True, are the biases tied or untied?
One more question: if they are tied biases, how can I implement untied biases? I am using TensorFlow 1.10.0 in Python.
Tied biases are used in tf.layers.conv2d. If you want untied biases, just turn off use_bias and create the bias variable manually with tf.Variable or tf.get_variable, with the same shape as the following feature map, and finally sum them up.
1.2
true
1
5,692
2018-08-30 19:43:08.963
Reading all the image files in a folder in Django
I am trying to create a picture slideshow which will show all the png and jpg files of a folder using Django. The problem is: how do I open Windows Explorer through Django and prompt the user to choose a folder to load images from? Once this is done, how do I read all the image files from this folder? Can I store all the image files from this folder in a list and pass this list to the template through the context?
This link https://github.com/csev/dj4e-samples/tree/master/pics shows how to store data in the database (SQLite is the database used here) using Django forms. But you cannot upload an entire folder at once, so you have to create a one-to-many model between display_id (this is just a field name in the models; you can name it anything you want) and pics. Now you can individually upload all pics in the folder to the same display_id and access all of them using this display_id. Also make sure to pass content_type for jpg and png separately while retrieving the pics.
0
false
1
5,693
2018-08-31 00:05:09.460
How can I get SMS verification code in my Python program?
I'm writing a Python script to do some web automation stuff. In order to log in the website, I have to give it my phone number and the website will send out an SMS verification code. Is there a way to get this code so that I can use it in my Python program? Right now what I can think of is that I can write an Android APP and it will be triggered once there are new SMS and it will get the code and invoke an API so that the code will be stored somewhere. Then I can grab the stored code from within my Python program. This is doable but a little bit hard for me as I don't know how to develop a mobile APP. I want to know is there any other methods so that I can get this code? Thanks. BTW, I have to use my own phone number and can't use other phone to receive the verification code. So it may not possible to use some services.
Answer my own question. I use IFTTT to forward the message to Slack and use Slack API to access the message.
0
false
1
5,694
2018-08-31 16:13:31.870
How to list available policies for an assumed AWS IAM role
I am using Python and boto to assume an AWS IAM role. I want to see what policies are attached to the role so I can loop through them and determine what actions are available to the role. I want to do this so I can know whether some actions are available, instead of calling them and checking whether I get an error. However, I cannot find a way to list the policies for the role after assuming it, as the role is not authorised to perform IAM actions. Does anyone know how this is done, or is this perhaps something I should not be doing?
To obtain policies, your AWS credentials require permissions to retrieve the policies. If such permissions are not associated with the assumed role, you could use another set of credentials to retrieve the permissions (but those credentials would need appropriate IAM permissions). There is no way to ask "What policies do I have?" without having the necessary permissions. This is an intentional part of AWS security because seeing policies can reveal some security information (eg "Oh, why am I specifically denied access to the Top-Secret-XYZ S3 bucket?").
0.386912
false
1
5,695
2018-08-31 19:23:27.853
Creating "zero state" migration for existing db with sqlalchemy/alembic and "faking" zero migration for that existing db
I want to add Alembic to an existing project that uses SQLAlchemy, with a working production db. I can't find the standard way to create a "zero" migration, i.e. the migration setting up the db as it is now (for new developers setting up their environment). Currently I've imported the declarative base class and all the models using it into env.py, but the first time alembic -c alembic.dev.ini revision --autogenerate does create the existing tables. And I need to "fake" the migration on existing installations, using code. For the Django ORM I know how to make this work, but I can't find the right way to do this with SQLAlchemy/Alembic.
alembic revision --autogenerate inspects the state of the connected database and the state of the target metadata and then creates a migration that brings the database in line with metadata. If you are introducing alembic/sqlalchemy to an existing database, and you want a migration file that given an empty, fresh database would reproduce the current state- follow these steps. Ensure that your metadata is truly in line with your current database(i.e. ensure that running alembic revision --autogenerate creates a migration with zero operations). Create a new temp_db that is empty and point your sqlalchemy.url in alembic.ini to this new temp_db. Run alembic revision --autogenerate. This will create your desired bulk migration that brings a fresh db in line with the current one. Remove temp_db and re-point sqlalchemy.url to your existing database. Run alembic stamp head. This tells sqlalchemy that the current migration represents the state of the database- so next time you run alembic upgrade head it will begin from this migration.
1
false
1
5,696
2018-09-02 16:24:08.867
Django send progress back to client before request has ended
I am working on an application in Django where there is a feature which lets the user share a download link to a public file. The server downloads the file and processes the information within. This can be a time-taking task, therefore I want to send periodic feedback to the user before the operation has completed. For instance, I would like to inform the user that the file has downloaded successfully, or that some information was missing from one of the records, etc. I was thinking that after the client app has sent the upload request, I could get the client app to periodically ask the server about the status. But I don't know how I can track the progress of a different request. How can I implement this?
First, the task's progress information can be saved in a relational DB or Redis. When the user submits the request to start the task, execute the task in a background context and return the task's id. The background task saves its progress info to the store you selected. The client app then gets the progress info by the task id the backend returned; the backend reads the progress info from the store and puts it in the response. The polling interval of the requests can be defined by yourself.
0
false
1
5,697
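The pattern in this answer can be sketched without any framework specifics. Below is a minimal in-memory version; PROGRESS, start_task and poll are all illustrative names (not Django or Redis APIs), and in production the store would be Redis or a DB row as described above:

```python
import threading
import time
import uuid

# In-memory progress store keyed by task id; a real app would use Redis
# or a database row instead, as the answer suggests.
PROGRESS = {}

def start_task(work_items):
    """Start a background job and return its id for later polling."""
    task_id = str(uuid.uuid4())
    PROGRESS[task_id] = {'done': 0, 'total': len(work_items), 'status': 'running'}

    def run():
        for i, _item in enumerate(work_items, start=1):
            time.sleep(0.01)                  # stand-in for real processing
            PROGRESS[task_id]['done'] = i     # periodic progress update
        PROGRESS[task_id]['status'] = 'finished'

    threading.Thread(target=run, daemon=True).start()
    return task_id

def poll(task_id):
    """What a 'status' view would return to the client on each poll."""
    return PROGRESS[task_id]
```

The client keeps the returned id and calls the status endpoint (here: poll) until the status is 'finished'.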
2018-09-03 02:29:05.750
Numpy array size different when saved to disk (compared to nbytes)
Is it possible that a flat numpy 1d array's size (nbytes) is 16568 (~16.5 KB) but, when saved to disk, it has a size of over 2 MB? I am saving the array using numpy's numpy.save method. The dtype of the array is 'O' (object). Also, how do I save that flat array to disk such that I get approximately the same size as nbytes when saved on disk? Thanks
For others references, From numpy documentation: numpy.ndarray.nbytes attribute ndarray.nbytes Total bytes consumed by the elements of the array. Notes Does not include memory consumed by non-element attributes of the array object. So, the nbytes just considers elements of the array.
0
false
1
5,698
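A short illustration of why this happens (assuming numpy is installed): for an object array, nbytes counts only the per-element pointers, while np.save has to pickle every referenced Python object; converting to a native dtype brings the on-disk size back in line with nbytes.

```python
import os
import tempfile

import numpy as np

# An object array stores only pointers, so nbytes stays small...
arr = np.array([('row-%04d' % i) * 5 for i in range(100)], dtype=object)
print(arr.nbytes)  # 100 pointers, e.g. 800 bytes on a 64-bit build

# ...but np.save must pickle every referenced object, so the file is bigger.
obj_path = os.path.join(tempfile.mkdtemp(), 'arr.npy')
np.save(obj_path, arr, allow_pickle=True)
obj_size = os.path.getsize(obj_path)

# A native dtype is stored raw: file size = nbytes + a small (~128 B) header.
nums = np.arange(100, dtype=np.int64)
num_path = os.path.join(tempfile.mkdtemp(), 'nums.npy')
np.save(num_path, nums)
num_size = os.path.getsize(num_path)
```

So to get a file close to nbytes, convert the data to a native dtype (fixed-width strings, int64, float64, ...) before saving, if the contents allow it.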
2018-09-05 10:27:35.310
Regex to match all lowercase character except some words
I would like to write a RE to match all lowercase characters and words (special characters and symbols should not match), so like [a-z]+ EXCEPT the two words true and false. I'm going to use it with Python. I've written (?!true|false\b)\b[a-z]+, it works but it does not recognise lowercase characters following an uppercase one (e.g. with "This" it doesn't match "his"). I don't know how to include also this kind of match. For instance: true & G(asymbol) & false should match only asymbol true & G(asymbol) & anothersymbol should match only [asymbol, anothersymbol] asymbolUbsymbol | false should match only [asymbol, bsymbol] Thanks
I would create two regexes (you want to mix word-boundary matching with optionally splitting words apart, which, AFAIK, is not straightforward to mix; you would have to re-phrase your regex either without word boundaries or without splitting): first regex: [a-z]+ second regex: \b(?!true|false)[a-z]+
0
false
1
5,699
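Equivalently, you can run the plain [a-z]+ pass first (so lowercase runs right after an uppercase letter are still found) and filter out the reserved words afterwards. A small sketch, using the examples from the question:

```python
import re

def lowercase_symbols(text, reserved=('true', 'false')):
    # Pass 1: every lowercase run, including one right after an uppercase
    # letter ("This" yields "his") -- no word boundaries involved.
    candidates = re.findall(r'[a-z]+', text)
    # Pass 2: drop the reserved words.
    return [w for w in candidates if w not in reserved]

print(lowercase_symbols('true & G(asymbol) & anothersymbol'))
# ['asymbol', 'anothersymbol']
print(lowercase_symbols('asymbolUbsymbol | false'))
# ['asymbol', 'bsymbol']
```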
2018-09-06 08:27:52.960
How to use double as the default type for floating numbers in PyTorch
I want all the floating numbers in my PyTorch code double type by default, how can I do that?
You should use torch.set_default_dtype for that. It is true that using torch.set_default_tensor_type will have a similar effect, but torch.set_default_tensor_type not only sets the default data type: it also sets the default device where the tensor is allocated, and the default layout of the tensor.
0.386912
false
1
5,700
2018-09-06 20:34:52.430
how to change directory in Jupyter Notebook with Special characters?
When I created the directory under the Python env, its name contains single quotes (D:\'Test Directory'). How do I change to this directory in a Jupyter notebook?
I was able to change the directory by escaping the backslash and using a double-quoted string (the folder name itself contains literal single quotes): os.chdir("C:\\'Test Directory'")
0
false
1
5,701
2018-09-08 02:24:30.413
Graph traversal, maybe another type of mathematics?
Let’s say you have a set/list/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let’s say for reasons that are not important, you run them through a function and receive the following pairs: (1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded as they are all in pairs, but they each repeat, therefore it would be 2 (which 2 do not matter). At first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks!
In case anyone cares in the future, the solution is called a blossom algorithm.
0
false
2
5,702
2018-09-08 02:24:30.413
Graph traversal, maybe another type of mathematics?
Let’s say you have a set/list/collection of numbers: [1,3,7,13,21,19] (the order does not matter). Let’s say for reasons that are not important, you run them through a function and receive the following pairs: (1, 13), (1, 19), (1, 21), (3,19), (7, 3), (7,13), (7,19), (21, 13), (21,19). Again order does not matter. My question involves the next part: how do I find out the minimum amount of numbers that can be part of a pair without being repeated? For this particular sequence it is all six. For [1,4,2] the pairs are (1,4), (1,2), (2,4). In this case any one of the numbers could be excluded as they are all in pairs, but they each repeat, therefore it would be 2 (which 2 do not matter). At first glance this seems like a graph traversal problem - the numbers are nodes, the pairs edges. Is there some part of mathematics that deals with this? I have no problem writing up a traversal algorithm, I was just wondering if there was a solution with a lower time complexity. Thanks!
If you really intended to find the minimum amount, the answer is 0, because you don't have to use any number at all. I guess you meant to write "maximal amount of numbers". If I understand your problem correctly, it sounds like we can translate it to the following problem: Given a set of n numbers (1,..,n), what is the maximal amount of numbers I can use to divide the set into pairs, where each number can appear only once. The answer to this question is: when n = 2k, f(n) = 2k for k>=0; when n = 2k+1, f(n) = 2k for k>=0. I'll explain, using induction. If n = 0 then we can use at most 0 numbers to create pairs. If n = 2 (the set can be [1,2]) then we can use both numbers to create one pair (1,2). Assumption: if n = 2k, let's assume we can use all 2k numbers to create k pairs, and prove using induction that we can use 2k+2 numbers for n = 2k+2. Proof: if n = 2k+2, [1,2,..,k,..,2k,2k+1,2k+2], we can create k pairs using 2k numbers (from our assumption). Without loss of generality, let's assume our pairs are (1,2),(3,4),..,(2k-1,2k). We can see that we still have two numbers [2k+1, 2k+2] that we didn't use, and therefore we can create a pair out of the two of them, which means that we used 2k+2 numbers. You can prove on your own the case when n is odd.
0
false
2
5,702
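For an edge list as small as the one in the question, you can sanity-check what a blossom (maximum matching) implementation would return by brute force. A self-contained sketch (the function name is made up; this is exponential and for demonstration only):

```python
from itertools import combinations

def max_matching_bruteforce(edges):
    """Largest set of pairwise-disjoint edges (exponential; demo only)."""
    for r in range(len(edges), 0, -1):
        for subset in combinations(edges, r):
            used = [v for edge in subset for v in edge]
            if len(used) == len(set(used)):   # no vertex repeated
                return list(subset)
    return []

pairs = [(1, 13), (1, 19), (1, 21), (3, 19), (7, 3), (7, 13),
         (7, 19), (21, 13), (21, 19)]
matching = max_matching_bruteforce(pairs)
print(len(matching) * 2)  # 6 -> all six numbers can be paired without repeats
```

For real-sized inputs you would use a proper blossom implementation instead, e.g. networkx's max_weight_matching.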
2018-09-08 12:37:37.387
error in missingno module import in Jupyter Notebook
Getting an error on the missingno module import in Jupyter Notebook. It works fine in IDLE, but shows "No missingno module exists" in Jupyter Notebook. Can anybody tell me how to resolve this?
Installing missingno through anaconda solved the problem for me
0.545705
false
2
5,703
2018-09-08 12:37:37.387
error in missingno module import in Jupyter Notebook
Getting error in missingno module import in Jupyter Notebook . It works fine in IDLE . But showing "No missingno module exist" in Jupyter Notebook . Can anybody tell me how to resolve this ?
This command helped me: conda install -c conda-forge/label/gcc7 missingno You have to make sure that you run Anaconda prompt as Administrator.
0.386912
false
2
5,703
2018-09-08 18:25:29.300
Lazy loading with python and flask
I’ve built a web-based data dashboard that shows 4 graphs, each containing a large number of data points. When the URL endpoint is visited, Flask calls my Python script that grabs the data from a SQL server, manipulates it and finally outputs the bokeh graphs. However, as these graphs get larger / there become more graphs on the screen, the website takes long to load, since the entire function has to run before anything is displayed. How would I go about lazy loading these? I.e. it loads the first (most important) graph and displays it while running the function for the other graphs, showing them as and when they finish running (showing a sort of loading bar where each of the graphs are, or something). Would love some advice on how to implement this or similar. Thanks!
I had the same problem as you. The problem with any kind of Flask render is that all data is processed and passed to the page (i.e. client) simultaneously, often at large time cost. Not only that, but the server web process is quite heavily loaded. The solution I was forced to implement, as the comment suggested, was to load the page with blank charts and then, upon mounting them, access a Flask API (via JS ajax) that returns chart JSON data to the client. This permits lazy loading of charts, as well as allowing the data manipulation to possibly be performed on a worker and not the web server.
0.995055
false
1
5,704
2018-09-09 08:36:10.063
I can't import tkinter in pycharm community edition
I've been trying for a few days now to import the library tkinter in PyCharm, but I am unable to do so. I tried to import it and to install some packages, but still nothing; I reinstalled Python and PyCharm, again nothing. Does anyone know how to fix this? I am using PyCharm Community Edition 2018.2.3 and Python 3.7. EDIT: So, I uninstalled Python 3.7 and installed Python 3.6 x64; I tried changing my interpreter to the new path to Python and it's still not working... EDIT 2: I installed PyCharm Pro (free 30-day trial) and it actually works, but when I open my project in PyCharm Community it's not working... EDIT 3: I installed Python 3.6 x64 and now it's working. Thanks for the help.
Thanks to vsDeus for asking this question. I had the same problem running Linux Mint Mate 19.1 and nothing got tkinter and some other modules working in Pycharm CE. In Eclipse with Pydev all worked just fine but for some reason I would rather work in Pycharm when coding than Eclipse. The steps outlined here did not work for me but the steps he took handed me the solution. Basically I had to uninstall Pycharm, remove all its configuration files, then reinstall pip3, tkinter and then reinstall Pycharm CE. Finally I reopened previously saved projects and then set the correct interpreter. When I tried to change the python interpreter before no alternatives appeared. After all these steps the choice became available. Most importantly now tkinter, matplotlib and other modules I wanted to use are available in Pycharm.
0
false
1
5,705
2018-09-10 11:25:39.227
how to use Tensorflow seq2seq.GreedyEmbeddingHelper first parameter Embedding in case of using normal one hot vector instead of embedding?
I am trying to decode one character (represented as c-dimensional one hot vectors) at a time with tensorflow seq2seq model implementations. I am not using any embedding in my case. Now I am stuck with tf.contrib.seq2seq.GreedyEmbeddingHelper. It requires "embedding: A callable that takes a vector tensor of ids (argmax ids), or the params argument for embedding_lookup. The returned tensor will be passed to the decoder input." How I will define callable? What are inputs (vector tensor if ids(argmax ids)) and outputs of this callable function? Please explain using examples.
embedding = tf.Variable(tf.random_uniform([c, EMBEDDING_DIM])) where c is your vocabulary size: here you create the embedding for your own model, and it will be trained during your training process to give a vector for your input. If you don't want a learned embedding, you can just create a matrix where every column is a one-hot vector representing a character and pass it as the embedding. It will be something like this: [[1,0,0],[0,1,0],[0,0,1]], if you have a vocab size of 3.
0
false
1
5,706
2018-09-10 11:43:29.133
one server, same domain, different apps (example.com/ & example.com/tickets )?
I want advice on how to do the following: On the same server, I want to have two apps. One WordPress app and one Python app. At the same time, I want the root of my domain to be a static landing page. Url structure I want to achieve: example.com/ => static landing page example.com/tickets => wordpress example.com/pythonapp => python app I have never done something like this before and searching for solutions didn't help. Is it even possible? Is it better to use subdomains? Is it better to use different servers? How should I approach this? Thanks in advance!
It depends on the web server you want to use. Let's go with Apache, as it is one of the most used web servers on the internet. Install WordPress into the /tickets subdirectory as you normally would; this should install WordPress under that subdirectory. Configure your Python WSGI app with this configuration: WSGIScriptAlias /pythonapp /var/www/path/to/my/wsgi.py
0.201295
false
1
5,707
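Assuming Apache with mod_wsgi and PHP, the three URL spaces from the question could be sketched in the virtual host like this (all paths here are hypothetical):

```
# Static landing page at the domain root (example.com/)
DocumentRoot /var/www/landing

# WordPress (PHP) installation served from example.com/tickets
Alias /tickets /var/www/wordpress

# Python app mounted at example.com/pythonapp via mod_wsgi
WSGIScriptAlias /pythonapp /var/www/pythonapp/wsgi.py
```

The order matters in some setups: WSGIScriptAlias and Alias carve their prefixes out of the URL space, and everything else falls through to the static DocumentRoot.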
2018-09-12 02:15:34.913
How to saving plots and model results to pdf in python?
I know how to save model results to .txt files and plots to .png. I also found some posts which show how to save multiple plots to a single PDF file. What I am looking for is generating a single PDF file which can contain both the model results/summary and its related plots. So at the end I can have something like an auto-generated model report. Can someone suggest how I can do this?
I’ve had good results with the fpdf module. It should do everything you need it to do and the learning curve isn’t bad. You can install with pip install fpdf.
0
false
1
5,708
2018-09-12 06:55:00.637
Error configuring: unknown option "-ipadx"
I want to add InPadding to my LabelFrame i'm using AppJar GUI. I try this: self.app.setLabelFrameInPadding(self.name("_content"), [20, 20]) But i get this error: appJar:WARNING [Line 12->3063/configureWidget]: Error configuring _content: unknown option "-ipadx" Any ideas how to fix it?
Because of the way containers are implemented in appJar, padding works slightly differently for labelFrames. Try calling: app.setLabelFramePadding('name', [20,20])
0
false
1
5,709
2018-09-12 13:04:34.793
Two flask Apps same domain IIS
I want to deploy same flask application as two different instances lets say sandbox instance and testing instance on the same iis server and same machine. having two folders with different configurations (one for testing and one for sandbox) IIS runs whichever is requested first. for example I want to deploy one under www.example.com/test and the other under www.example.com/sandbox. if I requested www.example.com/test first then this app keeps working correctly but whenever I request www.example.com/sandbox it returns 404 and vice versa! question bottom line: how can I make both apps run under the same domain with such URLs? would using app factory pattern solve this issue? what blocks both apps from running side by side as I am trying to do? thanks a lot in advance
I was stuck for a week before asking this question, and the neatest way I found was to assign each app a different app pool; now they are working together side by side happily ever after.
1.2
true
1
5,710
2018-09-13 06:51:37.370
Sharing PonyORM's db session across different python module
I initially started a small Python project (Python, Tkinter and PonyORM) and it became larger, which is why I decided to divide the code (it used to be a single file only) into several modules (e.g. main, form1, entity, database): main acting as the main controller, form1 as an example can contain a tkinter Frame which can be used as an interface where the user can input data, entity contains the db.Entity mappings, and database the pony.Database instance along with its connection details. I think the problem is that during import I'm getting this error: "pony.orm.core.ERDiagramError: Cannot define entity 'EmpInfo': database mapping has already been generated". Can you point me to any existing code showing how this should be done?
Probably you import your modules in a wrong order. Any module which contains entity definitions should be imported before db.generate_mapping() call. I think you should call db.generate_mapping() right before entering tk.mainloop() when all imports are already done.
1.2
true
1
5,711
2018-09-13 08:55:49.327
Python3 - How do I stop current versions of packages being over-ridden by other packages dependencies
Building Tensorflow and other such packages from source and especially against GPU's is a fairly long task and often encounters errors, so once built and installed I really dont want to mess with them. I regularly use virtualenvs, but I am always worried about installing certain packages as sometimes their dependencies will overwrite my own packages I have built from source... I know I can remove, and then rebuild from my .wheels, but sometimes this is a time consuming task. Is there a way that if I attempt to pip install a package, it first checks against current package versions and doesn't continue before I agree to those changes? Even current packages dependencies don't show versions with pip show
Is there a way that if I attempt to pip install a package, it first checks against current package versions and doesn't continue before I agree to those changes? No. But pip install doesn't touch installed dependencies until you explicitly run pip install -U. So don't use -U/--upgrade option and upgrade dependencies when pip fails with unmet dependencies.
0
false
1
5,712
2018-09-14 02:32:31.807
how do I connect sys.argv into my float value?
I must use "q" (which is a degree measure) from the command line and then convert "q" to radians and have it write out the value of sin(5q) + sin(6q). Considering that I believe I have to use sys.argv's for this I have no clue where to even begin
You can use the following commands: q = sys.argv[1] # you can pass a decimal value on the command line too. Now q will be a string, e.g. "1.345", so you have to convert it to float using q = float(q).
0
false
1
5,713
2018-09-14 10:30:59.240
Scrapy: Difference between simple spider and the one with ItemLoader
I've been working with scrapy for 3 months. For extracting selectors I use plain response.css or response.xpath. I'm asked to switch to ItemLoaders and use add_xpath, add_css, etc. I know how ItemLoaders work and how convenient they are, but can anyone compare these two w.r.t. efficiency? Which way is efficient, and why?
Item loaders do exactly the same thing underneath that you do when you don't use them. So for every loader.add_css/add_xpath call there will be a response.css/xpath call executed. It won't be any faster, and the little amount of additional work they do won't really make things any slower (especially in comparison to XML parsing and network/IO load).
0
false
1
5,714
2018-09-15 01:56:10.107
Possible to get a file descriptor for Python's StringIO?
From a Python script, I want to feed some small string data to a subprocess, but said subprocess non-negotiably accepts only a filename as an argument, which it will open and read. I non-negotiably do not want to write this data to disk - it should reside only in memory. My first instinct was to use StringIO, but I realize that StringIO has no fileno(). mmap(-1, ...) also doesn't seem to create a file descriptor. With those off the table, I'm at a loss as to how to do this. Is this even achievable? The fd would be OS-level visible, but (I would expect) only to the process's children. tl;dr how to create private file descriptor to a python string/memory that only a child process can see? P.S. This is all on Linux and doesn't have to be portable in any way.
Reifying @user4815162342's comment as an answer: The direct way to do this is: pass /dev/stdin as the file argument to the process; use stdin=subprocess.PIPE; finally, Popen.communicate(<your input>) to feed the desired contents
0.673066
false
1
5,715
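A minimal sketch of that recipe (Linux-specific, as the question allows; cat stands in for the real child program that insists on a filename argument):

```python
import subprocess

# Linux-only: the child opens /dev/stdin as a "file", which is really
# our pipe, so the in-memory string never touches the disk.
data = 'hello from memory\n'
proc = subprocess.run(
    ['cat', '/dev/stdin'],        # stand-in for the real child program
    input=data.encode(),          # implies stdin=subprocess.PIPE
    stdout=subprocess.PIPE,
)
print(proc.stdout.decode())
```

subprocess.run(..., input=...) wires up stdin=PIPE and does the communicate() step for you; with Popen you would call proc.communicate(data.encode()) explicitly.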
2018-09-17 15:44:04.130
how to modify txt file properties with python
I am trying to make a Python program that creates and writes to a txt file. The program works, but I want it to tick the "hidden" checkbox in the txt file's properties, so that the txt can't be seen without using the Python program I made. I have no clue how to do that; please understand I am a beginner in Python.
I'm not 100% sure but I don't think you can do this in Python. I'd suggest finding a simple Visual Basic script and running it from your Python file.
0
false
1
5,716
2018-09-18 15:03:23.547
How can I run code for a certain amount of time?
I want to play a sound (from a wav file) using winsound's winsound.PlaySound function. I know that winsound.Beep allows me to specify the time in milliseconds, but how can I implement that behavior with winsound.PlaySound? I tried to use the time.sleep function, but that only delays the function, not specifies the amount of time. Any help would be appreciated.
Create a thread to play the sound, start it. Create a thread that sleeps the right amount of time and has a handle to the first thread. Have the second thread terminate the first thread when the sleep is over.
1.2
true
1
5,717
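winsound is Windows-only, and Python threads cannot actually be killed from outside, so in practice the "terminate" step becomes a stop flag that the timer thread sets. A platform-neutral sketch of the two-thread pattern (the winsound calls are only indicated in comments, since they are not assumed to be available here):

```python
import threading
import time

stop = threading.Event()

def play_sound(check_interval=0.05):
    # Here you would start playback with
    # winsound.PlaySound(path, winsound.SND_FILENAME | winsound.SND_ASYNC)
    while not stop.is_set():          # "keep playing until told to stop"
        time.sleep(check_interval)
    # ...and winsound.PlaySound(None, 0) here would cut playback off.

def stop_after(seconds):
    time.sleep(seconds)               # the desired play duration
    stop.set()                        # "terminate" the player thread

player = threading.Thread(target=play_sound)
timer = threading.Thread(target=stop_after, args=(0.2,))
player.start()
timer.start()
player.join(timeout=5)
```

With SND_ASYNC the PlaySound call returns immediately, so the Event-based handshake above is what bounds the playback time.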
2018-09-18 16:35:17.860
Do I need two instances of python-flask?
I am building a web-app. One part of the app calls a function that starts a tweepy StreamListener on certain track. That functions process a tweet and then it writes a json object to a file or mongodb. On the other hand I need a process that is reading the file or mongodb and paginates the tweet if some property is in it. The thing is that I don't know how to do that second part. Do I need different threads? What solutions could there be?
You can certainly do it with a thread or spinning up a new process that will perform the pagination. Alternatively you can look into a task queue service (Redis queue, celery, as examples). Your web-app can add a task to this queue and your other program can listen to this queue and perform the pagination tasks as they come in.
0
false
1
5,718
2018-09-19 22:34:46.480
Celery - how to stop running task when using distributed RabbitMQ backend?
If I am running Celery on (say) a bank of 50 machines all using a distributed RabbitMQ cluster. If I have a task that is running and I know the task id, how in the world can Celery figure out which machine its running on to terminate it? Thanks.
I am not sure if you can actually do it. When you spawn a task, a worker somewhere in your 50 boxes executes it, and you technically have no control over it, as it's a separate process; the only things you can control are the AsyncResult or the AMQP message on the queue.
0
false
1
5,719
2018-09-19 23:16:35.920
how to run periodic task in high frequency in flask?
I want my Flask app to pull updates from a local txt file every 200 ms. Is it possible to do that? P.S. I've considered BackgroundScheduler() from apscheduler, but its granularity is 1 s.
Couldn't you just start a loop in a thread that sleeps for 200 ms before the next iteration?
0.201295
false
1
5,720
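A sketch of that idea with plain threading. The ticks limit is only there so the demo terminates; a real app would loop until shutdown and would start the thread once, before app.run():

```python
import os
import tempfile
import threading
import time

updates = []

def poll_file(path, interval=0.2, ticks=3):
    """Re-read `path` every `interval` seconds (here: 200 ms)."""
    for _ in range(ticks):
        with open(path) as f:
            updates.append(f.read())
        time.sleep(interval)

# A throwaway file standing in for the real local txt file.
path = os.path.join(tempfile.mkdtemp(), 'state.txt')
with open(path, 'w') as f:
    f.write('v1')

worker = threading.Thread(target=poll_file, args=(path,), daemon=True)
worker.start()
worker.join()
print(updates)  # ['v1', 'v1', 'v1']
```

Making the thread a daemon ensures it does not keep the Flask process alive on shutdown.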
2018-09-20 06:14:37.797
How to search for all existing mongodbs for single GET request
Suppose I have multiple mongodbs like mongodb_1, mongodb_2, mongodb_3 with same kind of data like employee details of different organizations. When user triggers GET request to get employee details from all the above 3 mongodbs whose designation is "TechnicalLead". then first we need to connect to mongodb_1 and search and then disconnect with mongodb_1 and connect to mongodb_2 and search and repeat the same for all dbs. Can any one suggest how can we achieve above using python EVE Rest api framework. Best Regards, Narendra
First of all, it is not recommended to run multiple instances this way (especially when the servers might be running at the same time), as it will lead to usage of the same config parameters, for example logpath and pidfilepath, which in most cases is not what you want. Secondly, for getting the data from multiple MongoDB instances you have to create separate GET requests for fetching the data. There are two methods of view for the model that can be used: query the individual databases for data, then assemble the results for viewing on the screen; or query a central database that the other databases continuously update.
0
false
1
5,721
2018-09-20 17:05:30.047
python asyncronous images download (multiple urls)
I'm studying Python for 4/5 months and this is my third project built from scratch, but I'm not able to solve this problem on my own. This script downloads 1 image for each url given. I'm not able to find a solution on how to implement ThreadPoolExecutor or async in this script. I cannot figure out how to link the url with the image number in the save-image part. I build a dict of all the urls that I need to download, but how do I actually save the image with the correct name? Any other advice? PS. The urls present at the moment are only fake ones.

Synchronous version:

import requests
import argparse
import re
import os
import logging
from bs4 import BeautifulSoup

parser = argparse.ArgumentParser()
parser.add_argument("-n", "--num", help="Book number", type=int, required=True)
parser.add_argument("-p", dest=r"path_name", default=r"F:\Users\123", help="Save to dir")
args = parser.parse_args()

logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
                    level=logging.ERROR)
logger = logging.getLogger(__name__)

def get_parser(url_c):
    url = f'https://test.net/g/{url_c}/1'
    logger.info(f'Main url: {url_c}')
    responce = requests.get(url, timeout=5)  # timeout will raise an exception
    if responce.status_code == 200:
        page = requests.get(url, timeout=5).content
        soup = BeautifulSoup(page, 'html.parser')
        return soup
    else:
        responce.raise_for_status()

def get_locators(soup):  # takes the soup from get_parser
    # Extract first/last page num
    first = int(soup.select_one('span.current').string)
    logger.info(f'First page: {first}')
    last = int(soup.select_one('span.num-pages').string) + 1
    # Extract img_code and extension
    link = soup.find('img', {'class': 'fit-horizontal'}).attrs["src"]
    logger.info(f'Locator code: {link}')
    code = re.search('galleries.([0-9]+)\/.\.(\w{3})', link)
    book_code = code.group(1)   # internal code
    extension = code.group(2)   # png or jpg
    # Extract dir book name
    pattern = re.compile('pretty":"(.*)"')
    found = soup.find('script', text=pattern)
    string = pattern.search(found.text).group(1)
    dir_name = string.split('"')[0]
    logger.info(f'Dir name: {dir_name}')
    logger.info(f'Hidden code: {book_code}')
    print(f'Extension: {extension}')
    print(f'Tot pages: {last}')
    print(f'')
    return {'first_p': first,
            'last_p': last,
            'book_code': book_code,
            'ext': extension,
            'dir': dir_name}

def setup_download_dir(path, dir):  # (args.path_name, locator['dir'])
    # Make folder if it does not exist
    filepath = os.path.join(f'{path}\{dir}')
    if not os.path.exists(filepath):
        try:
            os.makedirs(filepath)
            print(f'Directory created at: {filepath}')
        except OSError as err:
            print(f"Can't create {filepath}: {err}")
    return filepath

def main(locator, filepath):
    for image_n in range(locator['first_p'], locator['last_p']):
        url = f"https://i.test.net/galleries/{locator['book_code']}/{image_n}.{locator['ext']}"
        logger.info(f'Url Img: {url}')
        responce = requests.get(url, timeout=3)
        if responce.status_code == 200:
            img_data = requests.get(url, timeout=3).content
        else:
            responce.raise_for_status()  # raise exception
        with open((os.path.join(filepath, f"{image_n}.{locator['ext']}")), 'wb') as handler:
            handler.write(img_data)  # write image
        print(f'Img {image_n} - DONE')

if __name__ == '__main__':
    try:
        locator = get_locators(get_parser(args.num))  # args.num ex. 241461
        main(locator, setup_download_dir(args.path_name, locator['dir']))
    except KeyboardInterrupt:
        print(f'Program aborted...' + '\n')

Urls list:

def img_links(locator):
    image_url = []
    for num in range(locator['first_p'], locator['last_p']):
        url = f"https://i.test.net/galleries/{locator['book_code']}/{num}.{locator['ext']}"
        image_url.append(url)
    logger.info(f'Url List: {image_url}')
    return image_url
I found the solution in the book Fluent Python. Here is the snippet:

def download_many(cc_list, base_url, verbose, concur_req):
    counter = collections.Counter()
    with futures.ThreadPoolExecutor(max_workers=concur_req) as executor:
        to_do_map = {}
        for cc in sorted(cc_list):
            future = executor.submit(download_one, cc, base_url, verbose)
            to_do_map[future] = cc
        done_iter = futures.as_completed(to_do_map)
        if not verbose:
            done_iter = tqdm.tqdm(done_iter, total=len(cc_list))
        for future in done_iter:
            try:
                res = future.result()
            except requests.exceptions.HTTPError as exc:
                error_msg = 'HTTP {res.status_code} - {res.reason}'
                error_msg = error_msg.format(res=exc.response)
            except requests.exceptions.ConnectionError as exc:
                error_msg = 'Connection error'
            else:
                error_msg = ''
                status = res.status
            if error_msg:
                status = HTTPStatus.error
            counter[status] += 1
            if verbose and error_msg:
                cc = to_do_map[future]
                print('*** Error for {}: {}'.format(cc, error_msg))
    return counter
1.2
true
1
5,722
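The part the question was missing (saving each image under its own number) is exactly what to_do_map solves in that snippet: map each future back to the image number it was submitted with. A stripped-down, self-contained version with the network call faked out (download_one here is a stand-in, not the real downloader):

```python
from concurrent import futures

def download_one(url, image_n):
    # Stand-in for requests.get(url).content + writing the file to disk;
    # returns the filename the image would be saved under.
    return f'{image_n}.jpg'

urls = {n: f'https://example.com/galleries/1234/{n}.jpg' for n in range(1, 6)}

results = {}
with futures.ThreadPoolExecutor(max_workers=3) as executor:
    # Key each future by its image number, as the book's snippet does.
    to_do_map = {executor.submit(download_one, url, n): n
                 for n, url in urls.items()}
    for future in futures.as_completed(to_do_map):
        image_n = to_do_map[future]          # recover the number...
        results[image_n] = future.result()   # ...regardless of finish order

print(sorted(results.items()))
```

Futures complete in arbitrary order, so the dict lookup is what keeps the url, the number and the saved filename tied together.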
2018-09-23 11:46:47.050
How to put a list of arbitrary integers on screen (from lowest to highest) in pygame proportionally?
Let's say I have a list of 887123, 123, 128821, 9, 233, 9190902. I want to put those strings on screen using pygame (line drawing), and I want to do so proportionally, so that they fit the screen. If the screen is 1280x720, how do I scale the numbers down so that they keep their proportions to each other but fit the screen? I did try with techniques such as dividing every number by two until they are all smaller than 720, but that is skewed. Is there an algorithm for this sort of mathematical scaling?
I used this algorithm: x = (x / (maximum value)) * (720 - 1)
0.386912
false
1
5,723
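That formula as a small function, applied to the numbers from the question:

```python
def scale_to_screen(values, size=720):
    """Scale values proportionally so the largest maps to size - 1."""
    m = max(values)
    return [v / m * (size - 1) for v in values]

heights = scale_to_screen([887123, 123, 128821, 9, 233, 9190902])
print(max(heights))  # 719.0: the tallest bar exactly fills the 720-px axis
```

Dividing every value by the same maximum keeps all ratios between the numbers intact, unlike repeated halving with integer rounding.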
2018-09-23 16:36:12.867
Python3.6 and singletons - use case and parallel execution
I have several unit tests (Python 3.6 and higher only) which import a helper class to set up some things (e.g. pulling some Docker images) on the system before starting the tests. The class does everything when it gets instantiated. It needs to stay alive because it holds some information which is evaluated at runtime and needed for the different tests. The instantiation of the helper class is very expensive, and to speed up my tests I want to instantiate it only once. My approach here would be to use a singleton, but I was told that in most cases a singleton is not needed. Are there other options for me, or is a singleton here actually a good solution? The option should allow executing all tests together and every test on its own. Also I have some theoretical questions: if I use a singleton here, how does Python execute this in parallel? Does Python wait for the first instance to finish, or can there be a race condition? And if yes, how do I avoid it?
I can only give an answer on the "are there other options for me" part of your question... The use of such a complex setup for unit-tests (pulling Docker images etc.) makes me suspicious: it can mean that your tests are in fact integration tests rather than unit-tests. Which could be perfectly fine if your goal is to find the bugs in the interactions between the involved components or in the interactions between your code and its system environment. (The fact that your setup involves Docker images gives the impression that you intend to test your system-under-test against the system environment.) If this is the case, I wish you luck getting the other aspects of your question answered (parallelization of tests, singletons and thread safety). Maybe it makes sense to tag your question "integration-testing" rather than "unit-testing" then, in order to attract the proper experts. On the other hand, your complex setup could be an indication that your unit-tests are not yet designed properly and/or the system under test is not yet designed to be easily testable with unit-tests: unit-tests focus on the system-under-test in isolation - isolation from depended-on components, but also isolation from the specifics of the system environment. For such tests of a properly isolated system-under-test, a complex setup using Docker would not be needed. If the latter is true, you could benefit from making yourself familiar with topics like "mocking", "dependency injection" or "inversion of control", which will help you design your system-under-test and your unit test cases such that they are independent of the system environment. Then your complex setup would no longer be necessary, and the other aspects of your question (singleton, parallelization etc.) may no longer be relevant.
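For the thread-safety aspect the question also raises: a common pattern in Python is lazy initialization guarded by a lock, so that even when tests run in parallel threads the expensive setup happens at most once. A minimal sketch (the helper class here is a stand-in for the real one that pulls Docker images):

```python
import threading

class ExpensiveHelper:
    """Stand-in for the class that pulls Docker images etc."""
    def __init__(self):
        self.info = {"ready": True}  # gathered during the expensive setup

_instance = None
_lock = threading.Lock()

def get_helper():
    """Return the shared helper, creating it at most once even under threads."""
    global _instance
    if _instance is None:          # fast path once the instance exists
        with _lock:                # only one thread may perform the setup
            if _instance is None:  # re-check after acquiring the lock
                _instance = ExpensiveHelper()
    return _instance
```

Every caller gets the same object, and the double-checked locking avoids the race where two threads both run the expensive constructor. (If you use pytest, a session-scoped fixture achieves the same once-per-run setup without any singleton machinery.)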
0
false
1
5,724
2018-09-24 09:39:59.467
How to increase the error limit in flake8 and pylint VS Code?
As mentioned above I would like to know how I can increase the no of errors shown in flake8 and pylint. I have installed both and they work fine when I am working with small files. I am currently working with a very large file (>18k lines) and there is no error highlighting done at the bottom part of the file, I believe the current limit is set to 100 and would like to increase it. If this isn't possible is there any way I can just do linting for my part of the code? I am just adding a function in this large file and would like to monitor the same.
You can use "python.linting.maxNumberOfProblems": 2000 to increase the number of problems being displayed, but the cap appears to be 1001, so more than 1001 problems cannot be displayed.
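For reference, the setting goes into your VS Code settings.json (user or workspace level); the 2000 here is just an example value, and the two enable flags are only shown for context:

```json
{
    "python.linting.pylintEnabled": true,
    "python.linting.flake8Enabled": true,
    "python.linting.maxNumberOfProblems": 2000
}
```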
0
false
1
5,725
2018-09-24 11:25:27.520
Knowledge graph in python for NLP
how do I build a knowledge graph in python from structured texts? Do I need to know any graph databases? Any resources would be of great help.
A Knowledge Graph (KG) is just a virtual representation, not an actual graph stored as-is. To store the data you can use any of the established databases such as SQL or MongoDB. But to benefit from the fact that we are storing graphs here, I'd suggest using a graph database such as Neo4j instead.
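Before reaching for a graph database, a plain adjacency list is enough to see the idea. A sketch in plain Python - the (subject, relation, object) triples here are invented; in practice you would extract them from your structured texts, e.g. with an NLP library such as spaCy:

```python
from collections import defaultdict

# Hypothetical triples extracted from structured text.
triples = [
    ("Python", "is_a", "programming language"),
    ("Guido van Rossum", "created", "Python"),
    ("spaCy", "written_in", "Python"),
]

# Adjacency list: subject -> list of (relation, object) edges.
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))
```

Querying is then just a dict lookup, e.g. graph["Guido van Rossum"] yields [('created', 'Python')]. A library like networkx, or a graph database, becomes useful once you need traversal, visualization, or persistence.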
0
false
1
5,726
2018-09-25 08:12:35.473
How to view Opendaylight topology on external webgui
I'm exploring ODL and mininet, am able to run both and populate the network nodes over ODL, and I can view the topology via the ODL default web GUI. I'm planning to create my own web GUI, starting with a simple topology view. I need advice and guidance on how I can achieve a topology view in my own web GUI. I plan to use Python and HTML: just a simple single-page HTML file and a Python script. Hopefully someone could lead me the way. Please assist, and thank you.
If a web GUI for ODL would provide value for you, please consider working to contribute that upstream. The previous GUI (DLUX) has recently been deprecated because no one was supporting it, although it seems many people were using it.
0
false
1
5,727
2018-09-26 04:22:21.250
Python3, calling super's __init__ from a custom exception
I have created a custom exception in Python 3 and the overall code works just fine. But there is one thing I am not able to wrap my head around: why do I need to send my message to the Exception class's __init__(), and how does it convert the custom exception into that string message when I try to print the exception, since the code in Exception or even BaseException does not do much? In short: why call super().__init__() from a custom exception?
This is so that your custom exceptions start off with the same instance attributes as a BaseException object does, in particular the args attribute, which stores the arguments you passed (typically the message). That attribute is used by other inherited methods such as __str__, which is what allows the exception object to be converted to a string (and printed) directly. You can skip calling super().__init__ in your subclass's __init__ and instead initialize all the necessary attributes on your own if you want, but then you would not be taking advantage of one of the key benefits of class inheritance. Always call super().__init__ unless you have very specific reasons not to reuse the parent class's instance attributes.
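A short illustration of what forwarding the message buys you (the exception name and extra attribute are arbitrary):

```python
class ConfigError(Exception):
    def __init__(self, message, key=None):
        super().__init__(message)  # populates args, so the inherited __str__ works
        self.key = key             # a custom attribute on top of the standard ones

err = ConfigError("missing setting", key="timeout")
print(str(err))   # missing setting
print(err.args)   # ('missing setting',)
```

If you dropped the super().__init__ call, str(err) and print(err) would no longer show your message, because the inherited __str__ renders whatever ended up in args.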
0.386912
false
1
5,728
2018-09-26 21:19:30.580
Interpreter problem (apparently) with a project in PyCharm
I recently upgraded PyCharm (community version). If it matters, I am running on a Mac OSX machine. After the upgrade, I have one project in which PyCharm cannot find any python modules. It can't find numpy, matplotlib, anything ... I have checked a couple of other projects and they seem to be fine. I noticed that somehow the interpreter for the project in question was not the same as for the others. So I changed it to match the others. But PyCharm still can't find the modules. Any ideas what else I can do? More generally, something like this happens every time I upgrade to a new PyCharm version. The fix each time is a little different. Any ideas on how I can prevent this in the first place? EDIT: FWIW, I just now tried to create a new dummy project. It has the same problem. I notice that my two problem projects are created with a "venv" sub-directory. My "good" projects don't have this thing. Is this a clue to what is going on? EDIT 2: OK, just realized that when creating a new project, I can select "New environment" or "Existing interpreter", and I want "Existing interpreter". However, I would still like to know how one project that was working fine before is now hosed, and how I can fix it. Thanks.
It seems that when you created the new project, you also opted to create a new virtual environment, which was then created (by default) in that venv sub-directory. But that only applies to new projects; why your old projects changed their project interpreter environment I do not understand. So I would say you have some corrupt settings (e.g. in ~/Library/Preferences/PyCharm2018.2), which are copied upon PyCharm upgrade. You might try configuring PyCharm freshly by moving those PyCharm preferences away, so you can put them back later. The project configuration, especially the project interpreter, on the other hand is stored inside $PROJECT_ROOT/.idea and thus should not change.
1.2
true
2
5,729
2018-09-26 21:19:30.580
Interpreter problem (apparently) with a project in PyCharm
I recently upgraded PyCharm (community version). If it matters, I am running on a Mac OSX machine. After the upgrade, I have one project in which PyCharm cannot find any python modules. It can't find numpy, matplotlib, anything ... I have checked a couple of other projects and they seem to be fine. I noticed that somehow the interpreter for the project in question was not the same as for the others. So I changed it to match the others. But PyCharm still can't find the modules. Any ideas what else I can do? More generally, something like this happens every time I upgrade to a new PyCharm version. The fix each time is a little different. Any ideas on how I can prevent this in the first place? EDIT: FWIW, I just now tried to create a new dummy project. It has the same problem. I notice that my two problem projects are created with a "venv" sub-directory. My "good" projects don't have this thing. Is this a clue to what is going on? EDIT 2: OK, just realized that when creating a new project, I can select "New environment" or "Existing interpreter", and I want "Existing interpreter". However, I would still like to know how one project that was working fine before is now hosed, and how I can fix it. Thanks.
Your project is most likely pointing to the wrong interpreter, e.g. using a virtual environment when you want to use a global one. You must point PyCharm to the correct interpreter that you want to use. "File/Settings (Preferences on Mac)/Project: ... /Project Interpreter" takes you to the settings associated with the interpreters. This window shows all of the modules within the interpreter. From here you can click the settings wheel in the top right and configure your interpreters (add virtual environments and so on), or you can select an existing interpreter from the drop-down to use with your project.
0.201295
false
2
5,729
2018-09-27 04:54:42.700
how can i check all the values of dataframe whether have null values in them without a loop
if all(data_Window['CI']!=np.nan): I have used the all() function with if so that if column CI has no NA values, then it will do some operation. But i got syntax error.
This gives you all the columns and how many null values each one has (note the closing brace that was missing from the DataFrame literal):

```python
import pandas as pd

df = pd.DataFrame({0: [1, 2, None], 1: [2, 3, None]})
print(df.isnull().sum())
```
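It is also worth noting why the original check cannot work even once the syntax error is fixed: NaN compares unequal to everything, including itself, so data_Window['CI'] != np.nan is True for every row. Use notna()/isnull() instead; a sketch with a made-up frame:

```python
import numpy as np
import pandas as pd

data_Window = pd.DataFrame({"CI": [1.0, 2.0, np.nan]})

# NaN != NaN, so this comparison is True everywhere -- not what you want:
always_true = (data_Window["CI"] != np.nan).all()

# The reliable check: does the column contain no missing values?
has_no_nans = data_Window["CI"].notna().all()

print(always_true, has_no_nans)  # True False
```

So the loop-free condition you want is simply if data_Window['CI'].notna().all(): ... .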
0
false
1
5,730