Dataset schema: Q_CreationDate (string, 23 chars) · Title (string, 11–149 chars) · Question (string, 25–6.53k chars) · Answer (string, 15–5.1k chars) · Score (float64, −1 to 1.2) · Is_accepted (bool, 2 classes) · N_answers (int64, 1–17) · Q_Id (int64, 0–6.76k)
2019-01-23 03:02:55.387
Parsing list of URLs with regex patterns
I have a large text file of URLs (>1 million URLs). The URLs represent product pages across several different domains. I'm trying to parse out the SKU and product name from each URL, for example:
www.amazon.com/totes-Mens-Mike-Duck-Boot/dp/B01HQR3ODE/ -> totes-Mens-Mike-Duck-Boot, B01HQR3ODE
www.bestbuy.com/site/apple-airpods-white/5577872.p?skuId=5577872 -> apple-airpods-white, 5577872
I already have the individual regex patterns figured out for parsing out the two components of the URL (product name and SKU) for all of the domains in my list. This is nearly 100 different patterns. While I've figured out how to test this one URL/pattern at a time, I'm having trouble figuring out how to architect a script which will read in my entire list, then go through and parse each line based on the relevant regex pattern. Any suggestions on how to best tackle this? If my input is one column (URL), my desired output is 4 columns (URL, domain, product_name, SKU).
While it is possible to roll this all into one massive regex, that might not be the easiest approach. Instead, I would use a two-pass strategy. Make a dict of domain names to the regex pattern that works for that domain. In the first pass, detect the domain for the line using a single regex that works for all URLs. Then use the discovered domain to lookup the appropriate regex in your dict to extract the fields for that domain.
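A minimal sketch of that two-pass strategy — the two patterns shown are illustrative placeholders, not the asker's real ones:

```python
import re

# Hypothetical patterns -- substitute the ~100 real per-domain patterns here.
DOMAIN_PATTERNS = {
    "www.amazon.com": re.compile(r"www\.amazon\.com/(?P<name>[^/]+)/dp/(?P<sku>[^/]+)"),
    "www.bestbuy.com": re.compile(r"www\.bestbuy\.com/site/(?P<name>[^/]+)/\d+\.p\?skuId=(?P<sku>\d+)"),
}
DOMAIN_RE = re.compile(r"^(?:https?://)?([^/]+)")  # first pass: pull out the domain

def parse_url(url):
    m = DOMAIN_RE.match(url)
    if not m:
        return url, None, None, None
    domain = m.group(1)
    pattern = DOMAIN_PATTERNS.get(domain)          # second pass: domain-specific regex
    if pattern:
        fields = pattern.search(url)
        if fields:
            return url, domain, fields.group("name"), fields.group("sku")
    return url, domain, None, None

with open("urls.txt") as fh:                       # file name is an example
    for line in fh:
        print(parse_url(line.strip()))
```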
0.201295
false
1
5,912
2019-01-24 09:30:09.097
Python Azure function processing blob storage
I am trying to make a pipeline using Data Factory in MS Azure that processes data in blob storage, runs a Python processing code/algorithm on the data, and then sends it to another source. My question here is: how can I do the same in Azure Function Apps? Or is there a better way to do it? Thanks in advance. Shyam
I created a Flask API and called my Python code through that. Then I put it in Azure as a web app and accessed the blob from there.
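A minimal sketch of that Flask approach — the route name and run_algorithm are placeholders for the real processing code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_algorithm(data):
    # Stand-in for the real processing code/algorithm.
    return {"bytes_received": len(data)}

@app.route("/process", methods=["POST"])
def process():
    data = request.get_data()     # e.g. blob contents posted by the pipeline
    return jsonify(run_algorithm(data))

if __name__ == "__main__":
    app.run()
```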
0
false
1
5,913
2019-01-24 11:46:02.647
Django Admin Interface - Privileges On Development Server
I have an old project running (Django 1.6.5, Python 2.7) live for several years. I have to make some changes and have set up a working development environment with all the right django and python requirements (packages, versions, etc.) Everything is running fine, except when I am trying to make changes inside the admin panel. I can log on fine and looking at the database (sqlite3) I see my user has superuser privileges. However django says "You have no permissions to change anything" and thus not even displaying any of the models registered for the admin interface. I am using the same database that is running on the live server. There I have no issues at all (Live server also running in development mode with DEBUG=True has no issues) -> I can only see the history (My Change Log) - Nothing else I have also created a new superuser - but same problem here. I'd appreciate any pointers (Maybe how to debug this?)
Finally, I found the issue: admin.autodiscover() was commented out in the project's urls.py for some reason. (I may have done that while trying to get the project to work in a more recent version of Django.) So admin.site.register was never called and the app_dict was never filled. The index.html template of django.contrib.admin then returns "You don't have permission to edit anything." or its equivalent translation (which I find confusing, given that the permissions are correct; it's just that no models were added to the admin dictionary). I hope this may help anyone running into a similar problem.
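For reference, a Django 1.6-style urls.py with the line restored might look like this:

```python
# urls.py -- Django 1.6 style
from django.conf.urls import patterns, include, url
from django.contrib import admin

admin.autodiscover()   # this line being commented out caused the empty admin

urlpatterns = patterns('',
    url(r'^admin/', include(admin.site.urls)),
)
```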
0
false
1
5,914
2019-01-24 19:31:18.407
How to handle EULA pop-up window that appears only on first login?
I am new to Selenium. The web interface of our product pops up a EULA agreement which the user has to scroll down and accept before proceeding. This happens ONLY on initial login using that browser for that user. I looked at the Selenium API but I am unable to figure out which one to use and how to use it. Would much appreciate any suggestions in this regard. I have played around with the IDE for Chrome but even over there I don't see anything that I can use for this. I am aware there is an 'if' command but I don't know how to use it to do something like:
if EULA-pops-up:
    scroll down and click 'accept'
proceed with rest of test
You may disable the EULA if that is an option for you; I am sure there is a way to do it in the registry as well. In C:\Program Files (x86)\Google\Chrome\Application there should be a file called master_preferences. Open the file and set require_eula to false.
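If disabling it isn't an option, the "if EULA pops up" logic from the question can be sketched in Selenium with a short explicit wait — the element locator is an assumption, and driver is assumed to be an existing WebDriver instance:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

# "eula-accept" is a hypothetical id -- use the real locator of your dialog.
try:
    accept = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.ID, "eula-accept"))
    )
    driver.execute_script("arguments[0].scrollIntoView(true);", accept)
    accept.click()
except TimeoutException:
    pass  # no EULA this time -- proceed with the rest of the test
```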
0
false
1
5,915
2019-01-25 09:21:41.077
Predicting values using trained MNB Classifier
I am trying to train a model for sentiment analysis and below is my trained Multinomial Naive Bayes Classifier returning an accuracy of 84%. I have been unable to figure out how to use the trained model to predict the sentiment of a sentence. For example, I now want to use the trained model to predict the sentiment of the phrase "I hate you". I am new to this area and any help is highly appreciated.
I don't know the dataset or the semantics of the individual dictionaries, but you are training your model on a dataset of the following form:
[[{"word": True, "word2": False}, 'neg'], [{"word": True, "word2": False}, 'pos']]
That means your input is a dictionary, and the output is a 'neg'/'pos' label. If you want to predict, you need to input a dictionary of the form {"I": True, "Hate": False, "you": True}. Then:
MNB_classifier.classify({"love": True})
>> 'neg'
or
MNB_classifier.classify_many([{"love": True}])
>> ['neg']
1.2
true
1
5,916
2019-01-25 11:29:23.027
Deliver python external libraries with script
I want to use my script, which uses the pandas library, on another Linux machine where there is no internet access and no pip installed. Is there a way to deliver the script with all its dependencies? Thanks
Alternatively, set the needed dependencies in the script manually by appending to sys.path, and pack all the needed files together with the script.
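A sketch of that bundling approach — the "vendor" folder name is an assumption; it would be filled on a machine with internet access (e.g. with pip install --target=vendor pandas) and shipped alongside the script:

```python
import os
import sys

# Make the bundled "vendor" folder (shipped next to this script) importable.
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor"))

import pandas as pd  # now resolved from the bundled folder
```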
0
false
1
5,917
2019-01-26 14:14:21.693
importing an entire folder of .py files into google colab
I have a folder of .py files (a package made by me) which I have uploaded to my Google Drive. I have mounted my Google Drive in Colab, but I still cannot import the folder in my notebook as I do on my PC. I know how to upload a single .py file into Google Colab and import it into my code, but I have no idea how to upload a folder of .py files and import it in the notebook, and this is what I need to do. This is the code I used to mount the drive:
from google.colab import drive
drive.mount('/content/drive')
!ls 'drive/My Drive'
I found how to do it. After uploading all modules and packages into the directory that my notebook file is in, I changed Colab's working directory from "/content" to this directory, and then I simply imported the modules and packages (the folder of .py files) into my code.
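A sketch of that workflow — the folder name my_package and module name my_module are placeholders for the real ones in Drive:

```python
from google.colab import drive
drive.mount('/content/drive')

import os, sys

pkg_dir = '/content/drive/My Drive/my_package'  # path is an assumption
os.chdir(pkg_dir)            # the approach described above, or alternatively:
sys.path.append(pkg_dir)     # keep the cwd and just make the folder importable

import my_module             # hypothetical module name inside the folder
```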
1.2
true
1
5,918
2019-01-27 06:38:41.497
How to redirect -progress option output of ffmpeg to stderr?
I'm writing my own wrapper for ffmpeg in Python 3.7.2 and want to use its "-progress" option to read the current progress, since its output is highly machine-readable. The problem is that the "-progress" option of ffmpeg accepts only file names and URLs as its parameter, but I don't want to create additional files, let alone set up a whole web server for this purpose. I've googled a lot about it, but all the "progress bars for ffmpeg" projects rely only on the generic stderr output of ffmpeg. Other answers here on Stack Overflow and on Super User are satisfied with just "-v quiet -stats", since "progress" is not a very convenient parameter name to google for its specific uses. The best solution would be to force ffmpeg to write its "-progress" output to a separate pipe, since there is some useful data in stderr as well regarding the file being encoded, and I don't want to throw it away with "-v quiet". Though if there is a way to redirect the "-progress" output to stderr, that would be cool as well! Any pipe would be OK actually; I just can't figure out how to make ffmpeg write its "-progress" output to something other than a file on Windows. I tried "ffmpeg -progress stderr ...", but it just creates a file with that name.
-progress pipe:1 will write out to stdout, pipe:2 to stderr. If you aren't streaming from ffmpeg, use stdout.
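A sketch of reading the -progress output from stdout via a pipe in Python — the file names are examples, and -nostats just keeps the periodic stats line out of stderr:

```python
import subprocess

proc = subprocess.Popen(
    ["ffmpeg", "-i", "input.mp4", "-progress", "pipe:1", "-nostats", "output.mkv"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE,
)
# -progress emits key=value lines; out_time tracks the encoding position.
for raw in proc.stdout:
    line = raw.decode("utf-8").strip()
    if line.startswith("out_time="):
        print("progress:", line.split("=", 1)[1])
```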
1.2
true
1
5,919
2019-01-28 14:38:40.990
How can I check how often all list elements from a list B occur in a list A?
I have a python list A and a python list B with words as list elements. I need to check how often the list elements from list B are contained in list A. Is there a python method or how can I implement this efficient? The python intersection method only tells me that a list element from list B occurs in list A, but not how often.
You could convert list B to a set, so that checking whether an element is in B is faster. Then create a dictionary to count the number of times an element occurs in A if that element is also in the set of B. As mentioned in the comments, collections.Counter does the "heavy lifting" for you.
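A short sketch of the Counter approach with made-up lists:

```python
from collections import Counter

A = ["apple", "banana", "apple", "cherry", "banana", "apple"]
B = ["apple", "banana", "kiwi"]

counts_in_A = Counter(A)          # counts every word in A in one pass
wanted = set(B)                   # fast membership checks
result = {word: counts_in_A[word] for word in wanted}
print(result)                     # {'apple': 3, 'banana': 2, 'kiwi': 0}
```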
0
false
1
5,920
2019-01-29 07:42:00.640
Can't install packages via pip or npm
I'm trying to install some packages globally on my Mac, but I'm not able to install them via npm or pip, because I always get the message that the packages do not exist. For Python, I solved this by always using a virtualenv. But now I'm trying to install the @vue/cli via npm, and I'm not able to access it. The commands are working fine, but I'm just not able to access it. I think it has something to do with my $PATH, but I don't know how to fix that. If I look in Finder, I can find the @vue folder in /users/.../node_modules/. Does someone know how I can access this folder with the vue command in Terminal?
If it's a PATH problem:
1) Open up Terminal.
2) Run the following command: sudo nano /etc/paths
3) Enter your password, when prompted.
4) Check if the correct paths exist in the file or not.
5) Fix, if needed.
6) Hit Control-X to quit.
7) Enter "Y" to save the modified buffer.
Everything should work fine now. If it doesn't, try re-installing NPM/pip.
1.2
true
1
5,921
2019-01-31 10:19:40.180
How to get disk space total, used and free using Python 2.7 without PSUtil
Is there a way I get can the following disk statistics in Python without using PSUtil? Total disk space Used disk space Free disk space All the examples I have found seem to use PSUtil which I am unable to use for this application. My device is a Raspberry PI with a single SD card. I would like to get the total size of the storage, how much has been used and how much is remaining. Please note I am using Python 2.7.
You can do this with the os.statvfs function.
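A sketch of that for Python 2.7 (the question's version) — the mount point "/" is an assumption for a single-SD-card Pi:

```python
import os

st = os.statvfs("/")  # statistics for the filesystem holding "/"

total = st.f_blocks * st.f_frsize            # total size in bytes
free = st.f_bavail * st.f_frsize             # free space available to non-root users
used = (st.f_blocks - st.f_bfree) * st.f_frsize

print "total: %d MB" % (total / 1024 / 1024)
print "used:  %d MB" % (used / 1024 / 1024)
print "free:  %d MB" % (free / 1024 / 1024)
```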
0.201295
false
1
5,922
2019-02-01 14:09:13.800
How can a same entity function as a parameter as well as an object?
In the below operation, we are using a as an object as well as an argument.
a = "Hello, World!"
print(a.lower())  # a as an object
print(len(a))     # a as a parameter
May I know how exactly each operation differs in the way it accesses a?
Everything in Python (everything that can go on the rhs of an assignment) is an object, so what you pass as an argument to a function IS an object, always. Actually, those are totally orthogonal concepts: you don't "use" something "as an object" - it IS an object - but you can indeed "use it" (pass it) as an argument to a function / method / whatever callable. "May I know how exactly each operation differs in the way it accesses a?" Not by much actually (except for the fact that they do different things with a)... a.lower() is only syntactic sugar for str.lower(a) (obj.method() is syntactic sugar for type(obj).method(obj)), so in both cases you are "using a as an argument".
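A tiny demonstration of the equivalence described above:

```python
a = "Hello, World!"
print(a.lower())        # method-call syntax
print(str.lower(a))     # the same call spelled explicitly: a is the argument
print(len(a))           # here too, a is simply passed as an argument
```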
0.386912
false
1
5,923
2019-02-02 02:41:43.413
Loading and using a trained TensorFlow model in Python
I trained a model in TensorFlow using the tf.estimator API, more specifically using tf.estimator.train_and_evaluate. I have the output directory of the training. How do I load my model from this and then use it? I have tried using the tf.train.Saver class by loading the most recent ckpt file and restoring the session. However, then to call sess.run() I need to know what the name of the output node of the graph is so I can pass this to the fetches argument. What is the name/how can I access this output node? Is there a better way to load and use the trained model? Note that I have already trained and saved the model in a ckpt file, so please do not suggest that I use the simple_save function.
(Answering my own question) I realized that the easiest way to do this was to use the tf.estimator API. By initializing an estimator that warm-starts from the model directory, it's possible to just call estimator.predict with the correct args (an input_fn) and get the predictions immediately. It's not required to deal with the graph variables in any way.
0
false
1
5,924
2019-02-02 08:14:24.520
Best way to map words with multiple spellings to a list of key words?
I have a pile of ngrams of variable spelling, and I want to map each ngram to its best-match word out of a list of known desired outputs. For example, ['mob', 'MOB', 'mobi', 'MOBIL', 'Mobile'] maps to a desired output of 'mobile'. Each input from ['desk', 'Desk+Tab', 'Tab+Desk', 'Desktop', 'dsk'] maps to a desired output of 'desktop'. I have about 30 of these 'output' words, and a pile of a few million ngrams (far fewer unique). My current best idea was to get all unique ngrams, copy and paste them into Excel and manually build a mapping table; that took too long and isn't extensible. My second idea was something with fuzzy (fuzzywuzzy) matching, but it didn't match well. I'm not experienced in natural language terminology or libraries at all, so I can't find an answer to how this might be done better, faster and more extensibly as the number of unique ngrams increases or the 'output' words change. Any advice?
The classical approach would be to build a "feature matrix" for each ngram. Each ngram maps to an output, which is a categorical value between 0 and 29 (one for each class). A feature can, for example, be the cosine similarity given by fuzzywuzzy, but typically you need many more. Then you train a classification model based on the created features. This model can be almost anything: a neural network, a boosted tree, etc.
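As a self-contained stand-in for that pipeline, here is a sketch that uses a single similarity feature (difflib from the standard library instead of fuzzywuzzy) and an argmax in place of a trained classifier — the output list is truncated to two words, and the 0.3 cutoff is arbitrary:

```python
import difflib

OUTPUTS = ["mobile", "desktop"]  # in practice the full list of ~30 output words

def best_match(ngram, cutoff=0.3):
    # One similarity score per output word; argmax plays the classifier's role.
    scores = {out: difflib.SequenceMatcher(None, ngram.lower(), out).ratio()
              for out in OUTPUTS}
    best = max(scores, key=scores.get)
    return best if scores[best] >= cutoff else None

for ngram in ["MOBIL", "Tab+Desk", "dsk"]:
    print(ngram, "->", best_match(ngram))
```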
0.135221
false
1
5,925
2019-02-04 21:09:00.383
Use VRAM (graphics card memory) in pygame for images
I'm programming a 2D game with Python and Pygame and now I want to use my internal graphics memory to load images to. I have an Intel HD graphics card (2GB VRAM) and a Nvidia GeForce (4GB VRAM). I want to use one of them to load images from the hard drive to it (to use the images from there). I thought it might be a good idea as I don't (almost) need the VRAM otherwise. Can you tell me if and how it is possible? I do not need GPU-Acceleration.
You have to create your window with the FULLSCREEN, DOUBLEBUF and HWSURFACE flags. Then you can create and use a hardware surface by creating it with the HWSURFACE flag. You'll also have to use pygame.display.flip() instead of pygame.display.update(). But even pygame itself discourages using hardware surfaces, since they have a bunch of disadvantages:
- no mouse cursor
- only working in fullscreen (at least that's what pygame's documentation says)
- you can't easily manipulate the surfaces
- they may not work on all platforms (and I never got transparency to work with them)
And it's not even clear if you really get a notable performance boost. Maybe they'll work better in a future pygame release when pygame switches to SDL 2 and uses SDL_TEXTURE instead of SDL_HWSURFACE, who knows...
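A minimal sketch of the flag setup described above — the window size and image file name are examples:

```python
import pygame
from pygame.locals import FULLSCREEN, DOUBLEBUF, HWSURFACE

pygame.init()
# Hardware surfaces only have a chance of working in fullscreen mode.
screen = pygame.display.set_mode((800, 600), FULLSCREEN | DOUBLEBUF | HWSURFACE)

image = pygame.image.load("sprite.png").convert()  # file name is an example
screen.blit(image, (0, 0))
pygame.display.flip()  # required instead of pygame.display.update()
```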
1.2
true
1
5,926
2019-02-05 02:42:03.343
Installed Anaconda to macOS that has Python2.7 and 3.7. Pandas only importing to 2.7; how can I import to 3.7?
New to coding; I just downloaded the full Anaconda package for Python 3.7 onto my Mac. However, I can't successfully import Pandas into my program on SublimeText when running my Python3.7 build. It DOES work though, when I change the build to Python 2.7. Any idea how I can get it to properly import when running 3.7 on SublimeText? I'd just like to be able to execute the code within Sublime. Thanks!
Uninstall Python 2.7. Unless you use it, it's better to uninstall it.
0
false
1
5,927
2019-02-05 12:40:24.703
How to check learning feasibility on a binary classification problem with Hoeffding's inequality/VC dimension with Python?
I have a simple binary classification problem, and I want to assess the learning feasibility using Hoeffding's inequality and, if possible, the VC dimension. I understand the theory, but I am still stuck on how to implement it in Python. I understand that the in-sample error (Ein) is the training error, and the out-of-sample error (Eout) is the error on the test subsample, I guess. But how do I plot the difference between these two errors against Hoeffding's bound?
Well, here is how I handled it: I generate multiple train/test samples, run the algorithm on them, calculate Ein as the train-set error and estimate Eout by the test-set error, and count how many times their difference exceeds the value of epsilon (for a range of epsilons). Then I plot the curve of these exceedance rates together with the curve of the right-hand side of the Hoeffding/VC inequality, so I can see whether the differences curve always stays under the Hoeffding/VC bound curve; this tells me about the learning feasibility.
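A toy sketch of that experiment, assuming the Hoeffding bound for a single hypothesis, P(|Ein − Eout| > eps) ≤ 2·exp(−2·eps²·N); the "learner" is replaced by coin flips with a known true error of 0.5 purely to keep the sketch self-contained:

```python
import numpy as np

N, runs = 200, 1000                    # sample size and number of repeated experiments
eps = np.linspace(0.01, 0.5, 50)
exceed = np.zeros_like(eps)

for _ in range(runs):
    # Stand-in for "train on N points, measure Ein, estimate Eout on a test set":
    e_in = np.random.rand(N).round().mean()   # sample mean of Bernoulli(0.5)
    e_out = 0.5                               # known true error
    exceed += (abs(e_in - e_out) > eps)       # one count per epsilon it exceeds

rate = exceed / runs                          # empirical exceedance rate
bound = 2 * np.exp(-2 * eps ** 2 * N)         # Hoeffding bound, single hypothesis
for e, r, b in zip(eps[::10], rate[::10], bound[::10]):
    print("eps=%.2f  rate=%.3f  bound=%.3f" % (e, r, b))
```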
1.2
true
1
5,928
2019-02-06 20:20:54.933
python keeps saying that 'imput is undefined. how do I fix this?
Please help me with this. I'd really appreciate it. I have tried alot of things but nothing is working, Please suggest any ideas you have. This is what it keeps saying: name = imput('hello') NameError: name 'imput' is not defined
You misspelled input as imput. imput() is not a function that python recognizes - thus, it assumes it's the name of some variable, searches for wherever that variable was declared, and finds nothing. So it says "this is undefined" and raises an error.
1.2
true
1
5,929
2019-02-07 02:36:18.047
Understanding each component of a web application architecture
Here is a scenario for a system where I am trying to understand what is what: I'm Joe, a novice programmer, and I'm broke. I've got a Flask app and one physical machine. Since I'm broke, I cannot afford another machine for each piece of my system; thus the web server, application and database all live on my one machine. I've never deployed an app before, but I know that a server can refer to a machine or software. From here on, let's call the physical machine the Rack. I've loaded an instance of MongoDB on my machine and I know that is the Database Server. In order to handle API requests, I need something on the rack that will handle HTTP/S requests, so I install and run an instance of NGINX on it and I know that this is the Web Server. However, my web server doesn't know how to run the app, so I do some research, learn about WSGI and come to find out I need another component. So I install and run an instance of Gunicorn and I know that this is the WSGI Server. At this point I have a rack that is home to a web server to handle API calls (really it just acts as a reverse proxy and pushes requests to the WSGI server), a WSGI server that serves up dynamic content from my app, and a database server that stores client information used by the app. I think I've got my head on straight, then my friend asks: "Where is your Application Server?" Is there an application server in this configuration? Do I need one?
Any basic server architecture has three layers. On one end is the web server, which fulfills requests from clients. The other end is the database server, where the data resides. In between these two is the application server. It consists of the business logic required to interact with the web server to receive the request, and then with the database server to perform operations. In your configuration, the WSGI server/Flask app is the application server. Most application servers can double up as web servers.
0
false
1
5,930
2019-02-07 04:21:01.713
How keras model H5 works in theory
After training, the model is saved in H5 format. But I don't know how that H5 file can be used as a classifier to classify new data. How does an H5 model work, in theory, when classifying new data?
When you save your model as an h5 file, you save the model structure, all its parameters, and further information like the state of your optimizer, and so on. It is just an efficient way to save huge amounts of information; you could use JSON or XML file formats to do this as well. You can't classify anything using only this file (it is not executable). You have to rebuild the model as a TensorFlow graph from this file. To do so, you simply use the load_model() function from keras, which returns a keras.models.Model object. Then you can use this object to classify new data, with keras' predict() function.
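A short sketch of that load-and-predict flow — the file name and input shape are examples that must match your own model:

```python
import numpy as np
from keras.models import load_model

model = load_model("my_model.h5")     # rebuilds architecture + weights + optimizer

x_new = np.random.rand(1, 28, 28, 1)  # shape is an example -- match your input layer
probs = model.predict(x_new)          # class probabilities for the new sample
print(probs.argmax(axis=-1))          # predicted class index
```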
0.201295
false
1
5,931
2019-02-07 19:36:54.707
Using pyautogui with multiple monitors
I'm trying to use the pyautogui module for Python to automate mouse clicks and movements. However, it doesn't seem to be able to recognise any monitor other than my main one, which means I'm not able to input any actions on any of my other screens, and that is a huge problem for the project I am working on. I've searched Google for 2 hours but I can't find any straight answer on whether or not it's actually possible to work around. If anyone could either tell me whether it is or isn't possible, tell me how to do it if it is, or suggest an equally effective alternative (for Python), I would be extremely grateful.
Not sure if this is clear, but I subtracted an extended monitor's horizontal resolution from 0, because my 2nd monitor is to the left of my primary display, so its x coordinates are negative. That allowed me to avoid the out-of-bounds warning. My answer probably isn't the clearest, but I figured I would chime in to let folks know it actually can work.
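A sketch of that negative-coordinate trick — it assumes a 1920-pixel-wide monitor positioned to the LEFT of the primary display, as in the answerer's setup, and whether pyautogui honours negative coordinates at all can depend on the platform:

```python
import pyautogui

# x coordinates on the left monitor run from -1920 to -1 in this setup.
pyautogui.FAILSAFE = False        # the (0, 0) corner failsafe gets in the way here
pyautogui.moveTo(-1920 + 100, 500)
pyautogui.click()
```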
0
false
1
5,932
2019-02-07 21:14:35.190
How to encrypt(?) a document to prove it was made at a certain time?
So, a bit of a strange question, but let's say that I have a document (jupyter notebook) and I want to be able to prove to someone that it was made before a certain date, or that it was created on a certain date - does anyone have any ideas as to how I'd achieve that? It would need to be a solution that couldn't be technically re-engineered after the fact (faking the creation date). Keen to hear your thoughts :) !
Email it to yourself or a trusted party (suggested by dandavis in the comments). Good solution. Thanks!
0
false
1
5,933
2019-02-08 03:38:25.450
How to reset Colab after the following CUDA error 'Cuda assert fails: device-side assert triggered'?
I'm running my Jupyter Notebook using PyTorch on Google Colab. After I received the 'Cuda assert fails: device-side assert triggered' error, I am unable to run any other code that uses my PyTorch module. Does anyone know how to reset my code so that my PyTorch functions that were working before can still run? I've already tried setting CUDA_LAUNCH_BLOCKING=1, but my code still doesn't work as the assert is still triggered!
You need to reset the Colab notebook. To run existing PyTorch modules that used to work before, you have to do the following:
1. Go to 'Runtime' in the toolbar
2. Click 'Restart and run all'
This will reset your CUDA assert and flush out the module so that you can have another shot at avoiding the error!
1.2
true
1
5,934
2019-02-08 07:38:41.967
How change hostpython for use python3 on MacOS for compile Python+Kivy project for Xcode
I use the toolchain from kivy-ios to compile a Python + Kivy project on macOS, but by default the toolchain uses python2 recipes and I need to change to python3. I've been googling but I can't find how to do this. Any idea? Thanks
For example, recipe "ios" and "pyobjc" dependency is changed from depends = ["python"] to depends = ["python3"]. (__init__.py in each packages in receipe folder in kivy-ios package) These recipes are loaded from your request implicitly or explicitly This description of the problem recipes is equal to require hostpython2/python2. then conflict with python3. The dependency of each recipe can be traced from output of kivy-ios. "hostpython" or "python" in output(console) were equaled to hostpython2 or python2.(now ver.)
0
false
2
5,935
2019-02-08 07:38:41.967
How change hostpython for use python3 on MacOS for compile Python+Kivy project for Xcode
I use the toolchain from kivy-ios to compile a Python + Kivy project on macOS, but by default the toolchain uses python2 recipes and I need to change to python3. I've been googling but I can't find how to do this. Any idea? Thanks
Your Kivy installation is likely fine already; your kivy-ios installation is not. Completely remove your kivy-ios folder on your computer, then do git clone git://github.com/kivy/kivy-ios to reinstall kivy-ios. Then try using toolchain.py to build python3 instead of python2. This solution worked for me. Thanks very much, Erik.
1.2
true
2
5,935
2019-02-09 15:50:20.647
How to reach streaming learning in Neural network?
As the title says, I know there are some models supporting streaming learning, like classification models with a partial_fit() function. Now I'm studying regression models like SVR and the RF regressor etc. in scikit-learn, but most regression models don't support partial_fit. So I want to achieve the same effect in a neural network. If using TensorFlow, how can I do that? Is there a keyword to search for?
There is no special function for it in TensorFlow. You make a single training pass over a new chunk of data, then another training pass over another new chunk of data, and so on until you reach the end of the data stream (which, hopefully, will never happen).
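A minimal sketch of that chunk-by-chunk loop using Keras' train_on_batch — the model, shapes and random data are placeholders for a real regression network and stream:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(16, activation="relu", input_shape=(8,)),
                    Dense(1)])
model.compile(optimizer="adam", loss="mse")

def data_stream():
    while True:                         # stands in for an endless data source
        yield np.random.rand(32, 8), np.random.rand(32, 1)

for i, (x_chunk, y_chunk) in enumerate(data_stream()):
    loss = model.train_on_batch(x_chunk, y_chunk)  # one pass over the new chunk
    if i == 100:                        # stop condition just for the demo
        break
```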
0
false
1
5,936
2019-02-10 09:38:54.947
How to pickle or save a WxPython FontData Object
I've been coding a text editor, and it has a function to change the default font displayed in the wx.stc.StyledTextCtrl. I would like to be able to save the font as a user preference, and I have so far been unable to save it. The exact object type is <class 'wx._core.Font'>. Would anyone know how to pickle/save this?
Probably due to its nature, you cannot pickle a wx.Font. Your remaining option is to store its constituent parts. Personally, I store facename, point size, weight, slant, underline, text colour and background colour. How you store them is your own decision. I use 2 different options depending on the code:
- Store the entries in an sqlite3 database, which allows for multiple indexed entries.
- Store the entries in an .ini file using configobj.
sqlite3 is available in the standard Python library; configobj is installable from PyPI.
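A sketch of the store-the-parts idea — it keeps just the font attributes (the colours, which the answer stores separately, are omitted):

```python
import pickle
import wx

app = wx.App(False)   # wx objects generally need an App instance to exist first

def font_to_dict(font):
    # The constituent parts mentioned above.
    return {"face": font.GetFaceName(), "size": font.GetPointSize(),
            "style": font.GetStyle(), "weight": font.GetWeight(),
            "underline": font.GetUnderlined()}

def dict_to_font(d):
    return wx.Font(d["size"], wx.FONTFAMILY_DEFAULT, d["style"],
                   d["weight"], d["underline"], d["face"])

original = wx.Font(12, wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL,
                   wx.FONTWEIGHT_NORMAL)
saved = pickle.dumps(font_to_dict(original))   # the dict, unlike wx.Font, pickles fine
restored = dict_to_font(pickle.loads(saved))
```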
1.2
true
1
5,937
2019-02-10 09:51:41.193
how to decode gzip string in JS
I have a Django app, and in one of its views I am using a gzip_str(str) method to compress data and send it back to the browser. Now I want to get the original string back in the browser. How can I decode the string in JS? P.S. I have found a few questions here related to JavaScript decoding of gzipped strings, but I could not figure out how to use them. Please tell me how I can decode and get the original string.
Serve the string with an appropriate Content-Encoding, then the browser will decode it for you.
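A sketch of what that looks like in a Django view, reusing the gzip_str helper mentioned in the question:

```python
from django.http import HttpResponse

def compressed_view(request):
    payload = gzip_str("hello browser")          # the question's gzip helper
    response = HttpResponse(payload, content_type="text/plain")
    response["Content-Encoding"] = "gzip"        # browser decompresses transparently
    return response
```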
0
false
1
5,938
2019-02-10 15:03:18.307
How to remove unwanted python packages from the Base environment in Anaconda
I am using Anaconda. I would like to know how to remove or uninstall unwanted packages from the base environment. I am using another environment for my coding. I tried to update that environment using a yml file (not the base environment), but unexpectedly some packages were installed into the base environment. So now it has 200 Python packages, many of which also exist in my other environment. I want to clear unwanted packages from the base environment; I am not using any packages there. Also, my disk is full because of this. Please give me a solution to remove unwanted packages from the base environment in Anaconda. It is very hard to remove each package one by one, so I am looking for a better solution.
Please use the below code:
conda uninstall -n base <Package_name>
0
false
1
5,939
2019-02-11 00:05:55.277
Pythonic way to split project into modules?
Say there is a module a which, among all other stuff, exposes some submodule a.b. AFAICS, it is desirable to maintain modules in such a fashion that one types import a, import a.b and then invokes something b-specific in the following way: a.b.b_specific_function() or a.a_specific_function(). The question I'd like to ask is how to achieve such an effect? There is a directory a and there is a source-code file a.py inside it. That seems the logical choice, though it would look like import a.a then, rather than import a. The only way I see is to put a.py's code into the __init__.py in the a directory, though that is definitely wrong... So how do I keep my namespaces clean?
You can put the code into __init__.py. There is nothing wrong with this for a small subpackage. If the code grows large it is also common to have a submodule with a repeated name like a/a.py and then inside __init__.py import it using from .a import *.
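A small sketch of the layout the answer describes (the module and function names are the question's own):

```python
# Layout on disk:
#   a/
#     __init__.py
#     a.py        <- defines a_specific_function()
#     b.py        <- defines b_specific_function()

# a/__init__.py
from .a import *      # re-export, so a.a_specific_function() works
from . import b       # so "import a" also makes a.b available

# Client code:
#   import a
#   a.a_specific_function()
#   a.b.b_specific_function()
```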
1.2
true
1
5,940
2019-02-11 11:28:57.127
Fastest way in numpy to sum over upper triangular elements with the least memory
I need to perform a summation of the kind i<j on symmetric matrices. This is equivalent to sum over the upper triangular elements of a matrix, diagonal excluded. Given A a symmetric N x N array, the simplest solution is np.triu(A,1).sum() however I was wondering if faster methods exist that require less memory. It seems that (A.sum() - np.diag(A).sum())/2 is faster on large array, but how to avoid creating even the N x 1 array from np.diag? A doubly nested for loop would require no additional memory, but it is clearly not the way to go in Python.
The fastest method with the least memory, in pure numpy, is going to be to sum the entire thing and subtract the diagonal. It may feel wasteful in terms of FLOPS, but note that the theoretical savings relative to that implementation are only a factor of 2. If that means anything to you, you probably should not be using numpy in the first place. Also, numpy fundamentally deals with blocks of memory addressable as strided views. If you could get a single strided view onto your triangle, it might lead to an efficient numpy implementation. But you can't (proof left as an exercise to the reader), so you can safely forget about any true numpy solution that isn't a call to an optimized C routine that solves your problem for you. And none that I am aware of exists. But even that 'optimized' C loop may in practice get its ass kicked by A.sum(). If A is contiguous, that sum has the potential to dispatch a maximally cache-optimized and SIMD-optimized code path. Likely, any vanilla C you'd write yourself would get absolutely demolished by A.sum() in a benchmark.
0.101688
false
2
5,941
2019-02-11 11:28:57.127
Fastest way in numpy to sum over upper triangular elements with the least memory
I need to perform a summation of the kind i<j on symmetric matrices. This is equivalent to sum over the upper triangular elements of a matrix, diagonal excluded. Given A a symmetric N x N array, the simplest solution is np.triu(A,1).sum() however I was wondering if faster methods exist that require less memory. It seems that (A.sum() - np.diag(A).sum())/2 is faster on large array, but how to avoid creating even the N x 1 array from np.diag? A doubly nested for loop would require no additional memory, but it is clearly not the way to go in Python.
You can replace np.diag(A).sum() with np.trace(A); this will not create the temporary Nx1 array
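Putting both answers together, a quick check that the trace version matches np.triu on a small symmetric array (the 4x4 array is just for the demo):

```python
import numpy as np

A = np.arange(16, dtype=float).reshape(4, 4)
A = (A + A.T) / 2                        # make it symmetric for the demo

upper = (A.sum() - np.trace(A)) / 2      # no temporary N x 1 diagonal array
assert np.isclose(upper, np.triu(A, 1).sum())
```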
1.2
true
2
5,941
2019-02-11 17:01:42.323
How to create communication between python(not web) and angularjs
I have an AngularJS project and a Python project - AngularJS for the frontend, and the Python part for the training done for face recognition. I wanted to know if there is a way for my Angular code to communicate with Python and, if it can, how to use the functionality of the Python project from Angular.
Create an API around the Python code (for example as a small web service), then call it with HTTP GET/PUT etc. from Angular.
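A minimal sketch of such an endpoint with Flask — the route, port and run_face_recognition are hypothetical placeholders; Angular would then call it with its HTTP client (e.g. POSTing an image to http://localhost:5000/api/recognize):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_face_recognition(data):
    # Stand-in for the real recognition code.
    return len(data) > 0

@app.route("/api/recognize", methods=["POST"])   # endpoint name is an example
def recognize():
    image_bytes = request.files["image"].read()  # file field posted by Angular
    return jsonify({"match": run_face_recognition(image_bytes)})

if __name__ == "__main__":
    app.run(port=5000)
```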
0
false
1
5,942
2019-02-12 01:46:25.987
How can I convert .mat files to NumPy files in Python?
So I have a .mat file. It is a little over 1 GB, but I don't know how much data or how many lines are in it. I want to convert this .mat file to a NumPy file in Python so I can look at the data and see what is in it. How do I do this conversion?
I think you have two options to read it.
1. Reading it in Python:
import scipy.io
mat = scipy.io.loadmat('fileName.mat')
2. Converting it to .csv in MATLAB in order to read it in Python later:
FileData = load('FileName.mat');
csvwrite('FileName.csv', FileData.M);
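To get an actual NumPy file out of option 1, a sketch — 'M' is a placeholder for whatever variable name shows up in the loaded dict's keys:

```python
import numpy as np
import scipy.io

mat = scipy.io.loadmat('fileName.mat')   # dict of variable name -> ndarray
print(mat.keys())                        # see what the file actually contains

np.save('fileName.npy', mat['M'])        # 'M' is a placeholder key
data = np.load('fileName.npy')
```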
0.386912
false
1
5,943
2019-02-12 12:10:46.883
Does pandas read the whole file even when usecols is used?
I'm using pandas to read a file inside a REST service. The file is huge, with more than 100 columns, but I only want to read the first two. I know I can use usecols in read_csv, but I was wondering how exactly it works. Does pandas read the whole file and filter out the required columns, or does it only read the required columns? I'm asking because I don't want to overload the memory.
According to the documentation, it will read the whole file (there is no way to read only selected columns from disk), but it will only parse and store the columns given in the usecols parameter (emphasis mine): usecols : list-like or callable, optional - Return a subset of the columns... Using this parameter results in much faster parsing time and lower memory usage.
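A quick example — the column names in the second variant are made up:

```python
import pandas as pd

# Only the first two columns are parsed and stored; the rest of each line
# is read from disk but discarded immediately.
df = pd.read_csv("huge_file.csv", usecols=[0, 1])
# or by name:
df = pd.read_csv("huge_file.csv", usecols=["url", "domain"])
```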
1.2
true
1
5,944
2019-02-13 19:34:47.163
How to use multiple threads to execute the same code and speed it up?
I'm facing some performance issues executing a fuzzy match based on the Levenshtein distance algorithm. I'm comparing two lists: a small one with 1k lines and a second one with 10k lines. I split the bigger list into 10 files of 1000 lines to check speed, but I noticed that Python is using only 1 thread. I have googled many articles where people explain how to execute TWO different functions in parallel, but I would like to know how to execute the SAME code in multiple threads. For example: it's taking 1 second to compare 1 word against 1000 lines; I would like to split this time across 4 threads. Is it possible? Sorry for the long text and thanks a lot for your help!
Running the same code in two or more threads won't help performance here: for CPU-bound work, CPython's GIL lets only one thread execute Python code at a time. You could instead split the task up so each worker handles 250 lines, give each chunk to a separate process, and then combine the results at the end.
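A sketch of that process-based split with multiprocessing.Pool — match_word and both lists are placeholders for the real Levenshtein comparison and data:

```python
from multiprocessing import Pool

big_list = ["example"] * 1000   # placeholder for the 10k-line list

def match_word(word):
    # Stand-in for the Levenshtein comparison of one word against big_list.
    return word, min(abs(len(word) - len(other)) for other in big_list)

if __name__ == "__main__":
    small_list = ["word"] * 1000
    with Pool(4) as pool:                   # 4 worker processes, not threads
        results = pool.map(match_word, small_list, chunksize=250)
```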
0
false
1
5,945
2019-02-14 14:37:17.357
Create tables from Excel column headers using Python and load data?
Background of our environment: the Data Warehouse system runs on SQL Server 2012; data sources are Excel files and other APIs.
Issue: the business metrics and source files change frequently, and the data load fails for multiple reasons:
- Column mismatch
- Data type mismatch
- Wrong files
- Old or same file, updated twice
Some of the above issues are managed via process guidelines and others at the SQL level. But whenever a new file / column is added, a developer has to manually add the column / table for that change to take effect. Most of the time, the changes come to light only after the job fails or a huge data quality / mismatch issue is identified.
Question: Is there any way this can be automated using Python / PowerShell / any other scripting language? In a way that, whenever source files are ready, it can read them and do the below steps:
1. Read the column headers.
2. Generate SQL for the table structure from the identified column headers and create a temporary (staging) table.
3. Load the data into the newly created temporary table.
4. After some basic data processing, load the data into the main table (presentation area), mostly through SQL.
Challenges: There are 18 unique files, and each file's columns are different and may be modified or added to at any time according to business requirements. When there is an addition of a column, how do I add that column to the main table - is altering the table a good idea here? Is it okay to do it via script?
Note: We have control only over the source data file; we cannot change how the source file is generated or when a new column can be added to it. I am not sure whether to ask this question on SO or DBA SE, so if it does not fit here, please move it to the appropriate forum.
1. I'm guessing you can identify the file types based on file names or headers. You could create an SSIS package with a source script inside a foreach loop. For the script, define the input and output columns manually with generic names and a fixed string length: ColumnNr1, ColumnNr2, ... ColumnNrN (where N is the maximum number of columns across your files, plus 10 for safety). Create a staging table using the same logic (ColumnNr1, ColumnNr2, ...); this will be used for all the files, if the file load is sequential (as I have assumed). In your script, read the header and insert it into a data table or list, compare the number of columns between the file header and the final table, create ALTER TABLE statements for new columns based on the differences and execute them, then send the column data from the file to the output buffer columns.
2. Create a dynamic SQL procedure based on your data processing needs.
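The question also asks whether this can be scripted in Python; a minimal sketch of steps 1-2 (reading headers and generating staging-table DDL) might look like this — the table and file names are placeholders, and every column is typed NVARCHAR(255) purely for illustration:

```python
import pandas as pd

def staging_table_sql(xlsx_path, table_name):
    # Read only the header row; stage every column as text for safety.
    columns = pd.read_excel(xlsx_path, nrows=0).columns
    cols_sql = ",\n  ".join("[%s] NVARCHAR(255)" % c for c in columns)
    return "CREATE TABLE [stg].[%s] (\n  %s\n);" % (table_name, cols_sql)

print(staging_table_sql("source_file.xlsx", "Staging_SourceFile"))
```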
0
false
1
5,946
2019-02-14 23:36:33.280
Block indent jupyter notebook
Does anyone know how to get a command shortcut to work for block indenting and un-indenting in Jupyter notebooks?
In the Jupyter notebook command group there is a command "automatically indent selection". When I put in a command-mode Ctrl-/ for that command, the notebook does block commenting.
I don't see any other command that refers to indenting. I can't seem to figure this out.
It's Tab. If you want to unindent, then Shift+Tab. You need to have multiple lines selected, otherwise Tab is treated as code completion (intellisense)...
1
false
2
5,947
2019-02-14 23:36:33.280
Block indent jupyter notebook
Does anyone know how to get a command shortcut to work for block indenting and un-indenting in Jupyter notebooks?
In the Jupyter notebook command group there is a command "automatically indent selection". When I put in a command-mode Ctrl-/ for that command, the notebook does block commenting.
I don't see any other command that refers to indenting. I can't seem to figure this out.
In JupyterLab, Ctrl+[ and Ctrl+] work for indentation/deindentation. These are also guaranteed to work, whereas Tab and Shift+Tab can trigger actions like intellisense if your cursor is in the middle of a line.
0
false
2
5,947
2019-02-15 00:17:46.583
How to create a 2 value data table for keras
I am trying to make my first neural network in keras (python) that takes in the x and y distances to the next pipe and outputs whether or not the bird should flap. How would I go about creating an input data set from the game and then turning that into something keras can use for training? I don't have very much knowledge in this area and my high school computer science teachers don't know either, therefore I'm not quite sure where to start. I have a very, very basic understanding of Keras and NN concepts. I have tried using .csv files with pandas but I am not sure how to turn that into useable data.
Sir, even though I'm new to neural networks I have some knowledge: if you want to do this in exactly this manner then I'm no help, but you could try doing it with a genetic algorithm, which will surely work for this.
0
false
1
5,948
2019-02-16 07:10:43.833
How to make functions appear purple
So, having just learned how to make functions, I wondered if I could turn a function purple, just like a normal print() or str() function. With that in mind, it may seem pretty obvious that I am still a beginner when it comes to coding. From what I know, it may have something to do with sys.stdin.write, but I don't know. This will help me make different languages for others who don't speak or write English.
In Options => Configure IDLE => Settings => Highlights there is a highlight setting for builtin names (default purple), including a few non-functions like Ellipsis. There is another setting for the names in def (function) and class statements (default blue). You can make def (and class) names be purple also. This will not make function names purple when used because the colorizer does not know what the name will be bound to when the code is run.
1.2
true
1
5,949
2019-02-17 13:30:30.583
Count number of Triggers in a given Span of Time
I've been working for a while with some cheap PIR modules and a Raspberry Pi 3. My aim is to use 4 of these guys to understand if a room is empty, and turn off some lights if so. Now, these lovely sensors aren't really precise. They false-trigger from time to time, and they don't trigger right after their status has changed, and this makes things much harder. I thought I could solve the problem by measuring a sort of "density of triggers", meaning how many triggers occurred during the last 60 seconds or so. My question is how could I implement this solution effectively? I thought to build a sort of container and fill it with elements with a timer or something, but I'm not really sure this would do the trick. Thank you!
How are you powering the PIR sensors? They should be powered with 5V. I had a similar problem with false triggers when I powered the PIR sensor with only 3.3V.
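Power aside, the trigger-density idea from the question can be sketched with a sliding window of timestamps — the 60-second window is the question's own figure, and the callback names are placeholders:

```python
import time
from collections import deque

WINDOW = 60.0          # seconds
triggers = deque()     # timestamps of recent PIR triggers

def record_trigger():
    # Call this from the GPIO callback of each sensor.
    triggers.append(time.monotonic())

def trigger_density():
    """Number of triggers in the last WINDOW seconds."""
    cutoff = time.monotonic() - WINDOW
    while triggers and triggers[0] < cutoff:
        triggers.popleft()          # drop events that fell out of the window
    return len(triggers)

# In the control loop (hypothetical helpers):
#   turn_lights_off() if trigger_density() == 0 else keep_lights_on()
```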
0
false
1
5,950
2019-02-18 02:33:10.543
While debugging in pycharm, how to debug only through a certain iteration of the for loop?
I have a for loop in Python in the PyCharm IDE. The loop has 20 iterations; however, the bug seems to come from the dataset looped over during the 18th iteration. Is it possible to skip the first 17 values of the for loop and jump straight to debugging the 18th iteration? Currently, I have been stepping through all 17 iterations to reach the 18th. The logic in the for loop is quite intricate and long, so every debug cycle through each iteration takes a very long time. Is there some way to skip to the desired iteration in PyCharm without in-depth debugging of the previous iterations?
You can set a breakpoint with a condition (i == 17; right-click on the breakpoint to set it) at the start of the loop.
-0.135221
false
1
5,951
2019-02-18 17:11:25.750
How to evaluate the path to a python script to be executed within Jupyter Notebook
Note: I am not simply asking how to execute a Python script within Jupyter, but how to evaluate a python variable which would then result in the full path of the Python script I was to execute. In my particular scenario, some previous cell on my notebook generates a path based on some condition. Example on two possible cases: script_path = /project_A/load.py script_path = /project_B/load.py Then some time later, I have a cell where I just want to execute the script. Usually, I would just do: %run -i /project_A/load.py but I want to keep the cell's code generic by doing something like: %run -i script_path where script_path is a Python variable whose value is based on the conditions that are evaluated earlier in my Jupyter notebook. The above would not work because Jupyter would then complain that it cannot find script_path.py. Any clues how I can have a Python variable passed to the %run magic?
One hacky way would be to change the directory via %cd path and then run the script with %run -i file.py E: I know that this is not exactly what you were asking but maybe it helps with your problem.
0
false
1
5,952
2019-02-19 09:11:19.870
How to use pretrained word2vec vectors in doc2vec model?
I am trying to implement doc2vec, but I am not sure what the input for the model should look like if I have pretrained word2vec vectors. The problem is that I am not sure how, in theory, to use pretrained word2vec vectors for doc2vec. I imagine I could prefill the hidden layer with the vectors and fill the rest of the hidden layer with random numbers. Another idea is to use the vectors as input for the words instead of a one-hot encoding, but I am not sure if the output vectors for docs would make sense. Thank you for your answer!
You might think that Doc2Vec (aka the 'Paragraph Vector' algorithm of Mikolov/Le) requires word-vectors as a 1st step. That's a common belief, and perhaps somewhat intuitive, by analogy to how humans learn a new language: understand the smaller units before the larger, then compose the meaning of the larger from the smaller. But that's a common misconception, and Doc2Vec doesn't do that. One mode, pure PV-DBOW (dm=0 in gensim), doesn't use conventional per-word input vectors at all. And, this mode is often one of the fastest-training and best-performing options. The other mode, PV-DM (dm=1 in gensim, the default) does make use of neighboring word-vectors, in combination with doc-vectors in a manner analogous to word2vec's CBOW mode – but any word-vectors it needs will be trained up simultaneously with doc-vectors. They are not trained 1st in a separate step, so there's not an easy splice-in point where you could provide word-vectors from elsewhere. (You can mix skip-gram word-training into the PV-DBOW, with dbow_words=1 in gensim, but that will train word-vectors from scratch in an interleaved, shared-model process.) To the extent you could pre-seed a model with word-vectors from elsewhere, it wouldn't necessarily improve results: it could easily send their quality sideways or worse. It might in some lucky well-managed cases speed model convergence, or be a way to enforce vector-space compatibility with an earlier vector set, but not without extra gotchas and caveats that aren't a part of the original algorithms, or well-described practices.
1.2
true
1
5,953
2019-02-21 02:24:51.223
How to convert every other character in a string to ascii in Python?
I know how to convert characters to ascii and stuff, and I'm making my first encryption algorithm just as a little fun project, nothing serious. I was wondering if there was a way to convert every other character in a string to ascii, I know this is similar to some other questions but I don't think it's a duplicate. Also P.S. I'm fairly new to Python :)
Use the ord() function to get the ASCII value of a character. You can then use chr() on that value to get the character back.
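A sketch of the every-other-character idea using ord() — here characters at odd positions are converted and the rest kept:

```python
text = "encrypt me"
# Replace characters at odd indices with their ASCII code, keep the rest.
out = "".join(str(ord(ch)) if i % 2 else ch for i, ch in enumerate(text))
print(out)   # e110c114y112t32m101
```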
0
false
1
5,954
2019-02-21 05:36:13.653
Run python script by PHP from another server
I am making APIs. I'm using CentOS for the web server, and another Windows Server 2016 machine as the API server. I'm trying to make things work between the web server and the Windows server. My logic follows this flow:
1) Fill in the data form and click a button on the web server
2) Send the data to the Windows server
3) A Python script runs and produces more data
4) The produced data must be sent back to the web server
5) The web server receives the produced data
6) BAMM! The data appears in the browser!
I have already written the Python scripts, but I can't decide how to move the data between the two servers. Should I use Ajax or curl on the web server? I was planning to send a POST request with curl from the web server to the Windows server, but I don't know how to receive that data on the Windows server. Please help! Thank you in advance.
First option (recommended): create the Python side as an API endpoint, and call that API from the PHP server. Second option: create the Python side as a normal web page; whenever you call that page from the PHP server, pass the params along with the HTTP request, and after receiving the data in Python, print it back in JSON format.
1.2
true
1
5,955
2019-02-21 11:00:17.487
Kivy Android App - Switching screens with a swipe
Every example I've found thus far for development with Kivy in regards to switching screens is always done using a button, although the user experience doesn't feel very "native" or "smooth" for the kind of app I would like to develop. I was hoping to incorporate swiping the screen to change the active screen. I can sort of imagine how to do this by tracking the user's on_touch_down() and on_touch_up() coordinates (spos) and, if the difference is great enough, switching over to the next screen in a list of screens, although I can't envision how this could be implemented within the kv language. Perhaps some examples could help me wrap my head around this better? P.S. I want to keep as much UI code within the kv language file as possible to prevent my project from having a spaghetti-code sort of feel to it. I'm also rather new to Kivy development altogether, so I apologize if this question has an official answer somewhere and I just missed it.
You might want to use a Carousel instead of ScreenManager, but if you want that logic while using the ScreenManager, you'll certainly have to write some python code to manage that in a subclass of it, then use it in kv as a normal ScreenManager. Using previous and next properties to get the right screen to switch to depending on the action. This kind of logic is better done in python, and that doesn't prevent using the widgets in kv after.
1.2
true
1
5,956
2019-02-21 14:37:29.177
is it possible to code in python inside android studio?
Is it possible to code in Python inside Android Studio? How can I do it? I have an Android app that I am trying to develop, and I want to code some part of it in Python. Thanks for the help.
If you mean coding part of your Android application in Python (and another part, for example, in Java), it's not possible for now. However, you can write a Python script and include it in your project, then write a part in your application that will invoke it somehow. Also, you can use Android Studio as a text editor for Python scripts. To develop apps for Android in Python you have to use a proper library for it.
1.2
true
1
5,957
2019-02-22 09:08:55.793
How to create .cpython-37 file, within __pycache__
I'm working on a project with a few scripts in the same directory. A __pycache__ folder has been created within that directory; it contains compiled versions of two of my scripts. This happened by accident and I do not know how. One thing I do know is that I have imported functions between the two scripts that were compiled. I would like a third compiled Python script for a separate file; however, I do not want to import any modules (if that is even what triggers it). Does anyone know how I can manually create a .cpython-37 file? Any help is appreciated.
There is really no reason to worry about __pycache__ or *.pyc files - these are created and managed by the Python interpreter when it needs them and you cannot / should not worry about manually creating them. They contain a cached version of the compiled Python bytecode. Creating them manually makes no sense (and I am not aware of a way to do that), and you should probably let the interpreter decide when it makes sense to cache the bytecode and when it doesn't. In Python 3.x, __pycache__ directories are created for modules when they are imported by a different module. AFAIK Python will not create a __pycache__ entry when a script is ran directly (e.g. a "main" script), only when it is imported as a module.
1.2
true
1
5,958
2019-02-22 10:05:07.620
Install python packages in windows server 2016 which has no internet connection
I need to install Python packages in a Windows Server 2016 sandbox for running a developed Python model in production. This server doesn't have an internet connection. My laptop runs Windows 10, and the model is now running on my machine; I need to push it to the server. My question is how I can install all the required packages on my server which has no internet connection. Thanks, Mithun
A simple way is to install the same Python version on another machine that has internet access, and use pip normally on that machine. This will download a bunch of files and install them cleanly under Lib\site-packages of your Python installation. You can then copy that folder to the server's Python installation. If you want to be able to add packages later, you should keep both installations in sync: do not add or remove any package on the laptop without syncing with the server.
0
false
1
5,959
2019-02-22 18:47:07.843
How to write unit tests for text parser?
For background, I am somewhat of a self-taught Python developer with only some formal training from a few CS courses in school. In my job right now, I am working on a Python program that will automatically parse information from a very large text file (thousands of lines) that's the output of a simulation software package. I would like to be doing test-driven development (TDD) but I am having a hard time understanding how to write proper unit tests. My trouble is that the outputs of some of my functions (units) are massive data structures that are parsed versions of the text file. I could go through and create those outputs manually and then test, but it would take a lot of time. The whole point of a parser is to save time and create structured outputs. The only testing I've done so far is manual trial and error, which is also cumbersome. So my question is: are there more intuitive ways to create tests for parsers? Thank you in advance for any help!
Usually parsers are tested using a regression testing system. You create sample input sets and verify that the output is correct. Then you put the input and output in libraries. Each time you modify the code, you run the regression test system over the library to see if anything changes.
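A minimal sketch of such a regression test — the parser module, function, file names and directory layout are hypothetical stand-ins:

```python
import json
import unittest

from myparser import parse_simulation_output   # hypothetical parser under test

class TestParser(unittest.TestCase):
    def test_small_sample(self):
        # A short, hand-checkable excerpt of a real simulation output file.
        with open("tests/sample_input.txt") as f:
            result = parse_simulation_output(f.read())
        # The expected structure was verified once by hand, then frozen to disk.
        with open("tests/expected_output.json") as f:
            self.assertEqual(result, json.load(f))

if __name__ == "__main__":
    unittest.main()
```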
0.673066
false
1
5,960
2019-02-22 20:17:16.640
Specific reasons to favor pip vs. conda when installing Python packages
I use miniconda as my default python installation. What is the current (2019) wisdom regarding when to install something with conda vs. pip? My usual behavior is to install everything with pip, and only using conda if a package is not available through pip or the pip version doesn't work correctly. Are there advantages to always favoring conda install? Are there issues associated with mixing the two installers? What factors should I be considering? OBJECTIVITY: This is not an opinion-based question! My question is when I have the option to install a python package with pip or conda, how do I make an informed decision? Not "tell me which is better, but "Why would I use one over the other, and will oscillating back & forth cause problems / inefficiencies?"
This is what I do:
1. Activate your conda virtual env
2. Use pip to install into your virtual env
3. If you face any compatibility issues, use conda
I recently ran into this when numpy / matplotlib freaked out and I used the conda build to resolve the issue.
0.327599
false
1
5,961
2019-02-24 14:21:54.997
how can I use python 3.6 if I have python 3.7?
I'm trying to make a discord bot, and I read that I need to have an older version of Python so my code will work. I've tried using "import discord" on IDLE but an error message keeps on coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?
A few options:
- Install 3.6 into a different folder than your Python 3.7, then update your PATH
- Use virtualenv and/or pyenv
- Use Docker
Hope it helps!
0
false
2
5,962
2019-02-24 14:21:54.997
how can I use python 3.6 if I have python 3.7?
I'm trying to make a discord bot, and I read that I need to have an older version of Python so my code will work. I've tried using "import discord" on IDLE but an error message keeps on coming up. How can I use Python 3.6 and keep Python 3.7 on my Windows 10 computer?
Just install it in a different folder (e.g. if the current one is in C:\Users\noob\AppData\Local\Programs\Python\Python37, install 3.6 to C:\Users\noob\AppData\Local\Programs\Python\Python36). Now, when you want to run a script, right-click the file and under "Edit with IDLE" there will be multiple versions to choose from. Works on my machine :)
0
false
2
5,962
2019-02-25 15:00:24.023
Is a Pyramid "model" also a Pyramid "resource"?
I'm currently in the process of learning how to use the Python Pyramid web framework, and have found the documentation to be quite excellent. I have, however, hit a stumbling block when it comes to distinguishing the idea of a "model" (i.e. a class defined under SQLAlchemy's declarative system) from the idea of a "resource" (i.e. a means of defining access control lists on views for use with Pyramid's auth system). I understand the above statements seem to show that I already understand the difference, but I'm having trouble understanding whether I should be making models resources (by adding the __acl__ attribute directly in the model class) or creating a separate resource class (which has the proper __parent__ and __name__ attributes) which represents the access to a view which uses the model. Any guidance is appreciated.
"I'm having trouble understanding whether I should be making models resources (by adding the __acl__ attribute directly in the model class) or creating a separate resource class." The answer depends on what level of coupling you want to have. For a simple app, I would recommend making models resources, just for simplicity's sake. But for a complex app with a high level of cohesion and a low level of coupling, it would be better to have models separated from resources.
0.201295
false
1
5,963
2019-02-25 22:42:31.903
Python Gtk3 - Custom Statusbar w/ Progressbar
Currently I am working to learn how to use Gtk3 with Python 3.6. So far I have been able to use a combination of resources to piece together a project I am working on: some old 2.0 references, some 3.0 shallow reference guides, and the python3 interpreter's help function. However I am stuck at how I could customise the statusbar to display a progressbar. Would I have to modify the contents of the statusbar to add it to the end (so it shows up at the right side), or is it better to build my own statusbar? Also, how could I modify the progressbar's color? Nothing in the materials lists a method/property for it.
GtkStatusbar is a subclass of GtkBox. You can use any GtkBox method, including pack_start and pack_end, or even add, which is a method of GtkContainer. Thus you can simply add your progressbar to the statusbar.
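For example, a minimal sketch (the window layout and labels are illustrative):

```python
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

win = Gtk.Window(title="Statusbar demo")
box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)

statusbar = Gtk.Statusbar()
statusbar.push(statusbar.get_context_id("info"), "Ready")

# Gtk.Statusbar is a Gtk.Box, so pack_end places the progressbar
# at the right-hand side of the statusbar.
progress = Gtk.ProgressBar()
progress.set_fraction(0.4)
statusbar.pack_end(progress, False, False, 0)

box.pack_end(statusbar, False, False, 0)
win.add(box)
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()
```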
1.2
true
1
5,964
2019-02-26 04:59:25.937
Can a consumer read records from a partition that stores data of particular key value?
Instead of creating many topics I'm creating a partition for each consumer and storing data using a key. So is there a way to make a consumer in a consumer group read from the partition that stores data of a specific key? If so, can you suggest how it can be done using kafka-python (or any other library)?
Instead of using the subscription and the related consumer group logic, you can use the "assign" logic (it's provided by the Kafka consumer Java client, for example). While with subscription to a topic and being part of a consumer group the partitions are automatically assigned to consumers and re-balanced when a new consumer joins or leaves, it's different using assign. With assign, the consumer asks to be assigned to a specific partition. It's not part of any consumer group. It also means that you are in charge of handling rebalancing if a consumer dies: for example, if consumer 1 gets assigned partition 1 but at some point it crashes, partition 1 won't be reassigned automatically to another consumer. It's up to you to write and handle the logic for restarting the consumer (or another one) to get messages from partition 1.
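With kafka-python that looks roughly like this (the broker address, topic name, and partition number are placeholders):

```python
from kafka import KafkaConsumer, TopicPartition

# No group_id: this consumer is not part of a consumer group.
consumer = KafkaConsumer(bootstrap_servers='localhost:9092')

# Manually take partition 0 of the topic; if records with a given key
# always go to this partition, the consumer reads only that key's data.
consumer.assign([TopicPartition('my-topic', 0)])

for message in consumer:
    print(message.partition, message.key, message.value)
```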
0
false
1
5,965
2019-02-26 08:57:02.207
How to increase fps for Raspberry Pi for object detection
I'm having low fps for real-time object detection on my Raspberry Pi. I trained the yolo-darkflow object detection on my own data set using my laptop running Windows 10. When I tested the model for real-time detection on my laptop with a webcam it worked fine with high fps. However, when trying to test it on my Raspberry Pi, which runs on Raspbian OS, it gives a very low fps rate of about 0.3, but when I only use the webcam without yolo it works fine with fast frames. Also, when I use the Tensorflow API for object detection with the webcam on the Pi, it works fine with high fps. Can someone suggest something, please? Is the reason related to the yolo models, opencv or Python? How can I make the fps rate higher and faster for object detection with the webcam?
My detector on Raspberry Pi without any accelerator can reach 5 FPS. I used SSD MobileNet and quantized it after training. Tensorflow Lite supplies an object detection demo that can reach about 8 FPS on Raspberry Pi 4.
0
false
2
5,966
2019-02-26 08:57:02.207
How to increase fps for Raspberry Pi for object detection
I'm having low fps for real-time object detection on my Raspberry Pi. I trained the yolo-darkflow object detection on my own data set using my laptop running Windows 10. When I tested the model for real-time detection on my laptop with a webcam it worked fine with high fps. However, when trying to test it on my Raspberry Pi, which runs on Raspbian OS, it gives a very low fps rate of about 0.3, but when I only use the webcam without yolo it works fine with fast frames. Also, when I use the Tensorflow API for object detection with the webcam on the Pi, it works fine with high fps. Can someone suggest something, please? Is the reason related to the yolo models, opencv or Python? How can I make the fps rate higher and faster for object detection with the webcam?
The Raspberry Pi does not have a GPU, and because of that it is very hard for it to do image recognition at a high fps.
0
false
2
5,966
2019-02-26 10:41:52.910
Python3: FileNotFoundError: [Errno 2] No such file or directory: 'train.txt', even with complete path
I'm currently working with Python3 on Jupyter Notebook. I'm trying to load a text file which is in the exact same directory as my python notebook, but it still doesn't find it. My line of code is: text_data = prepare_text('train.txt') and the error is a typical FileNotFoundError: [Errno 2] No such file or directory: 'train.txt' I've already tried entering the full path to my text file, but then I still get the same error. Does anyone know how to solve this?
I found the answer. Windows had put a second .txt at the end of the file name, so I should have used train.txt.txt instead.
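A quick way to spot this kind of problem is to list what Python actually sees in the directory, e.g.:

```python
import os

# Print the real file names; a hidden second extension like
# 'train.txt.txt' shows up here even if Explorer hides it.
print(os.listdir('.'))
```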
0.201295
false
1
5,967
2019-02-26 16:48:18.673
Write own stemmer for stemming
I have a dataset of 27 files, each containing opcodes. I want to use stemming to map all versions of similar opcodes into the same opcode. For example: push, pusha, pushb, etc would all be mapped to push. My dictionary contains 27 keys and each key has a list of opcodes as a value. Since the values contain opcodes and not normal english words, I cannot use the regular stemmer module. I need to write my own stemmer code. Also I cannot hard-code a custom dictionary that maps different versions of the opcodes to the root opcode because I have a huge dataset. I think regex expression would be a good idea but I do not know how to use it. Can anyone help me with this or any other idea to write my own stemmer code?
I would recommend looking at the Levenshtein distance metric - it measures the distance between two words in terms of character insertions, deletions, and replacements (so push and pusha would be distance 1 apart if you do the ~most normal thing of weighting insertions = deletions = replacements = 1 each). Based on the example you wrote, you could try just setting up categories that are all distance 1 from each other. However, I don't know if all of your equivalent opcodes will be so similar - if they're not, Levenshtein might not work.
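A minimal sketch of that idea, assuming opcodes within distance 1 of a chosen root should map to that root (the root and opcode lists here are made up):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, two rows at a time.
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

roots = ['push', 'pop', 'mov']                      # hypothetical root opcodes
opcodes = ['push', 'pusha', 'pushb', 'popa', 'movzx']

def stem(op):
    # Map an opcode to the closest root if it is within distance 1,
    # otherwise keep it unchanged.
    best = min(roots, key=lambda r: levenshtein(op, r))
    return best if levenshtein(op, best) <= 1 else op

print({op: stem(op) for op in opcodes})
# {'push': 'push', 'pusha': 'push', 'pushb': 'push', 'popa': 'pop', 'movzx': 'movzx'}
```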
0
false
1
5,968
2019-02-26 18:30:41.820
Elementree Fromstring and iterparse in Python 3.x
I am able to parse from a file using this method: for event, elem in ET.iterparse(file_path, events=("start", "end")): But how can I do the same with the fromstring function? Instead of coming from a file, the xml content is stored in a variable now. But I still want to have the events as before.
From the documentation for the iterparse method: ...Parses an XML section into an element tree incrementally, and reports what's going on to the user. source is a filename or file object containing XML data... I've never used the etree python module, but "or file object" says to me that this method accepts an open file-like object as well as a file name. It's an easy thing to construct a file-like object around a string to pass as input to a method like this. Take a look at io.StringIO (or io.BytesIO for byte strings) in Python 3.
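For example, a minimal sketch:

```python
import io
import xml.etree.ElementTree as ET

xml_data = "<root><item>a</item><item>b</item></root>"

# Wrap the string in a file-like object and feed it to iterparse,
# keeping the same start/end events as when parsing from a file.
for event, elem in ET.iterparse(io.StringIO(xml_data), events=("start", "end")):
    print(event, elem.tag)
```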
0
false
1
5,969
2019-02-26 21:57:40.743
Why should I use tf.data?
I'm learning tensorflow, and the tf.data API confuses me. It is apparently better when dealing with large datasets, but when using the dataset, it has to be converted back into a tensor. But why not just use a tensor in the first place? Why and when should we use tf.data? Why isn't it possible to have tf.data return the entire dataset, instead of processing it through a for loop? When just minimizing a function of the dataset (using something like tf.losses.mean_squared_error), I usually input the data through a tensor or a numpy array, and I don't know how to input data through a for loop. How would I do this?
The tf.data module has specific tools which help in building an input pipeline for your ML model. An input pipeline takes in the raw data, processes it and then feeds it to the model. When should I use the tf.data module? The tf.data module is useful when you have a large dataset in the form of a file such as .csv or .tfrecord. tf.data.Dataset can perform shuffling and batching of samples efficiently. It is useful for large datasets as well as small datasets. It can also combine train and test datasets. How can I create batches and iterate through them for training? I think you can efficiently do this with NumPy and the np.reshape method. Pandas can read data files for you. Then, you just need a for ... in ... loop to get each batch and pass it to your model. How can I feed NumPy data to a TensorFlow model? There are two options: tf.placeholder() or tf.data.Dataset. tf.data.Dataset is a much easier implementation, and I recommend using it. It also has a good set of methods. tf.placeholder creates a placeholder tensor which feeds the data to a TensorFlow graph; this process consumes more time feeding in the data.
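A minimal sketch of feeding NumPy arrays through tf.data.Dataset, written in TF 1.x style (matching the tf.losses.mean_squared_error mentioned in the question); the array shapes are made up:

```python
import numpy as np
import tensorflow as tf

features = np.random.rand(1000, 10).astype(np.float32)
labels = np.random.rand(1000, 1).astype(np.float32)

# Build a dataset from in-memory NumPy arrays, then shuffle and batch it.
dataset = (tf.data.Dataset.from_tensor_slices((features, labels))
           .shuffle(1000)
           .batch(32))

# TF 1.x style: an iterator yields one batch of tensors per session run,
# so no explicit Python for-loop over the data is needed in the graph.
iterator = dataset.make_one_shot_iterator()
x_batch, y_batch = iterator.get_next()

with tf.Session() as sess:
    print(sess.run(x_batch).shape)  # (32, 10)
```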
1.2
true
1
5,970
2019-02-27 00:06:57.810
Pipenv: Multiple Environments
Right now I'm using virtualenv and just switching over to Pipenv. Today in virtualenv I load in different environment variables and settings depending on whether I'm in development, production, or testing by setting DJANGO_SETTINGS_MODULE to myproject.settings.development, myproject.settings.production, and myproject.settings.testing. I'm aware that I can set an .env file, but how can I have multiple versions of that .env file?
You should create different .env files with different prefixes depending on the environment, such as production.env or testing.env. With pipenv, you can use the PIPENV_DONT_LOAD_ENV=1 environment variable to prevent pipenv shell from automatically exporting the .env file and combine this with export $(cat .env | xargs). export $(cat production.env | xargs) && PIPENV_DONT_LOAD_ENV=1 pipenv shell would configure your environment variables for production and then start a shell in the virtual environment.
1.2
true
1
5,971
2019-02-27 05:59:35.137
How to architect a GUI application with UART comms which stays responsive to the user
I'm writing an application in PyQt5 which will be used for calibration and test of a product. The important details: The product under test uses an old-school UART/serial communication link at 9600 baud. ...and the test / calibration operation involves communicating with another device which has a UART/serial communication link at 300 baud(!) In both cases, the communication protocol is ASCII text with messages terminated by a newline \r\n. During the test/calibration cycle the GUI needs to communicate with the devices, take readings, and log those readings to various boxes in the screen. The trouble is, with the slow UART communications (and the long time-outs if there is a comms drop-out) how do I keep the GUI responsive? The Minimally Acceptable solution (already working) is to create a GUI which communicates over the serial port, but the user interface becomes decidedly sluggish and herky-jerky while the GUI is waiting for calls to serial.read() to either complete or time out. The Desired solution is a GUI which has a nice smooth responsive feel to it, even while it is transmitting and receiving serial data. The Stretch Goal solution is a GUI which will log every single character of the serial communications to a text display used for debugging, while still providing some nice "message-level" abstraction for the actual logic of the application. My present "minimally acceptable" implementation uses a state machine where I run a series of short functions, typically including the serial.write() and serial.read() commands, with pauses to allow the GUI to update. But the state machine makes the GUI logic somewhat tricky to follow; the code would be much easier to understand if the program flow for communicating to the device was written in a simple linear fashion. I'm really hesitant to sprinkle a bunch of processEvents() calls throughout the code. And even those don't help when waiting for serial.read(). So the correct solution probably involves threading, signals, and slots, but I'm guessing that "threading" has the same two Golden Rules as "optimization": Rule 1: Don't do it. Rule 2 (experts only): Don't do it yet. Are there any existing architectures or design patterns to use as a starting point for this type of application?
Okay, for the past few days I've been digging, and figured out how to do this. Since there haven't been any responses, and I do think this question could apply to others, I'll go ahead and post my solution. Briefly: Yes, the best way to solve this is with PyQt threads, using signals and slots to communicate between the threads. For basic function (the "Desired" solution above) just follow the existing basic design pattern for PyQt multithreaded GUI applications: A GUI thread whose only job is to display data and relay user inputs / commands, and, A worker thread that does everything else (in this case, including the serial comms). One stumbling point along the way: I'd have loved to write the worker thread as one linear flow of code, but unfortunately that's not possible because the worker thread needs to get info from the GUI at times. The only way to get data back and forth between the two threads is via signals and slots, and the slots (i.e. the receiving end) must be a callable, so there was no way for me to implement some type of getdata() operation in the middle of a function. Instead, the worker thread had to be constructed as a bunch of individual functions, each one of which gets kicked off after it receives the appropriate signal from the GUI. Getting the serial data monitoring function (the "Stretch Goal" above) was actually pretty easy -- just have the low-level serial transmit and receive routines already in my code emit signals for that data, and the GUI thread receives and logs those signals. All in all it ended up being a pretty straightforward application of existing principles, but I'm writing it down so hopefully the next guy doesn't have to go down so many blind alleys like I did along the way.
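As a rough sketch of that pattern (the slow serial exchange is replaced by a placeholder sleep, and all names are illustrative, not the poster's actual code):

```python
import time
from PyQt5.QtCore import QObject, QThread, pyqtSignal, pyqtSlot

class SerialWorker(QObject):
    # Signals carry results back to the GUI thread; a second signal
    # could stream raw characters for the debug log.
    reading_ready = pyqtSignal(str)

    @pyqtSlot()
    def take_reading(self):
        # Placeholder for a slow serial.write()/serial.read() exchange.
        time.sleep(2)
        self.reading_ready.emit("42.0")

# In the GUI setup code, roughly:
# thread = QThread()
# worker = SerialWorker()
# worker.moveToThread(thread)             # worker's slots run in the thread
# worker.reading_ready.connect(self.on_reading)   # slot runs in GUI thread
# self.start_button.clicked.connect(worker.take_reading)
# thread.start()
```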
0
false
1
5,972
2019-02-27 13:33:17.083
How to register users of different kinds using different tables in Django?
I'm new to Django. I want to register users using different tables for different users like students, teaching staff and non-teaching staff - 3 tables. How can I do it instead of using the default auth_users table for registration?
cf Sam's answer for the proper solutions from a technical POV. From a design POV, "student", "teaching staff" etc. are not entities but different roles a user can have. One curious thing with living persons and real-life things in general is that they tend to evolve over time without any respect for our well-defined specifications and classifications - for example it's not uncommon for a student to also have teaching duties at some point, for a teacher to also be studying some other topic, or for a teacher to stop teaching and switch to more administrative tasks. If you design your model with distinct entities instead of one single entity and distinct roles, it won't properly accommodate those kinds of situations (and no, having one account as student and one as teacher is not a proper solution either). That's why the default user model in Django is based on one single entity (the User model) and features allowing role definitions (groups and permissions) in such a way that one user can have many roles, whether at the same time or in succession.
0
false
2
5,973
2019-02-27 13:33:17.083
How to register users of different kinds using different tables in Django?
I'm new to Django. I want to register users using different tables for different users like students, teaching staff and non-teaching staff - 3 tables. How can I do it instead of using the default auth_users table for registration?
In Django authentication, there is a Group model available which has a many-to-many relationship with the User model. You can add students, teaching staff and non-teaching staff to the Group model, separating users by their type.
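A minimal sketch of that approach (the username and group name are placeholders):

```python
from django.contrib.auth.models import Group, User

# Create (or fetch) one group per user type.
students, _ = Group.objects.get_or_create(name="students")

user = User.objects.create_user("alice", password="secret")  # hypothetical user
user.groups.add(students)

# Later, check the role instead of querying a separate table.
print(user.groups.filter(name="students").exists())  # True
```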
0
false
2
5,973
2019-02-28 01:29:15.947
How do I know if a file has finished copying?
I've been given a simple file-conversion task: whenever an MP4 file is in a certain directory, I do some magic to it and move it to a different directory. Nice and straightforward, and easy to automate. However, if a user is copying some huge file into the directory, I worry that my script might catch it mid-copy, and only have half of the file to work with. Is there a way, using Python 3 on Windows, to check whether a file is done copying (in other words, no process is currently writing to it)? EDIT: To clarify, I have no idea how the files are getting there: my script just needs to watch a shared network folder and process files that are put there. They might be copied from a local folder I don't have access to, or placed through SCP, or downloaded from the web; all I know is the destination.
You could try comparing the size of the file over time: watch for new files in the folder, capture the name of each new file, and check whether its size is still increasing after some interval x. If you have a script, you could show the code...
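A minimal sketch of that size-polling idea; note this is only a heuristic (a writer that pauses longer than the interval will fool it), and the interval is arbitrary:

```python
import os
import time

def wait_until_stable(path, interval=2.0):
    """Block until the file's size stops changing between two checks."""
    last_size = -1
    while True:
        size = os.path.getsize(path)
        if size == last_size:
            return
        last_size = size
        time.sleep(interval)
```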
0
false
1
5,974
2019-02-28 03:04:27.143
Viewing Graph from saved .pbtxt file on Tensorboard
I just have a graph.pbtxt file. I want to view the graph in tensorboard. But I am not aware of how to do that. Do I have to write any python script or can I do it from the terminal itself? Kindly help me to know the steps involved.
Open TensorBoard and use the "Upload" button on the left to upload the pbtxt file; that will directly open the graph in TensorBoard.
0.986614
false
1
5,975
2019-02-28 16:24:27.333
Intersection of interp1d objects
I have 2 cumulative distributions that I want to find the intersection of. To get an underlying function, I used the scipy interp1d function. What I'm trying to figure out now is how to calculate their intersection. Not sure how I can do it. I tried fsolve, but I can't find how to restrict the range in which to search for a solution (the domain is limited).
Use scipy.optimize.brentq for bracketed root-finding: brentq(lambda x: interp1d(xx, yy)(x) - interp1d(xxx, yyy)(x), -1, 1)
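Spelled out with imports, and assuming the two curves actually cross inside the bracket (brentq requires a sign change between the endpoints); the sampled distributions here are made up:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

# Two hypothetical cumulative distributions sampled on [-1, 1].
xx = np.linspace(-1, 1, 50)
yy = 0.5 * (xx + 1)
xxx = np.linspace(-1, 1, 50)
yyy = 0.5 * (xxx + 1) ** 2

f = interp1d(xx, yy)
g = interp1d(xxx, yyy)

# brentq finds a root of f - g inside the bracket, which also keeps
# the search within the interpolators' shared domain.
x_cross = brentq(lambda x: f(x) - g(x), -0.5, 1)
print(x_cross, f(x_cross))  # crossing near x = 0 for these curves
```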
0
false
1
5,976
2019-02-28 18:54:51.120
How to make depth of nii images equal?
I have some nii images, each with the same height and width but different depth. I need to make the depth of each image equal; how can I do that? I also didn't find any Python code that can help me.
Once you have defined the depth you want for all volumes, let it be D, you can instantiate an image (called a volume when D > 1) of dimensions W x H x D for every volume you have. Then you can fill every such volume, pixel by pixel, by mapping the pixel position onto the original volume and retrieving the value of the pixel by interpolating the values of neighboring pixels. For example, a pixel (i_x, i_y, i_z) in the new volume will be mapped to a point (i_x, i_y, i_z') of the old volume. One of the simplest interpolation methods is linear interpolation: the value of (i_x, i_y, i_z) is a weighted average of the values at (i_x, i_y, floor(i_z')) and (i_x, i_y, floor(i_z') + 1).
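One way to implement this is to resample each volume along the depth axis with scipy; using nibabel for loading is an assumption about your file format, and the target depth is a placeholder:

```python
import nibabel as nib
from scipy.ndimage import zoom

TARGET_DEPTH = 64  # hypothetical common depth D

def resample_depth(path):
    volume = nib.load(path).get_fdata()        # shape (W, H, depth)
    factor = TARGET_DEPTH / volume.shape[2]
    # order=1 performs linear interpolation between neighboring slices,
    # matching the scheme described above.
    return zoom(volume, (1.0, 1.0, factor), order=1)
```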
0
false
1
5,977
2019-02-28 21:02:20.790
Tensorflow data pipeline: Slow with caching to disk - how to improve evaluation performance?
I've built a data pipeline. Pseudo code is as follows:
1. dataset
2. dataset = augment(dataset)
3. dataset = dataset.batch(35).prefetch(1)
4. dataset = set_from_generator(to_feed_dict(dataset)) # expensive op
5. dataset = Cache('/tmp', dataset)
6. dataset = dataset.unbatch()
7. dataset = dataset.shuffle(64).batch(256).prefetch(1)
8. to_feed_dict(dataset)
Actions 1 to 5 are required to generate the pretrained model outputs. I cache them as they do not change throughout epochs (pretrained model weights are not updated). Actions 5 to 8 prepare the dataset for training. Different batch sizes have to be used, as the pretrained model inputs are of a much bigger dimensionality than the outputs. The first epoch is slow, as it has to evaluate the pretrained model on every input item to generate templates and save them to the disk. Later epochs are faster, yet they're still quite slow - I suspect the bottleneck is reading the disk cache. What could be improved in this data pipeline to reduce the issue? Thank you!
prefetch(1) means that only one element will be prefetched; I think you may want it as big as the batch size or larger. After the first cache you may try putting a second one, but without providing a path, so it would cache some data in memory. Maybe your HDD is just slow? ;) Another idea is that you could manually write a compressed TFRecord after steps 1-4 and then read it with another dataset. A compressed file has lower I/O but causes higher CPU usage.
0
false
1
5,978
2019-03-01 11:32:59.497
Get data from an .asp file
My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The url is a .asp file. She has to put the data in an Excel sheet. The entries are student names, numbers, ID card number, telephone, etc. There are thousands of entries. HR students alone have 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste. I can handle Python openpyxl reasonably and I have heard of web scraping, which I believe Python can do. I don't know what .asp is. Could you please give me some tips, pointers, about how to get the data with Python? Can I automate this task? Is this a case for MySQL? (About which I know nothing.)
This is a really broad question and not really in the style of Stack Overflow. To give you some pointers anyway. In the end .asp files, as far as I know, behave like normal websites. Normal websites are interpreted in the browser like HTML, CSS etc. This can be parsed with Python. There are two approaches to this that I have used in the past that work. One is to use a library like requests to get the HTML of a page and then read it using the BeautifulSoup library. This gets more complex if you need to visit authenticated pages. The other option is to use Selenium for python. This module is more a tool to automate browsing itself. You can use this to automate visiting the website and entering login credentials and then read content on the page. There are probably more options which is why this question is too broad. Good luck with your project though! EDIT: You do not need MySql for this. Especially not if the required output is an Excel file, which I would generate as a CSV instead because standard Python works better with CSV files than Excel.
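A rough sketch of the first approach, assuming a plain form-based login; every URL, form field, and selector here is hypothetical and must be adapted to the real site:

```python
import csv
import requests
from bs4 import BeautifulSoup

LOGIN_URL = "https://example.com/login.asp"            # hypothetical
DATA_URL = "https://example.com/students.asp?page={}"  # hypothetical

with requests.Session() as session:
    # Log in once; the session keeps the cookies for later requests.
    session.post(LOGIN_URL, data={"user": "name", "password": "secret"})

    with open("students.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for page in range(1, 71):  # e.g. 70 pages of HR students
            soup = BeautifulSoup(session.get(DATA_URL.format(page)).text,
                                 "html.parser")
            for row in soup.select("table tr"):
                cells = [td.get_text(strip=True) for td in row.find_all("td")]
                if cells:
                    writer.writerow(cells)
```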
0.201295
false
2
5,979
2019-03-01 11:32:59.497
Get data from an .asp file
My girlfriend has been given the task of getting all the data from a webpage. The web page belongs to an adult education centre. To get to the webpage, you must first log in. The url is a .asp file. She has to put the data in an Excel sheet. The entries are student names, numbers, ID card number, telephone, etc. There are thousands of entries. HR students alone have 70 pages of entries. This all shows up on the webpage as a table. It is possible to copy and paste. I can handle Python openpyxl reasonably and I have heard of web scraping, which I believe Python can do. I don't know what .asp is. Could you please give me some tips, pointers, about how to get the data with Python? Can I automate this task? Is this a case for MySQL? (About which I know nothing.)
Try using the tool called Octoparse. Disclaimer: I've never used it myself, but I came close to using it. So, from my knowledge of its features, I think it would be useful for your needs.
0.201295
false
2
5,979
2019-03-01 22:45:49.617
Pygame/Python/Terminal/Mac related
I'm a beginner, I have really hit a brick wall, and would greatly appreciate any advice someone more advanced can offer. I have been having a number of extremely frustrating issues the past few days, which I have been round and round Google trying to solve, and have tried all sorts of things to no avail. Problem 1) I can't import pygame in IDLE with the error: ModuleNotFoundError: No module named 'pygame' - even though it is definitely installed, as in the terminal, if I ask pip3 to install pygame it says: Requirement already satisfied: pygame in /usr/local/lib/python3.7/site-packages (1.9.4) I think there may be a problem with several conflicting versions of Python on my computer, as when I type sys.path in IDLE (which by the way displays Python 3.7.2) the following are listed: '/Users/myname/Documents', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload', '/Users/myname/Library/Python/3.7/lib/python/site-packages', '/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages' So am I right in thinking pygame is in the python3.7/site-packages version, and this is why IDLE won't import it? I don't know, I'm just trying to make sense of this. I have absolutely no clue how to solve this, "re-set the path" or whatever. I don't even know how to find all of these versions of Python, as only one appears in my applications folder; the rest are elsewhere? Problem 2) Apparently there should be a Python 2.7 system version installed on every Mac system which is vital to the running of Python regardless of the developing environment you use. Yet all of my versions of Python seem to be in the library/downloaded versions. Does this mean my system version of Python is gone? I have put the computer in recovery mode today and done a reinstall of the macOS Mojave system, so shouldn't any possibly lost version of Python 2.7 be back on the system now? Problem 3) When I go to the terminal, frequently every command I type is 'not found'. I have sometimes found a temporary solution is typing: export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin" but the problems always return! As I say, I also did a system reinstall today but that has helped none! Can anybody please help me with these queries? I am really at the end of my tether and quite lost; forgive my programming ignorance, please. Many thanks.
You should actually add the export PATH="/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin" to your .bash_profile (if you are using bash). Do this by opening your terminal, verifying that it says "bash" at the top. If it doesn't, you may have a .zprofile instead. Type ls -al and it will list all the invisible files. If you have .bash_profile listed, use that one. If you have .zprofile, use that. Type nano .bash_profile to open and edit the profile and add the command to the end of it. This will permanently add the path to your profile after you restart the terminal. Use ^X to exit nano and type Y to save your changes. Then you can check that it works when you try to run the program from IDLE.
0
false
1
5,980
2019-03-03 16:50:01.227
Force screen session to use specific version of python
I am using a screen on my server. When I ask which python inside the screen I see it is using the default /opt/anaconda2/bin/python version which is on my server, but outside the screen when I ask which python I get ~/anaconda2/bin/python. I want to use the same python inside the screen but I don't know how I can set it. Both paths are available in $PATH.
You could do either one of the following: Use a virtual environment (install virtualenv). You can specify the version of Python you want to use when creating the virtual environment with -p /opt/anaconda2/bin/python. Use an alias: alias python=/opt/anaconda2/bin/python.
0.386912
false
1
5,981
2019-03-04 17:51:31.130
How can I remove an object in Python?
I'm trying to create a chess simulator. Consider this scenario: there is a black rook (an instance of the Rook class) in square 2B called rook1. There is a white rook in square 2C called rook2. When the player moves rook1 to square 2C, I should remove the rook2 object from memory completely. How can I do it? P.S. I've already tried del rook2, but I don't know why it doesn't work.
Trying to remove objects from memory is the wrong way to go. Python offers no option to do that manually, and it would be the wrong operation to perform anyway. You need to alter whatever data structure represents your chess board so that it represents a game state where there is a black rook at c2 and no piece at b2, rather than a game state where there is a black rook at b2 and a white rook at c2. In a reasonable Python beginner-project implementation of a chess board, this probably means assigning to cells in a list of lists. No objects need to be manually removed from memory to do this. Having rook1 and rook2 variables referring to your rooks is unnecessary and probably counterproductive.
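A minimal sketch of that board-centric approach (the coordinates are simplified to row/column indices):

```python
# An 8x8 board where each cell holds a piece object or None.
board = [[None] * 8 for _ in range(8)]

class Rook:
    def __init__(self, color):
        self.color = color

board[1][1] = Rook("black")  # roughly "b2"
board[1][2] = Rook("white")  # roughly "c2"

def move(src, dst):
    row_s, col_s = src
    row_d, col_d = dst
    # Overwriting the destination cell drops the last reference to the
    # captured piece, so Python reclaims its memory automatically.
    board[row_d][col_d] = board[row_s][col_s]
    board[row_s][col_s] = None

move((1, 1), (1, 2))  # the black rook captures the white rook
```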
0.999329
false
1
5,982
2019-03-04 22:00:24.150
Text classification beyond the keyword dependency and inferring the actual meaning
I am trying to develop a text classifier that will classify a piece of text as Private or Public. Take medical or health information as an example domain. A typical classifier that I can think of considers keywords as the main distinguisher, right? What about a scenario like the one below? What if both pieces of text contain similar keywords but carry a different meaning? The following piece of text reveals someone's private (health) situation (the patient has cancer): I've been to two clinics and my pcp. I've had an ultrasound only to be told it's a resolving cyst or a hematoma, but it's getting larger and starting to make my leg ache. The PCP said it can't be a cyst because it started out way too big and I swear I have NEVER injured my leg, not even a bump. I am now scared and afraid of cancer. I noticed a slightly uncomfortable sensation only when squatting down about 9 months ago. 3 months ago I went to squat down to put away laundry and it kinda hurt. The pain prompted me to examine my leg and that is when I noticed a lump at the bottom of my calf muscle and flexing only made it more noticeable. Eventually after four clinic visits, an ultrasound and one pcp the result seems to be positive and the mass is getting larger. [Private] (Correct Classification) The following piece of text is a comment from a doctor which is definitely not revealing his health situation. It introduces the weaknesses of a typical classifier model: Don’t be scared and do not assume anything bad as cancer. I have gone through several cases in my clinic and it seems familiar to me. As you mentioned it might be a cyst or a hematoma and it's getting larger, it must need some additional diagnosis such as biopsy. Having an ache in that area or the size of the lump does not really tells anything bad. You should visit specialized clinics few more times and go under some specific tests such as biopsy, CT scan, pcp and ultrasound before that lump become more larger. [Private] (Which is the Wrong Classification. It should be [Public]) The second paragraph was classified as private by all of my current classifiers, for obvious reasons. Similar keywords, valid word sequences and the presence of subjects seemed to make the classifier very confused. Both pieces of content even contain subjects like I, You (nouns, pronouns) etc. I thought about everything from Word2Vec to Doc2Vec, from inferring meaning to semantic embeddings, but can't think of a solution approach that best suits this problem. Any idea which way I should handle the classification problem? Thanks in advance. Progress so far: The data I have collected is from a public source where patients/victims usually post their own situation and doctors/well-wishers reply to those. My assumption while crawling was that posts belong to my private class and comments belong to my public class. Altogether I started with 5K+5K posts/comments and got around 60% with a naive bayes classifier without any major preprocessing. I will try a Neural Network soon. But before feeding into any classifier, I just want to know how I can preprocess better to assign reasonable weights to either class for better distinction.
(1) Bayes is indeed a weak classifier - I'd try SVM. If you see improvement, then further improvement can be achieved using a Neural Network (and perhaps Deep Learning). (2) Feature engineering - use TF-IDF, and try other things (many people suggest Word2Vec, although I personally tried it and it did not improve). Also you can remove stop words. One thing to consider, because you give two anecdotes, is to measure objectively the level of agreement between human beings on the task. It is sometimes overlooked that two people given the same text can disagree on labels (some might say that a specific document is private although it is public). Just a point to notice - because if e.g. the level of agreement is 65%, then it will be very difficult to build an algorithm that is more accurate.
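A minimal sketch of the TF-IDF + SVM suggestion with scikit-learn; the two training examples are placeholders standing in for the 5K+5K posts/comments:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["I am scared it might be cancer ...",        # a patient post
         "Do not assume anything bad as cancer ..."]  # a doctor reply
labels = ["private", "public"]

# TF-IDF with English stop words removed, feeding a linear SVM.
model = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["You should visit the clinic for a biopsy."]))
```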
-0.265586
false
1
5,983
2019-03-05 03:08:47.917
How do you profile a Python script from Windows command line using PyPy and vmprof?
I have a Python script that I want to profile using vmprof to figure out what parts of the code are slow. Since PyPy is generally faster, I also want to profile the script while it is using the PyPy JIT. If the script is named myscript.py, how do you structure the command on the command line to do this? I have already installed vmprof using pip install vmprof
I would be surprised if it works, but the command is pypy -m vmprof myscript.py <your program args>. I would expect it to crash, saying vmprof is not supported on Windows.
0
false
1
5,984
2019-03-06 00:43:24.310
How to update python 3.6 to 3.7 using Mac terminal
OK, I was afraid to use the terminal, so I installed the python-3.7.2-macosx10.9 package downloaded from python.org. I ran the certificate and shell profile scripts, and everything seems fine. Now "which python3" shows the path has changed from 3.6 to the new 3.7.2. So everything seems fine, correct? My question (of 2) is: what's going on with the old python3.6 folder still in the applications folder? Can you just delete it safely? Why, when you install a new version, does it not at least ask you if you want to update or install and keep both versions? Second question: how would you do this from the terminal? I see the first step is to sudo to root. I've forgotten the rest. But from the terminal, would this simply add the new version and leave the older one, like the package installer? It's pretty simple to use the package installer and then delete a folder. So, thanks in advance. I'm new to Python and don't have much confidence using the terminal and all the powerful shell commands. And yeah, I see all the Brew enthusiasts. I DON'T want to use Brew for the moment. The Python snake's nest of pathways is a little confusing for the moment. I don't want to get lost with a zillion pathways from Brew because it's confusing for the moment. I love Brew, leave me alone.
Each version of the Python installation is independent of the others. So it's safe to delete the version you don't want, but be cautious with this because it can lead to broken dependencies :-). You can run any version by invoking the specific version, e.g. $ python3.6 or $ python3.7. The best approach is to use virtual environments for your projects to enhance consistency; see pipenv.
0
false
1
5,985
2019-03-07 02:42:18.347
How do I figure out what dependencies to install when I copy my Django app from one system to another?
I'm using Django and Python 3.7. I want to write a script to help me easily migrate my application from my local machien (a Mac High Sierra) to a CentOS Linux instance. I'm using a virtual environment in both places. There are many things that need to be done here, but to keep the question specific, how do I determine on my remote machine (where I'm deploying my project to), what dependencies are lacking? I'm using rsync to copy the files (minus the virtual environment)
On the source system execute pip freeze > requirements.txt, then copy the requirements.txt to the target system, and on the target system install all the dependencies with pip install -r requirements.txt. Of course you will need to activate the virtual environments on both systems before executing the pip commands. If you are using a source code management system like git, it is a good idea to keep the requirements.txt up to date in your source code repository.
1.2
true
1
5,986
2019-03-07 10:03:42.277
Do the Angular server and Flask server both have to be running at the same time?
I'm new to both Angular and the Flask framework, so please be patient with me. I'm trying to build a web app with Flask as a backend server and Angular for the frontend (I haven't started it yet), and while gathering info and looking at tutorials and some documentation (a little bit) I'm wondering: do the Angular server and the Flask server both need to be running at the same time, or will just Flask be enough? Knowing that I want to send data from the server to the frontend to display, and collect data from users and send it to the backend. I noticed some guys building the Angular app and using the dist files, but I don't exactly know how that works. So can you guys suggest what I should do or how to proceed with this? Thank you ^^
Angular does not need a server. It's a client-side framework so it can be served by any server like Flask. Probably in most tutorials, the backend is served by nodejs, not Flask.
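A minimal sketch of Flask serving a built Angular app; the dist path is whatever ng build produces for your project, and the API route is made up:

```python
from flask import Flask, jsonify, send_from_directory

app = Flask(__name__, static_folder="dist", static_url_path="")

@app.route("/")
def index():
    # Serve the compiled Angular entry point.
    return send_from_directory(app.static_folder, "index.html")

@app.route("/api/data")
def data():
    # The Angular frontend fetches JSON from endpoints like this one.
    return jsonify({"message": "hello from Flask"})

if __name__ == "__main__":
    app.run()
```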
1.2
true
1
5,987
2019-03-08 19:25:09.250
Change color of single word in Tk label widget
I would like to change the font color of a single word in a Tkinter label widget. I understand that something similar to what I would like done can be achieved with a Text widget, for example making the word "YELLOW" show in yellow: self.text.tag_config("tag_yel", fg=clr_yellow) self.text.highlight_pattern("YELLOW", "tag_yel") But my text is static, and all I want is to change the word "YELLOW" to show in yellow font and "RED" in red font, and I cannot seem to figure out how to change text color without changing it all with label.config(fg=clr). Any help would be appreciated.
You cannot do what you want. A label supports only a single foreground color and a single background color. The solution is to use a text or canvas widget., or to use two separate labels.
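For reference, a minimal sketch with a read-only Text widget standing in for the label:

```python
import tkinter as tk

root = tk.Tk()
text = tk.Text(root, height=1, borderwidth=0)
text.insert("1.0", "Status: ")
text.insert("end", "YELLOW", "tag_yel")
text.insert("end", " and ")
text.insert("end", "RED", "tag_red")

# Each tag carries its own foreground color.
text.tag_configure("tag_yel", foreground="yellow")
text.tag_configure("tag_red", foreground="red")

# Disable editing so the widget behaves like a static label.
text.configure(state="disabled")
text.pack()
root.mainloop()
```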
1.2
true
1
5,988
2019-03-11 12:10:11.213
Running python directly in terminal
Is it possible to execute short python expressions in one line in terminal, without passing a file? e.g. (borrowing from how I would write an awk expression) python 'print("hello world")'
For completeness, I found you can also feed a here-string to python. python <<< 'print("hello world")'
0
false
2
5,989
2019-03-11 12:10:11.213
Running python directly in terminal
Is it possible to execute short python expressions in one line in terminal, without passing a file? e.g. (borrowing from how I would write an awk expression) python 'print("hello world")'
python3 -c "print('Hello')" Use the -c flag as above.
1.2
true
2
5,989
2019-03-11 13:21:12.590
How to save and load my neural network model after training along with weights in python?
I have trained a single layer neural network model in python (a simple model without keras and tensorflow). How can I save it after training along with its weights in python, and how can I load it later?
So you write it yourself. You need a few simple steps: In your code for the neural network, store the weights in a variable. This can be done simply by using self.weights. The weights are numpy ndarrays. For example, if the weights are between a layer with 10 neurons and a layer with 100 neurons, they form a 10 x 100 (or 100 x 10) ndarray. Use numpy.save to save the ndarray. For the next use of your network, use numpy.load to load the weights. In the first initialization of your network, use the weights you've loaded. Don't forget: if your network is trained, the weights should be frozen. That can be done by zeroing the learning rate.
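For example, a minimal sketch for a single weight matrix (the shape and file name are arbitrary):

```python
import numpy as np

# After training: persist the 10 x 100 weight matrix to disk.
weights = np.random.rand(10, 100)  # stand-in for self.weights
np.save("weights.npy", weights)

# Later: restore the array and plug it back into the network,
# keeping the learning rate at zero so the weights stay frozen.
restored = np.load("weights.npy")
assert restored.shape == (10, 100)
```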
0.135221
false
1
5,990
2019-03-12 12:23:21.577
tf.gradient acting like tfp.math.diag_jacobian
I try to calculate noise for input data using the gradient of the loss function with respect to the input data: my_grad = tf.gradients(loss, input) loss is an array of size (n x 1) where n is the number of datasets; input is an array of (n x m) where m is the size of a single dataset. I need my_grad to be of size (n x m) - so for each dataset the gradient is calculated. But by definition the gradients where i != j are zero - yet tf.gradients allocates a huge amount of memory and runs for pretty much forever... A version which calculates the gradients only where i = j would be great - any idea how to get there?
I suppose I have found a solution: my_grad = tf.gradients(tf.reduce_sum(loss), input) ensures that the cross dependencies i != j are ignored - that works really nicely and fast.
0
false
1
5,991
2019-03-12 14:50:25.703
Lost my python.exe in Pycharm with Anaconda3
Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in PyCharm. It was located at C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere! Yet all the packages are here (in the site-packages folder), and only C:\users\my_name\Anaconda3\pythonw.exe is available. With the latter, however, some packages I installed on top of those available in Anaconda3 won't be recognized. Therefore, how do I get back the python.exe file?
My python.exe was missing today in my existing environment in Anaconda, so I cloned my environment with Anaconda to recreate python.exe and used it again in Spyder.
0
false
3
5,992
2019-03-12 14:50:25.703
Lost my python.exe in Pycharm with Anaconda3
Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in PyCharm. It was located at C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere! Yet all the packages are here (in the site-packages folder), and only C:\users\my_name\Anaconda3\pythonw.exe is available. With the latter, however, some packages I installed on top of those available in Anaconda3 won't be recognized. Therefore, how do I get back the python.exe file?
The answer repeats the comment to the question. I had the same issue once after an Anaconda update - python.exe was missing. It was Anaconda 3 installed to the Program Files folder by MS Visual Studio (Python 3.6 on Windows 10 x64). To solve the problem I manually copied the python.exe file from the freshest Python package available (the pkgs folder, then a folder like python-3.6.8-h9f7ef89_7).
1.2
true
3
5,992
2019-03-12 14:50:25.703
Lost my python.exe in Pycharm with Anaconda3
Everything was working perfectly until today, when for some reason my python.exe file disappeared from the Project Interpreter in PyCharm. It was located at C:\users\my_name\Anaconda3\python.exe, and for some reason I can't find it anywhere! Yet all the packages are here (in the site-packages folder), and only C:\users\my_name\Anaconda3\pythonw.exe is available. With the latter, however, some packages I installed on top of those available in Anaconda3 won't be recognized. Therefore, how do I get back the python.exe file?
I just had the same issue and found out that Avast removed it because it thought it was a threat. I found it in Avast -> Protection -> Virus Chest. And from there, you have the option to restore it.
0.386912
false
3
5,992
2019-03-12 18:13:12.880
Trouble with appending scores in Python
The code is supposed to give 3 questions with 2 attempts each. If the answer is correct on the first try, it's 3 points. A correct second try gives 1 point. If the second try is incorrect, the game will end. However, the scores are not adding up to create a final score after the 3 rounds. How do I make it so that they do?
First, move import random to the top of the script, because you're importing it every time in the loop. Also, the score is calculated only on the last pass of the program, since you empty scoreList every time.
0.673066
false
1
5,993
2019-03-13 05:05:14.420
Accessing Luigi visualizer on AWS
I’ve been using the Luigi visualizer for pipelining my python code. Now I’ve started using an aws instance, and want to access the visualizer from my own machine. Any ideas on how I could do that?
We had the very same problem today on GCP, and solved it with the following steps: setting firewall rules for incoming TCP connections on the port used by the service (which by default is 8082); installing an apache2 server on the instance with a site.conf configuration that resolves incoming requests on ip-of-instance:8082. That's it. Hope this can help.
0.201295
false
1
5,994
2019-03-13 09:24:24.310
Async, multithreaded scraping in Python with limited threads
We have to refactor our scraping algorithm. To speed it up we came to the conclusion to multi-thread the processes (and limit them to a max of 3). Generally speaking, the scraping consists of the following aspects: scraping (async request, takes approx 2 sec); image processing (async per image, approx 500ms per image); changing the source item in the DB (async request, approx 2 sec). What I am aiming to do is to create a batch of scraping requests and, while looping through them, create a stack of consequent async operations: process images, and as soon as images are processed, change the source item. In other words: scraping goes on, but image processing and changing source items must run in separate, limited async threads. The only thing I don't know is how to stack the batch and limit the threads. Has anyone come across the same task, and what approach did you use?
What you're looking for is the consumer-producer pattern. Just create 3 different queues, and when you process an item from one of them, queue new work in another. Then you can run 3 different threads, each of them processing one queue.
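A rough sketch of that pattern for the three stages (scrape, image processing, DB update), with the real work replaced by placeholders:

```python
import queue
import threading

scrape_q, image_q, db_q = queue.Queue(), queue.Queue(), queue.Queue()

def scraper():
    while True:
        url = scrape_q.get()
        page = f"scraped {url}"        # placeholder for the scraping request
        image_q.put(page)              # hand the result to the next stage
        scrape_q.task_done()

def image_worker():
    while True:
        page = image_q.get()
        db_q.put(f"processed {page}")  # placeholder for image processing
        image_q.task_done()

def db_worker():
    while True:
        item = db_q.get()
        print(f"saved {item}")         # placeholder for the DB update
        db_q.task_done()

for target in (scraper, image_worker, db_worker):
    threading.Thread(target=target, daemon=True).start()

for url in ("a", "b", "c"):
    scrape_q.put(url)

# Wait for each stage to drain before exiting.
scrape_q.join(); image_q.join(); db_q.join()
```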
1.2
true
1
5,995
2019-03-13 20:16:42.690
Pymongo inserts _id in original array after insert_many. How to avoid insertion of _id?
Pymongo inserts _id into the original array after insert_many. How can I avoid the insertion of _id? And why is the original array updated with _id? Please explain with an example if anybody knows. Thanks in advance.
The pymongo driver explicitly inserts an _id of type ObjectId into the original array, and hence the original array gets updated before inserting into mongo. This is the expected behaviour of pymongo for an insert_many query, as per my previous experience. Hope this answers your question.
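If you need the original list untouched, insert copies instead; the database and collection names here are hypothetical:

```python
from pymongo import MongoClient

docs = [{"name": "a"}, {"name": "b"}]

collection = MongoClient().test_db.test_coll  # hypothetical db/collection
# insert_many mutates the dicts it receives, so pass shallow copies
# to keep the original list free of the generated _id fields.
collection.insert_many([dict(d) for d in docs])

print(docs)  # still has no '_id' keys
```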
1.2
true
1
5,996
2019-03-13 21:29:05.987
How can I prevent the user from closing my cmd window in a Python script on Windows
Is there any way to prevent the user from closing the cmd window of a python script on Windows, or maybe just disable the (X) close button? I have looked for answers already but I couldn't find anything that would help me.
I don't think it's possible. What you can do instead is not display the cmd window at all (run it as a background worker) and make it a hidden process with system rights so that it can't be shut down until it finishes.
0
false
1
5,997
2019-03-14 00:37:45.023
regex python multiline
How can I search for patterns in text that cover multiple lines and have fixed positions relative to each other? For example, a pattern consisting of 3 letters x directly below each other, which I want to find at any position in the line, not just at the beginning. Thank you in advance for the answer!
I believe the problem you are asking about is "Find patterns that appear at the same offset in a series of lines." I do not think this describes a regular language, so you would need to draw on Python's extended regex features to have a chance at a regex-based solution. But I do not believe Python supports sufficiently extended features to accomplish this task [1]. If it is acceptable that they occur at a particular offset (rather than "any offset, so long as the offset is consistent"), then something like this should work: /^.{OFFSET}PATTERN.*\n^.{OFFSET}PATTERN.*\n^.{OFFSET}PATTERN/, using the MULTILINE flag so that ^ matches the beginning of a series of lines instead of just the beginning of the entire text. [1] In particular, you could use a backreference to capture the text preceding the desired pattern on one line, but I do not think you can query the length of the captured content "inline". You could search for the same leading text again on the next line, but that does not sound like what you want.
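A concrete version of the fixed-offset case (offset 3 here), using the MULTILINE flag:

```python
import re

text = ("ab xxx cd\n"
        "ef xxx gh\n"
        "ij xxx kl\n")

# Three consecutive lines with 'xxx' at the same fixed column (offset 3).
pattern = re.compile(r"^.{3}xxx.*\n^.{3}xxx.*\n^.{3}xxx", re.MULTILINE)
print(bool(pattern.search(text)))  # True
```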
0
false
1
5,998
2019-03-14 15:52:38.597
Faster pytorch dataset file
I have the following problem, I have many files of 3D volumes that I open to extract a bunch of numpy arrays. I want to get those arrays randomly, i.e. in the worst case I open as many 3D volumes as numpy arrays I want to get, if all those arrays are in separate files. The IO here isn't great, I open a big file only to get a small numpy array from it. Any idea how I can store all these arrays so that the IO is better? I can't pre-read all the arrays and save them all in one file because then that file would be too big to open for RAM. I looked up LMDB but it all seems to be about Caffe. Any idea how I can achieve this?
One trivial solution can be pre-processing your dataset and saving multiple smaller crops of the original 3D volumes separately. This way you sacrifice some disk space for more efficient IO. Note that you can make a trade-off with the crop size here: saving bigger crops than you need for input allows you to still do random crop augmentation on the fly. If you save overlapping crops in the pre-processing step, then you can ensure that all possible random crops of the original dataset can still be produced. Alternatively you may try using a custom data loader that retains the full volumes for a few batches. Be careful, this might create some correlation between batches. Since many machine learning algorithms rely on i.i.d. samples (e.g. Stochastic Gradient Descent), correlated batches can easily cause some serious mess.
0
false
1
5,999
2019-03-14 19:33:03.197
How does multiplexing in Django sockets work?
I am new to this part of web development and was trying to figure out a way of creating a web app with the basic specifications of the example below: User1 opens a page with a textbox (something where he can add text), and it will be modified as he decides. If user1 has problems he can invite another user2 to help with the typing. User2 (when logged in to the Channel/Socket) will be able to modify that field, and the modifications made will be shown to user1 in real time, and vice versa. Another example is a room on CodeAcademy: imagine that I am learning a new coding language; however, in the middle of it I get stuck and have to ask for help. So I go ahead and ask another user for help. This user accesses the page through a WebSocket (or something related to that). The user helps me by changing my code and adding some comments to it in real time, and I will also be able to ask questions through it (real-time communication). My question is: will I be able to develop such an app using Django Channels 2 and multiplexing? Or is it better to move to NodeJS or something related to that? Obs: I do have more experience working with python/django, so it would be more productive for me right now if I could find a way of working with this combo.
This is definitely possible. There will be lots of possibilities, but I would recommend the following. Have a page with the code on it. The page has some websocket JS code that can connect to a Channels Consumer. The JS does 2 simple things. When the code on the screen is updated, send a message to the Consumer with the new text (you can optimize this later). When the socket receives a message, replace the code on screen with the new code. In your consumer, add the consumer to a channel group when connecting (the group will contain all of the consumers that are accessing the page). When a message is received, use group_send to send it to all the other consumers. When your consumer callback function gets called, send a message to your websocket.
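A rough sketch of such a consumer with Channels 2; the group name and message shape are made up:

```python
import json
from channels.generic.websocket import AsyncWebsocketConsumer

class CodeConsumer(AsyncWebsocketConsumer):
    group_name = "code-room"  # hypothetical shared room

    async def connect(self):
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group_name,
                                               self.channel_name)

    async def receive(self, text_data=None, bytes_data=None):
        # Fan the edited code out to every consumer in the group.
        await self.channel_layer.group_send(
            self.group_name,
            {"type": "code.update", "code": json.loads(text_data)["code"]},
        )

    async def code_update(self, event):
        # Called once per consumer for each group_send; push the new
        # code down this consumer's own websocket.
        await self.send(text_data=json.dumps({"code": event["code"]}))
```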
0.386912
false
1
6,000
2019-03-14 20:28:27.727
Operating system does not meet the minimum requirements of the language server
I installed Python 3.7.2 and VSCode 1.32.1 on Mac OS X 10.10.2. In VSCode I installed the Pyhton extension and got a message saying: "Operating system does not meet the minimum requirements of the language server. Reverting to alternative, Jedi". When clicking the "More" option under the message I got information indicating that I need OS X 10.12, at least. I tried to install an older version of the extension, did some reading here and asked Google, but I'm having a hard time since I don't really know what vocabulary to use. My questions are: Will the extension work despite the error message? Do I need to solve this, and how do I do that?
The extension will work without the language server, but some thing won't work quite as well (e.g. auto-complete and some refactoring options). Basically if you remove the "python.jediEnabled" setting -- or set it to false -- and the extension works fine for you then that's the important thing. :)
1.2
true
1
6,001