Dataset schema (from the dataset viewer):
- Q_CreationDate: string (length 23)
- Title: string (length 11 to 149)
- Question: string (length 25 to 6.53k)
- Answer: string (length 15 to 5.1k)
- Score: float64 (range -1 to 1.2)
- Is_accepted: bool (2 classes)
- N_answers: int64 (range 1 to 17)
- Q_Id: int64 (range 0 to 6.76k)
2019-07-10 15:34:08.677
How to average over a specific dimension with numpy.mean?
I have a matrix called POS with shape (10, 132), and I need to average over the first dimension so that the result has shape (1, 132). I have tried means = pos.mean(axis=1) and means = np.mean(pos), but the first gives a (10,) array and the second a single number. I expect the output to be a matrix of shape (1, 132).
The solution, as several commenters noted, is to specify the correct axis and pass keepdims=True: either pos.mean(axis=0, keepdims=True) or np.mean(pos, axis=0, keepdims=True).
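A minimal sketch of that fix, with random data standing in for the (10, 132) matrix from the question:

```python
import numpy as np

# random data standing in for the (10, 132) POS matrix from the question
pos = np.random.rand(10, 132)

# axis=0 averages over the 10 rows; keepdims=True preserves the 2-D shape
means = pos.mean(axis=0, keepdims=True)
print(means.shape)  # (1, 132)
```

Without keepdims=True the reduced axis is dropped and the result would have shape (132,).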
1.2
true
1
6,182
2019-07-10 20:56:07.317
Delete empty directory from Jupyter notebook error
I am trying to delete an empty directory in Jupyter notebook. When I select the folder and click Delete, an error message pops up saying: 'A directory must be empty before being deleted.' There are no files or folders in the directory and it is empty. Any advice on how to delete it? Thank you!
Usually, Jupyter itself creates a hidden .ipynb_checkpoints folder within the directory when you inspect it. You can check for it (or any other hidden files/folders) by running ls -a in a terminal whose current working directory is that folder.
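A quick way to verify and fix this from a terminal; the folder name here is illustrative:

```shell
# simulate the situation: a seemingly empty folder holding a hidden checkpoint dir
mkdir -p myfolder/.ipynb_checkpoints
ls -a myfolder                        # reveals .ipynb_checkpoints
rm -rf myfolder/.ipynb_checkpoints    # remove the hidden folder
rmdir myfolder                        # deleting the directory now succeeds
```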
1.2
true
2
6,183
2019-07-10 20:56:07.317
Delete empty directory from Jupyter notebook error
I am trying to delete an empty directory in Jupyter notebook. When I select the folder and click Delete, an error message pops up saying: 'A directory must be empty before being deleted.' There are no files or folders in the directory and it is empty. Any advice on how to delete it? Thank you!
Go to the local directory where Jupyter stores its workbench files, e.g. C:\Users\prasadsarada. You will see all the folders you have created in Jupyter Notebook; delete the directory there.
0
false
2
6,183
2019-07-10 21:47:53.280
TCP Socket on Server Side Using Python with select on Windows
While trying to optimize my Python server, I stumbled on a concept called select. But no matter where I looked, code demonstrating its use on Windows is hard to find. Any ideas how to program a TCP server with select on Windows? I know about the idea of making the sockets non-blocking to maintain compatibility with it. Any suggestions are welcome.
Using select() under Windows is 99% the same as under other OSes, with some minor variations. The minor variations (at least the ones I know about) are:
- Under Windows, select() only works for real network sockets. In particular, don't bother trying to select() on stdin, as it won't work.
- Under Windows, if you attempt a non-blocking TCP connection and it fails asynchronously, you get notified of that failure via the third ("exception") fd_set only. (Under other OSes you are also notified that the failed-to-connect socket is ready for read/write.)
- Under Windows, select() will fail if you don't pass in at least one valid socket, so you can't use select([], [], [], timeoutInSeconds) as an alternative to time.sleep() like you can on some other OSes.
Other than that, select() for Windows is like select() for any other OS. (If your real question is about how to use select() in general, you can find information about that with a web search.)
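As a sketch, here is a minimal select()-based TCP echo loop that works the same way on Windows and elsewhere. Note that the list passed to select() always contains at least the real listening socket, which keeps it valid under Windows:

```python
import select
import socket

# minimal non-blocking TCP echo server (port 0 lets the OS pick a free port)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)

sockets = [server]  # on Windows, select() needs at least one real socket

def serve_once(timeout=1.0):
    """Handle one round of readiness events reported by select()."""
    readable, _, _ = select.select(sockets, [], [], timeout)
    for sock in readable:
        if sock is server:
            conn, _ = sock.accept()       # new client connection
            conn.setblocking(False)
            sockets.append(conn)
        else:
            data = sock.recv(4096)
            if data:
                sock.sendall(data)        # echo the data back
            else:
                sockets.remove(sock)      # client closed the connection
                sock.close()
```

In a real server, serve_once() would be called in a loop; the same structure extends to the writable and exceptional fd_sets.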
0.386912
false
1
6,184
2019-07-11 11:48:58.680
Tensorflow 2.0: Accessing a batch's tensors from a callback
I'm using TensorFlow 2.0 and trying to write a tf.keras.callbacks.Callback that reads both the inputs and outputs of my model for the batch. I expected to be able to override on_batch_end and access model.inputs and model.outputs, but they are not EagerTensors with a value I could access. Is there any way to access the actual tensor values that were involved in a batch? This has many practical uses, such as outputting these tensors to TensorBoard for debugging, or serializing them for other purposes. I am aware that I could just run the whole model again using model.predict, but that would force me to run every input through the network twice (and I might also have a non-deterministic data generator). Any idea how to achieve this?
No, there is no way to access the actual values of the input and output in a callback; that's just not part of the design goal of callbacks. Callbacks only have access to the model, the args to fit, the epoch number and some metric values. As you found, model.input and model.output only point to the symbolic KerasTensors, not actual values. To do what you want, you could take the input, stack it (maybe with a RaggedTensor) with the output you care about, and make it an extra output of your model. Then implement your functionality as a custom metric that only reads y_pred: inside your metric, unstack y_pred to get the input and output, and then visualize / serialize / etc. Another way might be to implement a custom Layer that uses py_function to call a function back in Python. This will be very slow during serious training, but may be enough for use during diagnostics / debugging.
0.386912
false
1
6,185
2019-07-11 13:34:12.703
Drone control by python
I am new to drones; can you please explain one thing: is it possible to have an RC controller programmed in Python? As I understand it, using a telemetry module and DroneKit it is possible to control a drone with Python. But the drones that support a telemetry module are usually custom drones, and as I understand it, a telemetry module does not work as well as RC. So, to keep the price down, can someone suggest a solution for controlling an RC drone using Python?
You can use Tello drones. These drones can be programmed to your requirements using Python.
0
false
1
6,186
2019-07-11 17:14:00.880
Is there a way of deleting specific text for the user in Python?
I am making a program in Python and want to clear what the user has entered. This is because I am using the keyboard function to register input as it is given, but there is still text left over after a keypress is registered, and I don't want that to happen. I was wondering if there is a module that can remove text that is being entered. Any help would be greatly appreciated, and just the name of a module is fine; I can figure out how to use it, I just can't find an appropriate one. EDIT: Sorry if I did not make myself clear. I don't really want to clear the whole screen, just what the user has typed, so that they don't have to manually backspace after their input has been taken.
sys.stdout.write (from the sys module) is what I was looking for.
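A small sketch of how sys.stdout.write can overwrite the current line instead of printing a new one (the \r carriage return and the ANSI erase code assume a normal terminal):

```python
import sys

def overwrite_line(text):
    """Return the cursor to column 0, erase the line, and rewrite it."""
    sys.stdout.write("\r\x1b[K" + text)  # \x1b[K = ANSI "erase to end of line"
    sys.stdout.flush()

overwrite_line("downloading... 50%")
overwrite_line("downloading... 100%")
sys.stdout.write("\n")
```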
0
false
1
6,187
2019-07-13 01:06:02.247
Quickest way to insert zeros into numpy array
I have a numpy array ids = np.array([1,1,1,1,2,2,2,3,4,4]) and another array of equal length vals = np.array([1,2,3,4,5,6,7,8,9,10]). Note: the ids array is sorted in ascending order. I would like to insert 4 zeros before the beginning of each new id, i.e. new_array = np.array([0,0,0,0,1,2,3,4,0,0,0,0,5,6,7,0,0,0,0,8,0,0,0,0,9,10]). The only way I am able to produce this is by iterating through the array, which is very slow, and I am not quite sure how to do it using insert, pad, or expand_dims...
You can use np.zeros and append it to your existing array, like newid = np.append(np.zeros((4,), dtype=int), ids). Good luck!
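The np.append call above only prepends zeros before the very first group. A vectorized way to insert four zeros before every new id, using np.searchsorted and np.insert (valid because ids is sorted, as stated in the question), might look like this:

```python
import numpy as np

ids = np.array([1, 1, 1, 1, 2, 2, 2, 3, 4, 4])
vals = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

# index where each new id starts (works because ids is sorted ascending)
starts = np.searchsorted(ids, np.unique(ids))

# repeating each start index 4 times makes np.insert place 4 zeros there,
# all in one call, with no Python-level loop
out = np.insert(vals, np.repeat(starts, 4), 0)
```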
0
false
1
6,188
2019-07-13 23:37:40.520
How can I put my curvilinear coordinate data on a map projection?
I'm working with NetCDF files from NCAR and I'm trying to plot sea-ice thickness. This variable is on a curvilinear (TLAT,TLON) grid. What is the best way to plot this data on a map projection? Do I need to re-grid it to a regular grid or is there a way to plot it directly? I'm fairly new to Python so any help would be appreciated. Please let me know if you need any more information. Thank you! I've tried libraries like iris, scipy, and basemap, but I couldn't really get a clear explanation on how to implement them for my case.
I am pretty sure you can use methods like contour, contourf and pcolormesh from Python's matplotlib directly, without re-gridding the data. The same methods work with Basemap.
1.2
true
1
6,189
2019-07-15 07:29:41.747
How to use natural language generation from a CSV file input? Which Python module should we use? Can anyone share a sample tutorial?
Take a CSV file as input and generate text/sentences using NLG. I have tried pynlg and Markov chains, but nothing worked. What else can I use?
There are not many Python libraries for NLG. Try nlglib, a Python wrapper around SimpleNLG. For tutorial purposes, you could read Building Natural Language Generation Systems by E. Reiter.
-0.386912
false
1
6,190
2019-07-15 10:55:05.307
How to run code using cmd from Sublime Text 3
I am a newbie in Python and have a problem. When I write Python in Sublime Text 3 and run it directly there, it does not find some Python libraries that I have imported. I Googled this problem and found out Sublime Text is just a text editor. I already have code in a Sublime Text 3 file; how can I run it without this error? For example: ModuleNotFoundError: No module named 'matplotlib'. I think it should be run from cmd, but I don't know how.
Depending on what OS you are using, this is easy. On Windows, press Win+R, then type cmd; this opens a command prompt. Type pip install matplotlib to make sure your module is installed. Then navigate to the folder your code is located in: type cd Documents to get to your documents, and cd again for each subsequent folder. Try typing python and hitting Enter: if a Python shell opens, type quit(), then python filename.py, and your script will run. If no Python shell opens, you need to change your environment variables. Press the Windows key and Pause/Break at the same time, click Advanced system settings, then Environment Variables. Double-click Path, press New, and locate the installation folder of your Python install, which may be in C:\Users\YOURUSERNAME\AppData\Local\Programs\Python\Python36. Put in that path and press OK. You should now be able to run python from your command line.
1.2
true
1
6,191
2019-07-15 11:09:48.323
how can I use gpiozero robot library to change speeds of motors via L298N
On my Raspberry Pi, I need to run two motors with an L298N. I can use PWM on the enable pins to change speeds, but I saw that the gpiozero robot library can make things a lot easier. When using the gpiozero robot library, how can I alter the speeds of those motors by signalling the enable pins?
I have exactly the same situation. You can of course program the motors separately, but it is nice to use the robot class. Looking into the gpiozero code for this, I find that in our case the left and right tuples take a third element, which is the pin for PWM motor speed control (GPIO pins 12, 13, 18 and 19 have hardware PWM support). The first two output pins in the tuple are signalled as 1, 0 for forward and 0, 1 for back. So here is my line of code: Initio = Robot(left=(4, 5, 12), right=(17, 18, 13)). Hope it works for you! I have some interesting code in the works for controlling the robot's absolute position, so it can explore its environment.
0.201295
false
1
6,192
2019-07-15 20:45:58.190
Does Kivy have laserjet printer support?
Is there a way to print a Page/Widget/Label in Kivy (or some other way in Python)? Unfortunately, I don't know how to ask the question correctly, since I am new to software development. I want to build a price-tracking app for my business, in which I will have to print some things.
Not directly, no, but the printing part isn't really Kivy's responsibility - probably you can find another Python module to handle this. In terms of what is printed, you can export an image of any part of the Kivy gui and print that.
0.673066
false
1
6,193
2019-07-16 11:12:04.277
How to access python dictionary from C?
I have a dictionary in Python, and I need to access that dictionary from a C program, or, for example, convert it into a struct/map in C. I don't have any idea how this could be done; I would be happy to get some hints, or pointers to any libraries that could help. Update: the dictionary is generated from the abstract syntax tree of a C program using pycparser. I wrote a Python function to generate this dictionary, and I can dump it using pickle or save it as a text file. Now I want to use the keys and their values from a C program, and I don't know how to access the dictionary.
You could export the dictionary to JSON and parse the JSON file from C...
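The Python side of that could be as simple as the sketch below; the file name and dictionary contents are illustrative. On the C side, a JSON library such as cJSON or jansson would then read the file:

```python
import json

# illustrative stand-in for the dictionary built from the pycparser AST
ast_info = {"FuncDef": "main", "params": ["argc", "argv"]}

# write it as JSON, a format that C parsers handle easily
with open("ast.json", "w") as f:
    json.dump(ast_info, f)
```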
0.386912
false
1
6,194
2019-07-16 12:46:43.023
Flask non-template HTML files included by Jinja
In my Flask application, I have one html file that holds some html and some js that semantically belongs together and cannot be used separately in a sensible way. I include this file in 2 of my html templates by using Jinja's {%include ... %}. Now my first approach was to put this file in my templates folder. However, I never call render_template on this file, so it seems unapt to store it in that directory. Another approach would be to put it into the static folder, since its content is indeed static. But then I don't know how to tell Jinja to look for it in a different directory, since all the files using Jinja are in the templates folder. Is there a way to accomplish this with Jinja, or is there a better approach altogether?
You're over-thinking this. If it's included by Jinja, then it's a template file and belongs in the templates directory.
1.2
true
1
6,195
2019-07-16 13:11:16.803
Keras, Tensorflow are reserving all GPU memory on model build
My GPU is an NVIDIA RTX 2080 Ti, with Keras 2.2.4, tensorflow-gpu 1.12.0 and CUDA 10.0. Once I build a model (before compilation), I find that GPU memory is fully allocated: [0] GeForce RTX 2080 Ti | 50'C, 15 % | 10759 / 10989 MB | issd/8067(10749M). What could be the reason, and how can I debug it? I don't have spare memory left to load the data, even if I load via generators. I have monitored the GPU memory usage and found it is full just after building the layers (before compiling the model).
I met a similar problem when loading a pre-trained ResNet50: GPU memory usage surged to 11 GB, while ResNet50 usually consumes less than 150 MB. The problem in my case was that I also imported PyTorch without actually using it in my code; after commenting it out, everything worked fine. But I had another PC where the same code worked fine, so I uninstalled and reinstalled TensorFlow and PyTorch with the correct versions. Then everything worked fine, even with PyTorch imported.
0
false
1
6,196
2019-07-16 13:41:47.907
How is every object related to PyObject when C does not have inheritance?
I have been going through the source code of Python. It looks like every object is derived from PyObject. But in C there is no concept of object-oriented programming, so how exactly is this implemented without inheritance?
What makes the object-oriented programming paradigm is the relation between "classes" as templates for a data set and the functions that operate on that data set, plus the inheritance mechanism, which relates a class to its ancestor classes. These relations, however, do not depend on a particular language syntax; they just have to be present in some way. So nothing stops one from doing "object orientation" in C, and in fact well-organized libraries, even without an OO framework, end up with an OO-like organization. It happens that the Python object system is entirely defined in pure C, with objects having a __class__ slot that points to their class via a C pointer; only when viewed from Python is the full representation of the class presented. Classes in their turn have __mro__ and __bases__ slots that point to the different arrangements of superclasses (the pointers this time are to containers that are seen from Python as sequences). So, when coding in C against the definitions and API of the Python runtime, one can use OOP just as when coding in Python, and in fact create Python objects that are interoperable with the Python language. (The Cython project will even transpile a superset of the Python language to C and provide transparent ways of writing native code with Python syntax.) There are other frameworks available for C that provide different, equally conformant OOP systems, for example glib, which defines "gobject" and is the base for all GTK+ and GNOME applications.
0.386912
false
1
6,197
2019-07-16 15:52:02.673
How can I verify if there is an incoming message to my node with MPI?
I'm doing a project using Python with MPI. Every node of my project needs to know if there is any incoming message for it before continuing with other tasks. I'm working on a system where multiple nodes execute operations. Some nodes may need the outputs of other nodes and therefore need to know when that output is available. For illustration purposes, consider two nodes, A and B. A needs the output of B to execute its task, but if the output is not available, A needs to do some other tasks and then check again whether B has sent its output, in a loop. What I want to implement is this check in A for the availability of output from B. I did some research and found a method called probe, but I neither understood it nor found useful documentation about what it does or how to use it, so I don't know if it solves my problem. The idea of what I want is very simple: I just need to check whether there is data to be received when I use the "recv" method of mpi4py. If there is, the code does some tasks; if there isn't, it does some other tasks.
(elaborating on Gilles Gouaillardet's comment) If you know you will eventually receive a message, but want to be able to run some computations while it is being prepared and sent, you want to use non-blocking receives, not probe. Basically use MPI_Irecv to setup a receive request as soon as possible. If you want to know whether the message is ready yet, use MPI_Test to check the request. This is much better than using probes, because you ensure that a receive buffer is ready as early as possible and the sender is not blocked, waiting for the receiver to see that there is a message and post the receive. For the specific implementation you will have to consult the manual of the Python MPI wrapper you use. You might also find helpful information in the MPI standard itself.
1.2
true
1
6,198
2019-07-17 14:40:06.277
Getting the Raw Data Out of an Excel Pivot Table in Python
I have a pivot table in Excel, and I want to read the raw data behind that table into Python. Is this possible? I do not see anything about it in the documentation or on Stack Overflow. If the community could provide some examples of how to read the raw data that drives pivot tables, this could greatly assist in routine analytical tasks. EDIT: In this scenario there are no raw data tabs. I want to know how to query the pivot table, get the raw data, and read it into Python.
First, recreate the raw data from the pivot table; the pivot table has full information to rebuild it:
- Make sure that none of the items in the pivot table fields are hidden: clear all the filters and Slicers that have been applied.
- The pivot table does not need to contain all the fields; just make sure that there is at least one field in the Values area.
- Show the grand totals for rows and columns. If the totals aren't visible, select a cell in the pivot table, and on the Ribbon, under PivotTable Tools, click the Analyze tab; in the Layout group, click Grand Totals, then On for Rows and Columns.
- Double-click the grand total cell at the bottom right of the pivot table. This should create a new sheet with the related records from the original source data.
Then you can read the raw data from that source.
0
false
1
6,199
2019-07-18 11:24:22.350
how to add custom Keras model in OpenCv in python
I have created a model for classifying two types of shoes. Now, how do I deploy it in OpenCV (video object detection)? Thanks in advance.
You would save the model to an H5 file with model.save("modelname.h5"), then load it in your OpenCV code with load_model("modelname.h5"). Then, in a loop, detect the objects you find via model.predict(imageROI).
0
false
1
6,200
2019-07-18 17:58:17.687
How to remove a virtualenv which is created by PyCharm?
Since I have selected my project's interpreter as Pipenv during project creation, PyCharm has automatically created the virtualenv. Now, when I try to remove the virtualenv via pipenv --rm, I get the error You are attempting to remove a virtualenv that Pipenv did not create. Aborting. So, how can I properly remove this virtualenv?
The pipenv command actually runs from within the virtualenv, so it can't remove itself. You should close the project and remove the virtualenv while it is not activated.
0.995055
false
1
6,201
2019-07-18 22:58:53.810
How to deal with infrequent data in a time series prediction model
I am trying to create a basic model for stock price prediction, and some of the features I want to include come from the company's quarterly earnings report (every 3 months). For example, if my data features are Date, OpenPrice, ClosePrice, Volume and LastQrtrRevenue, how do I include LastQrtrRevenue if I only have a value for it every 3 months? Do I leave the other days blank (or null), or should I carry LastQrtrRevenue forward as a constant and just update it on the day the new figures are released? If anyone has any feedback on dealing with data that is released infrequently but is important to include, please share. Thank you in advance.
I would be tempted to put the last quarter revenue in a separate table, with a date field representing when that quarter began (or ended; it doesn't really matter). Then you can write queries that work the way that most suits your application. You could certainly reconstitute the view you mention above using that table, as long as you can relate it to the main table: just join on company name while selecting the max() from the last-quarter-revenue table.
0.386912
false
1
6,202
2019-07-19 11:26:59.093
Compare list items in pythonic way
For a list, say l = [1, 2, 3, 4], how do I compare l[0] < l[1] < l[2] < l[3] in a pythonic way?
Another way would be to compare the list against a sorted copy. Note that the .sort() method works in place and returns None, so you would use sorted(), which returns a new list altogether: l == sorted(l).
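For example, using the copy-returning sorted() rather than the in-place list.sort():

```python
l = [1, 2, 3, 4]

# compare against a sorted copy: sorted() returns a new list,
# while list.sort() sorts in place and returns None
is_ascending = l == sorted(l)

# the sorted-copy check allows equal neighbours; a strict
# l[0] < l[1] < ... comparison needs a pairwise test
is_strictly_ascending = all(a < b for a, b in zip(l, l[1:]))
print(is_ascending, is_strictly_ascending)  # True True
```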
0
false
1
6,203
2019-07-19 18:13:40.823
How does log in spark stage/tasks help in understanding actual spark transformation it corresponds to
When debugging Spark job failures, we can often find the stage and task responsible for the failure (such as a String Index Out of Bounds exception), but it is difficult to understand which transformation caused it. The UI shows information such as Exchange/HashAggregate/Aggregate, but finding the actual transformation responsible becomes really difficult in 500+ lines of code. How can I debug Spark task failures and trace the transformation responsible for them?
Break your execution down. It's the easiest way to understand where the error might be coming from. Running a 500+ line of code for the first time is never a good idea. You want to have the intermediate results while you are working with it. Another way is to use an IDE and walk through the code. This can help you understand where the error originated from. I prefer PyCharm (Community Edition is free), but VS Code might be a good alternative too.
0
false
1
6,204
2019-07-20 16:40:06.663
How to host a Python script on the cloud?
I wrote a Python script which scrapes a website and sends emails if a certain condition is met. It repeats itself every day in a loop. I converted the Python file to an EXE and it runs as an application on my computer. But I don't think this is the best solution to my needs since my computer isn't always on and connected to the internet. Is there a specific website I can host my Python code on which will allow it to always run? More generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake? Thanks in advance!
Well, I think one of the best options is pythonanywhere.com. There you can upload your Python script (script.py), run it, and you're done. I did this with my Telegram bot.
0.386912
false
2
6,205
2019-07-20 16:40:06.663
How to host a Python script on the cloud?
I wrote a Python script which scrapes a website and sends emails if a certain condition is met. It repeats itself every day in a loop. I converted the Python file to an EXE and it runs as an application on my computer. But I don't think this is the best solution to my needs since my computer isn't always on and connected to the internet. Is there a specific website I can host my Python code on which will allow it to always run? More generally, I am trying to get the bigger picture of how this works. What do you actually have to do to have a Python script running on the cloud? Do you just upload it? What steps do you have to undertake? Thanks in advance!
You can deploy your application using AWS Elastic Beanstalk. It will provide you with a whole Python environment, along with server configuration you can change according to your needs. It's a PaaS offering from the AWS cloud.
0.265586
false
2
6,205
2019-07-20 19:16:43.307
Using Python Libraries or Codes in My Java Application
I'm trying to build an OCR desktop application using Java and, to do this, I have to use libraries and functions that were created using the Python programming language, so I want to figure out: how can I use those libraries inside my Java application? I have already seen Jython, but it is only useful for cases when you want to run Java code in Python; what I want is the other way around (using Python code in Java applications).
I have worked on projects where Python was used for ML (machine learning) tasks and everything else was written in Java. We separated the execution environments entirely: instead of mixing Python and Java in some esoteric way, you create independent services (one for Python, one for Java), and then handle inter-process communication via HTTP, messaging, or some other mechanism. "Microservices", if you will.
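A minimal sketch of the Python side of such a setup, exposing the OCR code as an HTTP service the Java application can call. The /ocr endpoint and the run_ocr() placeholder are illustrative, not part of any real OCR library:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_ocr(image_bytes):
    # placeholder: the real Python OCR library call would go here
    return {"text": "placeholder result"}

class OcrHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read the raw image bytes POSTed by the Java client
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        result = json.dumps(run_ocr(body)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(result)))
        self.end_headers()
        self.wfile.write(result)

    def log_message(self, *args):
        pass  # keep the demo quiet

# port 0 lets the OS pick a free port
server = HTTPServer(("127.0.0.1", 0), OcrHandler)
```

The Java side would then simply POST the image bytes (for example with java.net.http.HttpClient) and parse the JSON response.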
1.2
true
1
6,206
2019-07-21 23:11:36.943
Spotfire - Dynamically creating buttons using
Hello, I am creating a Spotfire dashboard which I would like to be reusable each year. Currently my layout is a page with 8 buttons containing the names of stores; when one is clicked, Spotfire applies a filter so that only information relating to that store shows. (These were created manually, one by one.) Is there a way to automate this with JS or IronPython, so that a button is automatically created for each store, with an action control on each button that applies that store's filter? I have looked around but cannot find anything about dynamically creating buttons. I am not asking you to code this for me, but if someone can point me towards some resources or the general logic of how this could be done, it would be much appreciated.
Why not just put a text area on your page? Inside the text area, you add a filter control that filters the data the way you want. With this approach you have no elements to create dynamically, which is just as well, because it is impossible to create Spotfire controls dynamically.
0.135221
false
3
6,207
2019-07-21 23:11:36.943
Spotfire - Dynamically creating buttons using
Hello I am creating a spotfire dashboard which I would like to be reusable for each year. Currently my layout is designed as a page with 8 buttons containing the names of stores, if clicked on, spotfire applies a filter so that only informations relating to that store shows. (these were individually created manually) Is there a way to automate this with JS or Iron Python, so that for each store a button is automatically created, and in action control for each button is to apply that stores filter? I have looked around but cannot find anything relating to dynamically creating buttons. Not asking for you to code this for me, but if someone can point me towards some resources or general logic on how this could be done it would be much appreciated.
I think txemsukr is right: this is not possible. To do it with JS or IronPython, the API would have to exist, and several of the elements you mentioned (action controls) cannot be controlled with the API.
0.135221
false
3
6,207
2019-07-21 23:11:36.943
Spotfire - Dynamically creating buttons using
Hello, I am creating a Spotfire dashboard which I would like to be reusable each year. Currently my layout is a page with 8 buttons containing the names of stores; when one is clicked, Spotfire applies a filter so that only information relating to that store shows. (These were created manually, one by one.) Is there a way to automate this with JS or IronPython, so that a button is automatically created for each store, with an action control on each button that applies that store's filter? I have looked around but cannot find anything about dynamically creating buttons. I am not asking you to code this for me, but if someone can point me towards some resources or the general logic of how this could be done, it would be much appreciated.
Instead of buttons, why not use a dropdown populated by the unique values in the "store names" column to set a document property, and have your data listing limit the data to [store_name] = ${store_name}?
0.265586
false
3
6,207
2019-07-22 05:27:22.927
Can I use GCP for training only but predict with my own AI machine?
My laptop has trouble training on a big dataset, but not with predicting. Can I use Google Cloud Platform for training only, then export and download some sort of weights or model, so I can use it on my own laptop? If so, how do I do it?
Decide if you want to use Tensorflow or Keras etc. Prepare scripts to train and save model, and another script to use it for prediction. It should be simple enough to use GCP for training and download the model to use on your machine. You can choose to use a high end machine (lot of memory, cores, GPU) on GCP. Training in distributed mode may be more complex. Then download the model and use it on local machine. If you run into issues, post your scripts and ask another question.
0
false
1
6,208
2019-07-23 06:15:00.117
Define dynamic environment path variables for different system configuration
I'm not sure if this is a valid question, but I'm stuck on it. I have a Python script which does some operation on my local system at Users/12345/Desktop/Sample/one.py. I want the same script to run on a remote server whose path is Server/Users/23552/Dir/ASR/Desktop/Sample/one.py. I know how to do this in PHP by defining a path (an APP_HOME of sorts), but I'm baffled in Python. Can someone please help me?
You can always use a relative path; I suspect a relative path should solve your issue.
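A sketch of the usual pattern: resolve files relative to the script itself, so the same code works under both Users/12345/... and Server/Users/23552/... without hard-coded paths (the "Sample/data.txt" file name is illustrative):

```python
import os

# resolve paths relative to the script's own location, falling back to the
# current working directory when __file__ is undefined (e.g. interactive use)
if "__file__" in globals():
    BASE_DIR = os.path.dirname(os.path.abspath(__file__))
else:
    BASE_DIR = os.getcwd()

# an illustrative file that sits next to one.py on either machine
data_path = os.path.join(BASE_DIR, "Sample", "data.txt")
```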
0
false
1
6,209
2019-07-23 06:31:38.687
how to get best fit line when we have data on vertical line?
I started learning linear regression, and I was solving this problem: when I draw a scatter plot of the independent variable against the dependent variable, I get vertical lines. I have 0.5M samples. The x-axis data lies within a range of, say, 0-20, so I get multiple target values for the same x-axis point, which draws a vertical line. My question is: is there any way I can transform the data so that it doesn't form vertical lines and I can get my model working? There are 5-6 independent variables that show the same pattern. Thanks in advance.
Instead of fitting y as a function of x, in this case you should fit x as a function of y.
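A small sketch of the idea with synthetic data (numpy assumed available): when many y values share one x, swap the roles and fit x as a linear function of y:

```python
import numpy as np

# Synthetic data: x depends linearly on y, so the y-vs-x scatter looks
# like vertical lines, but x-vs-y is an ordinary regression problem.
rng = np.random.default_rng(0)
y = rng.uniform(0, 100, size=1000)
x = 0.2 * y + rng.normal(0, 0.5, size=1000)

# Fit x = a*y + b instead of y = f(x).
a, b = np.polyfit(y, x, deg=1)
print(round(a, 2))  # close to 0.2
```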
0
false
1
6,210
2019-07-23 23:56:46.253
How to best(most efficiently) read the first sheet in Excel file into Pandas Dataframe?
Loading the excel file using read_excel takes quite long. Each Excel file has several sheets. The first sheet is pretty small and is the sheet I'm interested in but the other sheets are quite large and have graphs in them. Generally this wouldn't be a problem if it was one file, but I need to do this for potentially thousands of files and pick and combine the necessary data together to analyze. If somebody knows a way to efficiently load in the file directly or somehow quickly make a copy of the Excel data as text that would be helpful!
The method read_excel() reads the data into a Pandas DataFrame, where the first parameter is the filename and the second selects the sheet. Pass only the first sheet (by index or name) so only that sheet is parsed into a DataFrame: df = pd.read_excel('File.xlsx', sheet_name=0). (The parameter was called sheetname in older pandas versions; it is sheet_name now.)
-0.201295
false
1
6,211
2019-07-24 04:43:32.027
How can I combine 2 pivot ( Sale and Pos Order ) into 1 pivot view on my new module?
I have a problem because I'm new guy in Odoo 11, my task is combine 2 pivot ( Sales and Pos Order ) into 1 pivot view of new Module that i create. So how can i do this? step by step, because I'm just new guy. Please help me, thanks in advance
You can use SELECT queries over both models; there is no need for a shared field or relation, you can just use UNION ALL with matching column lists. For example: SELECT <your fields> FROM pos_order po LEFT JOIN pos_order_line pol ON (pol.order_id = po.id) UNION ALL SELECT <the same fields> FROM sale_order so LEFT JOIN sale_order_line sol ON (sol.order_id = so.id). Hope this helps; don't forget to define the fields you want to show in the pivot view.
0
false
1
6,212
2019-07-24 09:38:40.803
What is the difference between the _tkinter and tkinter modules?
I am also trying to understand how to use Tkinter so could you please explain the basics?
What is the difference between the _tkinter and tkinter modules? _tkinter is a C-based module that exposes an embedded tcl/tk interpreter. When you import it, and only it, you get access to this interpreter but you do not get access to any of the tkinter classes. This module is not designed to be imported by python scripts. tkinter provides python-based classes that use the embedded tcl/tk interpreter. This is the module that defines Tk, Button, Text, etc.
0.386912
false
1
6,213
2019-07-24 13:11:20.187
How to send a query or stored procedure execution request to a specific location/region of cosmosdb?
I'm trying to multi-thread some tasks using cosmosdb to optimize ETL time, and I can't find how, using the python API (but I could do something in REST if required) if I have a stored procedure to call twice for two partitions keys, I could send it to two different regions (namely 'West Europe' and 'Central France) I defined those as PreferredLocations in the connection policy but don't know how to include to a query, the instruction to route it to a specific location.
The only place you could specify that on would be the options objects of the requests. However there is nothing related to the regions. What you can do is initialize multiple clients that have a different order in the preferred locations and then spread the load that way in different regions. However, unless your apps are deployed on those different regions and latency is less, there is no point in doing so since Cosmos DB will be able to cope with all the requests in a single region as long as you have the RUs needed.
1.2
true
1
6,214
2019-07-25 19:40:43.613
Is it possible to start using Django's migration system after years of not using it?
A project I recently joined, for various reasons, decided not to use Django migration system and uses our own system (which is similar enough to Django's that we could possibly automate translations) Primary Question Is it possible to start using Django's migration system now? More Granular Question(s) Ideally, we'd like to find some way of saying "all our tables and models are in-sync (i.e. there is no need to create and apply any migrations), Django does not need to produce any migrations for any existing model, only for changes we make. Is it possible to do this? Is it simply a case of "create the django migration table, generate migrations (necessary?), and manually update the migration table to say that they've all been ran"? Where can I find more information for how to go about doing this? Are there any examples of people doing this in the past? Regarding SO Question Rules I didn't stop to think for very long about whether or not this is an "acceptable" question to ask on SO. I assume that it isn't due to the nature of the question not having a clear, objective set of criteria for a correct answer. however, I think that this problem is surely common enough, that it could provide an extremely valuable resource for anyone in my shoes in the future. Please consider this before voting to remove.
I think you should probably be able to do manage.py makemigrations (you might need to use each app name the first time) which will create the migrations files. You should then be able to do manage.py migrate --fake which will mimic the migration run without actually impacting your tables. From then on (for future changes), you would run makemigrations and migrate as normal.
0.386912
false
1
6,215
2019-07-25 19:50:15.017
Peewee incrementing an integer field without the use of primary key during migration
I have a table I need to add columns to it, one of them is a column that dictates business logic. So think of it as a "priority" column, and it has to be unique and a integer field. It cannot be the primary key but it is unique for business logic purposes. I've searched the docs but I can't find a way to add the column and add default (say starting from 1) values and auto increment them without setting this as a primarykey.. Thus creating the field like example_column = IntegerField(null=False, db_column='PriorityQueue',default=1) This will fail because of the unique constraint. I should also mention this is happening when I'm migrating the table (existing data will all receive a value of '1') So, is it possible to do the above somehow and get the column to auto increment?
It should definitely be possible, especially outside of peewee. You can definitely make a counter that starts at 1 and increments to the stop and at the interval of your choice with range(). You can then write each incremented variable to the desired field in each row as you iterate through.
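A sketch of the backfill idea in plain Python (the rows here are hypothetical dicts standing in for your existing records; in a real migration you would iterate over the model's rows and save each one):

```python
from itertools import count

# Hypothetical existing rows being migrated; each needs a unique,
# auto-incrementing priority starting at 1.
rows = [{"name": "a"}, {"name": "b"}, {"name": "c"}]

priority = count(start=1)
for row in rows:
    row["PriorityQueue"] = next(priority)

print([r["PriorityQueue"] for r in rows])  # [1, 2, 3]
```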
1.2
true
2
6,216
2019-07-25 19:50:15.017
Peewee incrementing an integer field without the use of primary key during migration
I have a table I need to add columns to it, one of them is a column that dictates business logic. So think of it as a "priority" column, and it has to be unique and a integer field. It cannot be the primary key but it is unique for business logic purposes. I've searched the docs but I can't find a way to add the column and add default (say starting from 1) values and auto increment them without setting this as a primarykey.. Thus creating the field like example_column = IntegerField(null=False, db_column='PriorityQueue',default=1) This will fail because of the unique constraint. I should also mention this is happening when I'm migrating the table (existing data will all receive a value of '1') So, is it possible to do the above somehow and get the column to auto increment?
Depends on your database, but postgres uses sequences to handle this kind of thing. Peewee fields accept a sequence name as an initialization parameter, so you could pass it in that manner.
0
false
2
6,216
2019-07-26 08:32:45.950
How to check if a url is valid in Scrapy?
I have a list of url and many of them are invalid. When I use scrapy to crawl, the engine will automatically filter those urls with 404 status code, but some urls' status code aren't 404 and will be crawled so when I open it, it says something like there's nothing here or the domain has been changed, etc. Can someone let me know how to filter these types of invalid urls?
In your callback (e.g. parse) implement checks that detect those cases of 200 responses that are not valid, and exit the callback right away (return) when you detect one of those requests.
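A sketch of the validity check you would run at the top of the callback. The marker phrases are assumptions; adjust them to the actual wording of the "soft 404" pages you see. In the Scrapy callback itself you would call this on `response.text` and `return` immediately when it fails:

```python
# Hypothetical phrases that mark an invalid page served with HTTP 200.
INVALID_MARKERS = ("there's nothing here", "domain has been changed")

def looks_valid(page_text: str) -> bool:
    """Return False for pages that are 200 OK but have no real content."""
    lowered = page_text.lower()
    return not any(marker in lowered for marker in INVALID_MARKERS)

print(looks_valid("Welcome to the product page"))      # True
print(looks_valid("Sorry, there's nothing here now"))  # False
```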
0
false
1
6,217
2019-07-27 18:25:21.400
Unable to view django error pages on Google Cloud web app
Settings.py DEBUG=True But the django web application shows Server Error 500. I need to see the error pages to debug what is wrong on the production server. The web application works fine in development server offline. The google logs does not show detail errors. Only shows the http code of the request.
Thank you all for replying to my question. The project had prod.py (production settings, DEBUG=False) and dev.py (development settings). When python manage.py is run locally it loads dev.py (DEBUG=True). However, when I push to Google App Engine, main.py specifies how to run the application: main.py calls wsgi.py, which loads prod.py (DEBUG=False). That is why the Django error pages were not showing. I really appreciate you all: VictorTorres, Mahirq9 and ParthS007.
0
false
1
6,218
2019-07-28 11:14:17.907
How to turn the image in the correct orientation?
I have a paper on which there are scans of documents, I use tesseract to recognize the text, but sometimes the images are in the wrong orientation, then I cut these documents from the sheet and work with each one individually, but I need to turn them in the correct position, how to do it?
I'm not sure if there is a simple built-in way, but you can rotate the document whenever you don't find adequate characters in it; if you see letters, the document is in the correct orientation. As I understand it you use a parser, so the check can be very simple: if it extracts fewer than, say, 5 keys, the document is probably rotated incorrectly, so rotate it and try again.
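The retry idea above can be sketched like this. `recognize` and `rotate` are stand-ins for your real calls (e.g. `pytesseract.image_to_string` and `PIL.Image.rotate`); here they are injected so the logic is testable, and the fake OCR only "reads" the 180-degree rotation:

```python
def best_orientation(image, recognize, rotate):
    """Try each 90-degree rotation and keep the one where OCR
    finds the most words."""
    best_angle, best_count = 0, -1
    for angle in (0, 90, 180, 270):
        words = recognize(rotate(image, angle)).split()
        if len(words) > best_count:
            best_angle, best_count = angle, len(words)
    return best_angle

# Fake rotate/OCR pair for illustration only.
fake_rotate = lambda img, angle: (img, angle)
fake_recognize = lambda rotated: "some readable words" if rotated[1] == 180 else ""
print(best_orientation("scan", fake_recognize, fake_rotate))  # 180
```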
1.2
true
2
6,219
2019-07-28 11:14:17.907
How to turn the image in the correct orientation?
I have a paper on which there are scans of documents, I use tesseract to recognize the text, but sometimes the images are in the wrong orientation, then I cut these documents from the sheet and work with each one individually, but I need to turn them in the correct position, how to do it?
If all scans are in same orientation on the paper, then you can always try rotating it in reverse if tesseract is causing the problem in reading. If individual scans can be in arbitrary orientation, then you will have to use the same method on individual scans instead.
0.201295
false
2
6,219
2019-07-29 14:50:34.170
Tensorflow Serving number of requests in queue
I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? I tried using Prometheus, but there is no such option.
Actually, TensorFlow Serving doesn't have a request queue, which means that it won't queue up requests when there are too many. The only thing it does is allocate a thread pool when the server is initialized. When a request comes in, it uses a free thread to handle it; if there are no free threads, it returns an "unavailable" error, and the client should retry later. You can find this information in the comments of tensorflow_serving/batching/streaming_batch_scheduler.h
1.2
true
2
6,220
2019-07-29 14:50:34.170
Tensorflow Serving number of requests in queue
I have my own TensorFlow serving server for multiple neural networks. Now I want to estimate the load on it. Does somebody know how to get the current number of requests in a queue in TensorFlow serving? I tried using Prometheus, but there is no such option.
What's more, you can set the number of threads with --rest_api_num_threads, or leave it unset and let TF Serving configure it automatically.
0
false
2
6,220
2019-07-31 10:20:06.377
speed up pandas search for a certain value not in the whole df
I have a large pandas DataFrame consisting of some 100k rows and ~100 columns with different dtypes and arbitrary content. I need to assert that it does not contain a certain value, let's say -1. Using assert( not (any(test1.isin([-1]).sum()>0))) results in processing time of some seconds. Any idea how to speed it up?
Just to make a full answer out of my comment: With -1 not in test1.values you can check if -1 is in your DataFrame. Regarding the performance, this still needs to check every single value, which is in your case 10^5*10^2 = 10^7. You only save with this the performance cost for summation and an additional comparison of these results.
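A small illustration of that check on a toy frame (pandas assumed available; `df.values` flattens the frame to one numpy array, on which `in` performs a vectorised any-equal test):

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 1, 2], "b": [3.5, 4.0, 5.5]})

# Vectorised membership test over the whole frame:
print(-1 not in df.values)   # True: no cell equals -1

df.loc[1, "a"] = -1
print(-1 not in df.values)   # False: now one cell is -1
```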
1.2
true
1
6,221
2019-07-31 13:05:46.203
Is it possible to write a Python web scraper that plays an mp3 whenever an element's text changes?
Trying to figure out how to make python play mp3s whenever a tag's text changes on an Online Fantasy Draft Board (ClickyDraft). I know how to scrape elements from a website with python & beautiful soup, and how to play mp3s. But how do you think can I have it detect when a certain element changes so it can play the appropriate mp3? I was thinking of having the program scrape the site every 0.5seconds to detect the changes, but I read that that could cause problems? Is there any way of doing this?
The only way is to scrape the site on a regular basis, and 0.5 s is too fast. I don't know how time-sensitive this project is, but scraping every 1/5/10 minutes is usually good enough. If you need it quicker, get a proxy (plenty of free ones out there) and you can scrape the site more often. Just try to respect the site: don't consume too much of its resources by requesting every 0.5 seconds.
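A minimal sketch of such a polling loop. `fetch` stands in for your real scraping call (requests + BeautifulSoup); it's injected here so the loop is testable without network access:

```python
import time

def poll(fetch, interval_seconds, max_polls):
    """Call fetch() at a fixed interval and collect the results."""
    results = []
    for _ in range(max_polls):
        results.append(fetch())
        time.sleep(interval_seconds)
    return results

# Demo with a fake fetcher and a tiny interval.
counter = iter(range(100))
print(poll(lambda: next(counter), interval_seconds=0.01, max_polls=3))  # [0, 1, 2]
```

In the real program, each fetched result would be compared with the previous one, and the mp3 played when the watched tag's text changes.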
1.2
true
1
6,222
2019-07-31 15:27:06.880
How to point Django app to new DB without dropping the previous DB?
I am working on Django app on branch A with appdb database in settings file. Now I need to work on another branch(B) which has some new DB changes(eg. new columns, etc). The easiest for me is to point branch B to a different DB by changing the settings.py and then apply the migrations. I did the migrations but I am getting error like 1146, Table 'appdb_b.django_site' doesn't exist. So how can I use a different DB for my branchB code without dropping database appdb?
The existing migration files have information that causes the migrate command to believe that the tables should exist and so it complains about them not existing. You need to MOVE the migration files out of the migrations directory (everything except init.py) and then do a makemigrations and then migrate.
0.386912
false
1
6,223
2019-08-01 09:16:33.917
Stop music from playing in headless browser
I was learning how to play music using selenium so I wrote a program which would be used as a module to play music. Unfortunately I exited the python shell without exiting the headless browser and now the song is continuously playing. Could someone tell me how I can find the current headless browser and exit it?
If you are on a Linux box, you can easily find the process ID with ps aux | grep chrome and kill it. If you are on Windows, kill the process via Task Manager.
0.201295
false
2
6,224
2019-08-01 09:16:33.917
Stop music from playing in headless browser
I was learning how to play music using selenium so I wrote a program which would be used as a module to play music. Unfortunately I exited the python shell without exiting the headless browser and now the song is continuously playing. Could someone tell me how I can find the current headless browser and exit it?
You need your script to stop the music and quit the driver before exiting, so the headless browser session is not left running.
0.201295
false
2
6,224
2019-08-01 13:21:36.583
How to install a new python module on VSCode?
I'm trying to install new python modules on my computer and I know how to install through the terminal, but I wish to know if there is a way to install a new module directly through VSCode (like it is possible on PyCharm)? I already installed through the terminal, it isn't a problem, but I want to install without be obligate to open the terminal when I'm working on VSCode.
Unfortunately, for now the terminal is the only way (you can use VS Code's integrated terminal so you don't have to leave the editor).
0.265586
false
1
6,225
2019-08-01 19:24:54.457
feeding annotations as ground truth along with the images to the model
I am working on an object detection model. I have annotated images whose values are stored in a data frame with columns (filename,x,y,w,h, class). I have my images inside /drive/mydrive/images/ directory. I have saved the data frame into a CSV file in the same directory. So, now I have annotations in a CSV file and images in the images/ directory. I want to feed this CSV file as the ground truth along with the image so that when the bounding boxes are recognized by the model and it learns contents of the bounding box. How do I feed this CSV file with the images to the model so that I can train my model to detect and later on use the same to predict bounding boxes of similar images? I have no idea how to proceed. I do not get an error. I just want to know how to feed the images with bounding boxes so that the network can learn those bounding boxes.
The bounding boxes have to be fed to the loss function: design a custom loss function, preprocess the bounding boxes into the target format the network expects, and use them as the ground truth during backpropagation.
0
false
1
6,226
2019-08-02 14:20:48.933
Detecting which words are the same between two pieces of text
I need some python advice to implement an algorithm. What I need is to detect which words from text 1 are in text 2: Text 1: "Mary had a dog. The dog's name was Ethan. He used to run down the meadow, enjoying the flower's scent." Text 2: "Mary had a cat. The cat's name was Coco. He used to run down the street, enjoying the blue sky." I'm thinking I could use some pandas datatype to check repetitions, but I'm not sure. Any ideas on how to implement this would be very helpful. Thank you very much in advance.
Since you do not show any work of your own, I'll just give an overall algorithm. First, split each text into its words. This can be done in several ways. You could remove any punctuation then split on spaces. You need to decide if an apostrophe as in dog's is part of the word--you probably want to leave apostrophes in. But remove periods, commas, and so forth. Second, place the words for each text into a set. Third, use the built-in set operations to find which words are in both sets. This will answer your actual question. If you want a different question that involves the counts or positions of the words, you should make that clear.
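A sketch of that three-step algorithm on the texts from the question (keeping apostrophes as part of words, as suggested above):

```python
import re

text1 = ("Mary had a dog. The dog's name was Ethan. He used to run down "
         "the meadow, enjoying the flower's scent.")
text2 = ("Mary had a cat. The cat's name was Coco. He used to run down "
         "the street, enjoying the blue sky.")

def words(text):
    # Keep letters and apostrophes, lowercase, split on everything else.
    return set(re.findall(r"[a-z']+", text.lower()))

# Built-in set intersection gives the words present in both texts.
common = words(text1) & words(text2)
print(sorted(common))
```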
0
false
2
6,227
2019-08-02 14:20:48.933
Detecting which words are the same between two pieces of text
I need some python advice to implement an algorithm. What I need is to detect which words from text 1 are in text 2: Text 1: "Mary had a dog. The dog's name was Ethan. He used to run down the meadow, enjoying the flower's scent." Text 2: "Mary had a cat. The cat's name was Coco. He used to run down the street, enjoying the blue sky." I'm thinking I could use some pandas datatype to check repetitions, but I'm not sure. Any ideas on how to implement this would be very helpful. Thank you very much in advance.
You can use a dictionary (or set) to first store the words from the first text, and then simply look each word up while iterating over the second text. This takes some extra space; alternatively, use regular expressions to extract the words cleanly before comparing.
0
false
2
6,227
2019-08-03 16:17:51.303
I can not get pigments highlighting for Python to work in my Sphinx documentation
I've tried adding highlight_language and pygments_style in the config.py and also tried various ways I found online inside the .rst file. Can anyone offer any advice on how to get the syntax highlighting working?
Sorry, it turns out that program arguments (which I was using as my test) simply aren't highlighted.
0
false
1
6,228
2019-08-04 03:43:05.637
Install & run an extra APK file with Kivy
I am currently developing mobile applications in Kivy. I would like to create an app to aid in the development process. This app would download an APK file from a network location and install/run it. I know how to download files of course. How can I programmatically install and run an Android APK file in Kivy/Android/Python3?
Look up how you would do it in Java, then you should be able to do it from Kivy using Pyjnius.
0
false
1
6,229
2019-08-04 09:57:01.713
Java code to convert between UTF8 and UTF16 offsets (Java string offsets to/from Python 3 string offsets)
Given a Java string and an offset into that String, what is the correct way of calculating the offset of that same location into an UTF8 string? More specifically, given the offset of a valid codepoint in the Java string, how can one map that offset to a new offset of that codepoint in a Python 3 string? And vice versa? Is there any library method which already provides the mapping between Java String offsets and Python 3 string offsets?
No, there cannot be. UTF-16 uses a varying number of code units per codepoint and so does UTF-8, so the indices depend entirely on the codepoints in the string. You have to scan the string and count. There are relationships between the encodings, though: a codepoint takes two UTF-16 code units if and only if it takes four UTF-8 code units. So an algorithm can tally UTF-8 code units while scanning UTF-16 code units: 4 for a high surrogate, 0 for a low surrogate, 3 for U+0800–U+FFFF, 2 for U+0080–U+07FF and 1 for ASCII (U+0000–U+007F).
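A short sketch of the scan-and-count idea, written here by round-tripping through the encodings (this assumes, as the question states, that the offset lands on a codepoint boundary — a prefix ending mid-surrogate would raise a decode error). Note that `len(prefix)` along the way would give the Python 3 string offset, since Python 3 indexes by codepoint:

```python
def utf16_offset_to_utf8(s: str, utf16_offset: int) -> int:
    """Map an offset counted in UTF-16 code units (Java String
    indexing) to the offset of the same codepoint in UTF-8 bytes."""
    utf16_bytes = s.encode("utf-16-le")          # 2 bytes per code unit
    prefix = utf16_bytes[:2 * utf16_offset].decode("utf-16-le")
    return len(prefix.encode("utf-8"))

s = "a\U0001F600b"   # 'a', emoji (2 UTF-16 units / 4 UTF-8 bytes), 'b'
print(utf16_offset_to_utf8(s, 0))  # 0
print(utf16_offset_to_utf8(s, 1))  # 1
print(utf16_offset_to_utf8(s, 3))  # 5  ('b' starts after 1 + 4 bytes)
```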
0
false
1
6,230
2019-08-04 13:24:08.390
Why converting dictionaries to lists only returns keys?
I am wondering why when I use list(dictionary) it only returns keys and not their definitions into a list? For example, I import a glossary with terms and definitions into a dictionary using CSV reader, then use the built in list() function to convert the dictionary to a list, and it only returns keys in the list. It's not really an issue as it actually allows my program to work well, was just wondering is that just how it is supposed to behave or? Many thanks for any help.
In short: in essence it works that way because it was designed that way. It makes sense, however, once we take into account that x in some_dict performs a membership check on the dictionary keys. Frequently Python code iterates over a collection without knowing the type of the collection it iterates over: it can be a list, tuple, set, dictionary, range object, etc. The question is: do we see a dictionary as a collection, and if yes, a collection of what? If we want to make it a collection, there are basically three logical answers: we can see it as a collection of the keys, of the values, or of key-value pairs. Keys and key-value pairs are especially popular; C#, for example, sees a dictionary as a collection of KeyValuePair<TK, TV>s. Python provides the .values() and .items() methods to iterate over the values and the key-value pairs. Dictionaries are mainly designed to perform a fast lookup for a key and retrieve the corresponding value. Therefore some_key in some_dict is a sensible query, as is (some_key, some_value) in some_dict, since the latter could check that the key is in the dictionary and then check that it maps to some_value. The latter is however less flexible, since often we are simply interested in whether the dictionary contains a certain key, not in the corresponding value. Furthermore, we cannot support both use cases concurrently: if the dictionary contained 2-tuples as keys, then (1, 2) in some_dict would be ambiguous — does it mean that key 1 maps to value 2, or that (1, 2) is itself a key? Since the designers of Python decided to define the membership check on dictionaries over the keys, it makes more sense to make a dictionary an iterable over its keys. Indeed, one usually expects that if x in some_iterable holds, then x in list(some_iterable) should hold as well.
If the iterable of a dictionary returned 2-tuples of key-value pairs (like C# does), then a list of those 2-tuples would not be in harmony with the membership check on the dictionary itself: 2 in some_dict could hold while 2 in list(some_dict) failed.
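A tiny illustration of these iteration and membership rules (standard Python behaviour):

```python
d = {"a": 1, "b": 2}

# Iterating (or listing) a dict yields its keys...
print(list(d))            # ['a', 'b']
# ...matching how membership works:
print("a" in d)           # True
print(1 in d)             # False: 1 is a value, not a key

# Explicit views for the other shapes:
print(list(d.values()))   # [1, 2]
print(list(d.items()))    # [('a', 1), ('b', 2)]
```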
0
false
1
6,231
2019-08-04 13:31:10.467
How do I bulk download images (70k) from urls with a restriction on the simultaneous downloads?
I'm a bit clueless. I have a csv file with these columns: name - picture url I would like to bulk download the 70k images into a folder, rename the images with the name in the first column and number them if there is more than one per name. Some are jpegs some are pngs. I'm guessing I need to use pandas to get the data from the csv but I don't know how to make the downloading/renaming part without starting all the downloads at the same time, which will for sure crash my computer (It did, I wasn't even mad). Thanks in advance for any light you can shed on this.
Try downloading in batches of, say, 500 images, then sleep for a second or so and loop. It's quite time-consuming, but a sure-fire method. For a coding reference you can explore packages like urllib (for downloading), and as soon as you download a file use os.rename() to change its name. As you already know, use pandas for the CSV file.
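A sketch of the batching loop. `download` stands in for a real urllib/requests call that saves the file under the given name; it's injected here so the batching logic itself is testable offline:

```python
import time

def download_in_batches(items, download, batch_size=500, pause_seconds=1.0):
    """Process (name, url) pairs in chunks, pausing between chunks so
    all 70k downloads never run at once."""
    for start in range(0, len(items), batch_size):
        for name, url in items[start:start + batch_size]:
            download(name, url)
        time.sleep(pause_seconds)

# Demo with a fake downloader and hypothetical URLs.
saved = []
items = [(f"img_{i}", f"http://example.com/{i}.jpg") for i in range(5)]
download_in_batches(items, lambda name, url: saved.append(name),
                    batch_size=2, pause_seconds=0.0)
print(saved)  # ['img_0', 'img_1', 'img_2', 'img_3', 'img_4']
```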
0.201295
false
1
6,232
2019-08-05 03:07:24.777
Standard Deviation of every pixel in an image in Python
I have an image stored in a 2D array called data. I know how to calculate the standard deviation of the entire array using numpy that outputs one number quantifying how much the data is spread. However, how can I made a standard deviation map (of the same size as my image array) and each element in this array is the standard deviation of the corresponding pixel in the image array (i.e, data).
Use slicing: given images[num, width, height], you can calculate the standard deviation of a single image with images[n].std(), or of a single pixel across all images with images[:, x, y].std().
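A small illustration (numpy assumed available): reducing over axis 0 computes that per-pixel standard deviation for every pixel at once, giving a map the same shape as one image:

```python
import numpy as np

# Stack of 4 images, each 2x3: axis 0 indexes the image number.
images = np.arange(24, dtype=float).reshape(4, 2, 3)

# Per-pixel standard deviation across the stack.
std_map = images.std(axis=0)
print(std_map.shape)  # (2, 3)

# One pixel across all images matches the corresponding map entry.
print(np.isclose(images[:, 0, 0].std(), std_map[0, 0]))  # True
```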
1.2
true
1
6,233
2019-08-05 06:28:35.310
Convert each row in a PySpark DataFrame to a file in s3
I'm using PySpark and I need to convert each row in a DataFrame to a JSON file (in s3), preferably naming the file using the value of a selected column. Couldn't find how to do that. Any help will be very appreciated.
I don't think we can directly store each row as its own JSON file. Instead, we can iterate over each partition of the DataFrame and connect to S3 at the partition level (using an AWS S3 library such as boto3). Then, within each partition, we iterate over the rows, convert each row into a JSON document and push it to S3.
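A sketch of the per-partition function you would pass to `df.rdd.foreachPartition`. Here `put_object` stands in for a boto3 S3 client call (roughly `s3.put_object(Bucket=..., Key=..., Body=...)`) and is injected so the logic is testable without Spark or AWS; the bucket name and the "name" column used for the file name are assumptions:

```python
import json

def write_partition(rows, put_object, bucket="my-bucket"):
    """Write each row of a partition to S3 as its own JSON file,
    named after the row's 'name' column."""
    for row in rows:
        record = dict(row)
        key = f"{record['name']}.json"
        put_object(bucket, key, json.dumps(record))

# Demo with a fake S3 client.
written = {}
rows = [{"name": "r1", "value": 10}, {"name": "r2", "value": 20}]
write_partition(rows, lambda bucket, key, body: written.setdefault(key, body))
print(sorted(written))  # ['r1.json', 'r2.json']
```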
0
false
1
6,234
2019-08-06 04:52:59.987
what do hidden layers mean in a neural network?
in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: An image of a handwritten digit will have 784 pixels. Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) each node branches out and these branches are the weights. My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. That number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. Thank you!
AFAIK, for this digit recognition case, one way to think about it is that each hidden layer represents a level of abstraction. For now, imagine the neural network for digit recognition has only 3 layers: 1 input layer, 1 hidden layer and 1 output layer. Let's take a look at a number. To recognise that it is a number, we can break the picture of the number into a few more abstract concepts such as lines, circles and arcs. If we want to recognise a 6, we can first recognise the more abstract concepts that exist in the picture: for 6 it would be an arc and a circle in this example. For 8 it would be 2 circles. For 1 it would be a line. It is the same for a neural network. We can think of layer 1 as pixels, layer 2 as recognising the abstract concepts we talked about earlier, such as lines, circles and arcs, and finally in layer 3 we determine which number it is. Here we can see that the input goes through a series of layers from the least abstract to the most abstract (pixels -> lines, circles, arcs -> number). In this example we only have 1 hidden layer, but in real implementations it is often better to have more than 1 hidden layer, depending on your interpretation of the neural network. Sometimes we don't even have to think about what each layer represents and let the training figure it out for us. That is the purpose of the training anyway.
0
false
3
6,235
2019-08-06 04:52:59.987
what do hidden layers mean in a neural network?
in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: An image of a handwritten digit will have 784 pixels. Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) each node branches out and these branches are the weights. My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. That number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. Thank you!
A hidden layer in a neural network may be understood as a layer that is neither an input nor an output, but instead is an intermediate step in the network's computation. In your MNIST case, the network's state in the hidden layer is a processed version of the inputs, a reduction from full digits to abstract information about those digits. This idea extends to all other hidden layer cases you'll encounter in machine learning -- a second hidden layer is an even more abstract version of the input data, a recurrent neural network's hidden layer is an interpretation of the inputs that happens to collect information over time, or the hidden state in a convolutional neural network is an interpreted version of the input with certain features isolated through the process of convolution. To reiterate, a hidden layer is an intermediate step in your neural network's process. The information in that layer is an abstraction of the input, and holds information required to solve the problem at the output.
0
false
3
6,235
2019-08-06 04:52:59.987
what do hidden layers mean in a neural network?
in a standard neural network, I'm trying to understand, intuitively, what the values of a hidden layer mean in the model. I understand the calculation steps, but I still dont know how to think about the hidden layers and how interpret the results (of the hidden layer) So for example, given the standard MNIST datset that is used to train and predict handwritten digits between 0 to 9, a model would look like this: An image of a handwritten digit will have 784 pixels. Since there are 784 pixels, there would be 784 input nodes and the value of each node is the pixel intensity(0-255) each node branches out and these branches are the weights. My next layer is my hidden layer, and the value of a given node in the hidden layer is the weighted sum of my input nodes (pixels*weights). Whatever value I get, I squash it with a sigmoid function and I get a value between 0 and 1. That number that I get from the sigmoid. What does it represent exactly and why is it relevant? My understanding is that if I want to build more hidden layers, I'll be using the values of my initial hidden layer, but at this point, i'm stuck as to what the values of the first hidden layer mean exactly. Thank you!
Consider a very basic example of the AND, OR, NOT and XOR functions. You may already know that a single neuron is only suitable when the problem is linearly separable. In this case, the AND, OR and NOT functions are linearly separable, so they can easily be handled by a single neuron. But consider the XOR function. It is not linearly separable, so a single neuron will not be able to predict the value of the XOR function. Now, the XOR function is a combination of AND, OR and NOT. The equation below is the relation between them: a XOR b = (a AND (NOT b)) OR ((NOT a) AND b) So, for XOR, we can use a network which contains three layers. The first layer will act as the NOT function, the second layer will act as an AND of the output of the first layer, and finally the output layer will act as an OR of the 2nd hidden layer. Note: this is just an example to explain why it is needed; XOR can be implemented in various other combinations of neurons.
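To make the layered decomposition above concrete, here is a minimal sketch (weights set by hand rather than trained, so it is an illustration, not a training example) of a two-layer network of step-function neurons that computes XOR via the AND/OR/NOT identity:

```python
def step(x):
    # Heaviside step activation: the neuron fires when the weighted sum crosses 0
    return 1 if x > 0 else 0

def NOT(a):
    return step(0.5 - a)          # single neuron: weight -1, bias 0.5

def AND(a, b):
    return step(a + b - 1.5)      # fires only when both inputs are 1

def OR(a, b):
    return step(a + b - 0.5)      # fires when at least one input is 1

def XOR(a, b):
    # a XOR b = (a AND (NOT b)) OR ((NOT a) AND b)
    hidden = (AND(a, NOT(b)), AND(NOT(a), b))  # the "hidden layer"
    return OR(*hidden)

print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

Each hidden neuron here has a human-readable meaning (AND of one input with the negation of the other); in a trained network the hidden units play the same structural role, just with learned rather than hand-picked weights.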
0
false
3
6,235
2019-08-06 09:58:13.633
Showing text coordinate from png on raw .pdf file
I am doing OCR on a raw PDF file, wherein I convert it into PNG images and run OCR on those. My objective is to extract the coordinates of a certain keyword from the PNG and show those coordinates on the actual raw PDF. I have already tried showing those coordinates on the PNG images using OpenCV, but I am not able to show those coordinates on the actual raw PDF, since the coordinate systems of the two formats are different. Can anyone please help me with how to draw a bounding box on the actual raw PDF based on the coordinates generated from the PNG images?
All you need to do is map the coordinates of the OCR token (which would be given for the image) to that of the pdf page. For instance, image_dimensions = [1800, 2400] # width, height pdf_page_dimension = [595, 841] # these are coordinates of the specific page of the pdf Assuming, on OCRing the image, a word has coordinates = [400, 700, 450, 720] , the same can be rendered on the pdf by multiplying them with scale on each axis x_scale = pdf_page_dimension[0] / image_dimensions[0] y_scale = pdf_page_dimension[1] / image_dimensions[1] scaled_coordinates = [400*x_scale, 700*y_scale, 450*x_scale, 720*y_scale] Pdf page dimensions can be obtained from any of the packages: poppler, pdfparser, pdfminer, pdfplumber
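The scaling described above can be wrapped in a small helper; this is a sketch assuming OCR boxes come as [x0, y0, x1, y1] in image pixels (the dimension numbers are the illustrative ones from the answer, not values read from a real PDF):

```python
def scale_box(box, image_dims, pdf_dims):
    """Map an OCR bounding box from image pixels to PDF page units."""
    x_scale = pdf_dims[0] / image_dims[0]
    y_scale = pdf_dims[1] / image_dims[1]
    x0, y0, x1, y1 = box
    return [x0 * x_scale, y0 * y_scale, x1 * x_scale, y1 * y_scale]

image_dimensions = (1800, 2400)   # width, height of the rendered PNG
pdf_page_dimension = (595, 841)   # width, height of the PDF page

print(scale_box([400, 700, 450, 720], image_dimensions, pdf_page_dimension))
```

One caveat worth checking in practice: PDF coordinates usually have their origin at the bottom-left while most image libraries use the top-left, so you may also need to flip the y axis (y_pdf = page_height - y_scaled) depending on the library you render with.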
-0.386912
false
1
6,236
2019-08-06 13:48:46.087
How to evaluate HDBSCAN text clusters?
I'm currently trying to use HDBSCAN to cluster movie data. The goal is to cluster similar movies together (based on movie info like keywords, genres, actor names, etc.) and then apply LDA to each cluster and get the representative topics. However, I'm having a hard time evaluating the results (apart from visual analysis, which is not great as the data grows). With LDA, although it's hard to evaluate, I've been using the coherence measure. However, does anyone have any idea how to evaluate the clusters made by HDBSCAN? I haven't been able to find much info on it, so if anyone has any idea, I'd very much appreciate it!
It's the same problem everywhere in unsupervised learning. It is unsupervised; you are trying to discover something new and interesting. There is no way for the computer to decide whether something is actually interesting or new. It can only decide trivial cases, where the prior knowledge is already coded in machine-processable form, and you can compute some heuristic values as a proxy for interestingness. But such measures (including density-based measures such as DBCV) are actually in no way better at judging this than the clustering algorithm itself was at choosing the "best" solution. In the end, there is no way around manually looking at the data and taking the next steps: try to put into use what you learned from the data. Presumably you are not an ivory-tower academic doing this just to make up yet another method, so use the result; don't fake using it.
1.2
true
1
6,237
2019-08-06 14:54:30.430
Best Practice for Batch Processing with RabbitMQ
I'm looking for the best way to perform ETL using Python. I have a channel in RabbitMQ which sends events (possibly even one every second). I want to process every 1000 of them. The main problem is that the RabbitMQ interface (I'm using pika) raises a callback upon every message. I looked at the Celery framework; however, the batch feature was deprecated in version 3. What is the best way to do it? I am thinking about saving my events in a list, and when it reaches 1000, copying it to another list and performing my processing. However, how do I make it thread-safe? I don't want to lose events, and I'm afraid of losing events while synchronising the list. It sounds like a very simple use-case; however, I didn't find any good best practice for it.
First of all, you should not "batch" messages from RabbitMQ unless you really have to. The most efficient way to work with messaging is to process each message independently. If you need to combine messages in a batch, I would use a separate data store to temporarily hold the messages, and then process them when they reach a certain condition. Each time you add an item to the batch, you check that condition (for example, you reached 1000 messages) and trigger the processing of the batch. This is better than keeping a list in memory, because if your service dies, the messages will still be persisted in the database. Note: if you have a single processor per queue, this can work without any synchronization mechanism. If you have multiple processors, you will need to implement some sort of locking mechanism.
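For completeness, a minimal sketch of the thread-safe in-memory variant the question asks about (class and names are hypothetical; the pika consumer itself is not shown, and as noted above a persistent store is safer if you cannot afford to lose messages):

```python
import threading

class BatchCollector:
    """Accumulates items and fires a callback once batch_size is reached."""

    def __init__(self, batch_size, on_batch):
        self.batch_size = batch_size
        self.on_batch = on_batch
        self._lock = threading.Lock()
        self._items = []

    def add(self, item):
        batch = None
        with self._lock:                              # guard the shared list
            self._items.append(item)
            if len(self._items) >= self.batch_size:
                batch, self._items = self._items, []  # swap out the full batch
        if batch is not None:
            self.on_batch(batch)                      # process outside the lock

batches = []
collector = BatchCollector(3, batches.append)
for i in range(7):                # e.g. called once per pika message callback
    collector.add(i)
print(batches)  # [[0, 1, 2], [3, 4, 5]] -- item 6 waits for a full batch
```

Swapping the full list out under the lock and processing it after releasing the lock keeps the callback path short, so message handling is never blocked by the batch processing itself.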
0.201295
false
1
6,238
2019-08-07 07:44:51.857
Pycharm is not letting me run my script 'test_splitter.py' , but instead 'Nosetests in test_splitter.py'?
I see many posts on 'how to run nosetests', but none on how to make PyCharm let you run a script without nosetests. And yet, I seem to only be able to run or debug 'Nosetests in test_splitter.py' and not just 'test_splitter.py'! I'm relatively new to PyCharm, and despite going through the documentation, I don't quite understand what nosetests are about and whether they would be preferable for testing my script. But I get an error: ModuleNotFoundError: No module named 'nose' Process finished with exit code 1 Empty suite I don't have administrative access, so I cannot download nosetests, if anyone would be suggesting it. I would just like to run my script! Other scripts run just fine without nosetests!
I found the solution: I can run without nosetests from the 'Run' dropdown options in the toolbar, or Alt+Shift+F10.
0
false
1
6,239
2019-08-07 11:43:33.013
How do I display a large black rectangle with a moveable transparent circle in pygame?
That question wasn't very clear. Essentially, I am trying to make a multi-player Pac-Man game whereby the players (when playing as ghosts) can only see a certain radius around them. My best guess for going about this is to have a rectangle which covers the whole maze and then somehow cut out a circle which will be centred on the ghost's rect. However, I am not sure how to do this last part in pygame. I'd just like to add if it's even possible in pygame, it would be ideal for the circle to be pixelated and not a smooth circle, but this is not essential. Any suggestions? Cheers.
The best I can think of is kind of a hack. Build an image outside pygame that is mostly black with a circle of zero-alpha in the center, then blit that object on top of your ghost character to only see a circle around it. I hope there is a better way but I do not know what that is.
0.201295
false
1
6,240
2019-08-07 16:34:16.810
How do I convert scanned PDF into searchable PDF in Python (Mac)? e.g. OCRMYPDF module
I am writing a program in Python that can read a PDF document, extract text from the document and rename the document using the extracted text. Initially, the scanned PDF document is not searchable. I would like to convert the PDF into a searchable PDF in Python instead of using Google Docs or the Cisdem PDF converter. I have read about the ocrmypdf module, which can be used to solve this. However, I do not know how to write the code due to my limited knowledge. I expect the output to be the scanned PDF converted into a searchable PDF.
I suggest working through the tutorial; it will maybe take you some time, but it should be worth it. I'm not exactly sure what you want, but in my project the settings below work fine in most cases (note that the tesseract binary must be installed on your system for ocrmypdf to use it; it is not a Python import): import ocrmypdf def ocr(file_path, save_path): ocrmypdf.ocr(file_path, save_path, rotate_pages=True, remove_background=True, language="en", deskew=True, force_ocr=True)
0.545705
false
2
6,241
2019-08-07 16:34:16.810
How do I convert scanned PDF into searchable PDF in Python (Mac)? e.g. OCRMYPDF module
I am writing a program in python that can read pdf document, extract text from the document and rename the document using extracted text. At first, the scanned pdf document is not searchable. I would like to convert the pdf into searchable pdf on Python instead of using Google doc, Cisdem pdf converter. I have read about ocrmypdf module which can used to solve this. However, I do not know how to write the code due to my limited knowledge. I expect the output to convert the scanned pdf into searchable pdf.
This is best done in two steps. First, create the Python OCR function: import ocrmypdf def ocr(file_path, save_path): ocrmypdf.ocr(file_path, save_path) Then call and use the function: ocr("input.pdf","output.pdf") If you have any questions, please ask.
0
false
2
6,241
2019-08-07 16:55:55.510
How to direct the same Amazon S3 events into several different SQS queues?
I'm working with AWS Lambda functions (in Python), that process new files that appear in the same Amazon S3 bucket and folders. When new file appears in s3:/folder1/folderA, B, C, an event s3:ObjectCreated:* is generated and it goes into sqs1, then processed by Lambda1 (and then deleted from sqs1 after successful processing). I need the same event related to the same new file that appears in s3:/folder1/folderA (but not folderB, or C) to go also into sqs2, to be processed by Lambda2. Lambda1 modifies that file and saves it somewhere, Lambda2 gets that file into DB, for example. But AWS docs says that: Notification configurations that use Filter cannot define filtering rules with overlapping prefixes, overlapping suffixes, or prefix and suffix overlapping. So question is how to bypass this limitation? Are there any known recommended or standard solutions?
Instead of setting up the S3 object notification as (S3 -> SQS), you should set up the notification as (S3 -> Lambda). In your Lambda function, you parse the S3 event and then write your own logic to send whatever content about the S3 event to whatever SQS queues you like.
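A sketch of the fan-out logic such a Lambda could use (queue names and prefixes are hypothetical; the actual send would be a boto3 sqs.send_message call, omitted here):

```python
def route_s3_event(event, prefix_to_queues):
    """Return (key, queue) pairs: each object key is routed to every queue
    whose configured prefix matches, so overlapping prefixes are fine."""
    routes = []
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        for prefix, queue in prefix_to_queues:
            if key.startswith(prefix):
                routes.append((key, queue))
    return routes

rules = [("folder1/", "sqs1"),           # everything under folder1 -> Lambda1's queue
         ("folder1/folderA/", "sqs2")]   # folderA additionally -> Lambda2's queue

event = {"Records": [{"s3": {"object": {"key": "folder1/folderA/new_file.csv"}}}]}
print(route_s3_event(event, rules))
# [('folder1/folderA/new_file.csv', 'sqs1'), ('folder1/folderA/new_file.csv', 'sqs2')]
```

Because the matching happens in your own code rather than in the S3 notification configuration, the "no overlapping prefixes" restriction no longer applies.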
0.201295
false
1
6,242
2019-08-08 08:08:44.877
Triggering actions in Python Flask via cron or similar
I'm needing to trigger an action at a particular date/time either in python or by another service. Let's say I have built an application that stores the expiry dates of memberships in a database. I'm needing to trigger a number of actions when the member expires (for example, changing the status of the membership and sending an expiry email to the member), which is fine - I can deal with the actions. However, what I am having trouble with is how do I get these actions to trigger when the expiry date is reached? Are there any concepts or best practices that I should stick to when doing this? Currently, I've achieved this by executing a Google Cloud Function every day (via Google Cloud Scheduler) which checks if the membership expiry is equal to today, and completes the action if it is. I feel like this solution is quite 'hacky'.
I'm not sure which database you are using, but I'm inferring you have a table that holds the "membership" details of all your users, and each day you run a cron job that queries this table to see which rows have expiration_date = today. Is that correct? I believe that's an efficient way to do it (it will be faster if you have few columns on that table).
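The daily check itself is cheap; here is a sketch of what the scheduled job might do, with a hypothetical in-memory stand-in for the membership table (in practice this would be a SQL query filtering on the expiry column):

```python
import datetime

def expiring_members(memberships, today):
    """Return ids of members whose membership expires today."""
    return [m["id"] for m in memberships if m["expiry"] == today]

today = datetime.date(2019, 8, 7)
memberships = [
    {"id": 1, "expiry": datetime.date(2019, 8, 7)},
    {"id": 2, "expiry": datetime.date(2019, 9, 1)},
]

for member_id in expiring_members(memberships, today):
    # here you would flip the membership status and queue the expiry email
    print(f"expiring member: {member_id}")  # expiring member: 1
```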
1.2
true
1
6,243
2019-08-09 10:59:47.240
How to deploy and run python scripts inside nodejs application?
I'm working with a MEAN stack application that passes a file to a Python script; this script does some tasks and then returns some results. The question is: how do I install the required Python packages when I deploy it? Thanks! I've tried running Python code inside the Node.js application using python-shell.
Place the Python script along with a requirements.txt (which has your Python dependencies) in your Node.js project directory. During deployment, call pip install -r requirements.txt and it should install the packages for you. You can call the Python script from Node.js just like any shell command, using the built-in child_process module or python-shell.
1.2
true
1
6,244
2019-08-09 14:23:00.953
Trying to extract Certificate information in Python
I am very new to python and cannot seem to figure out how to accomplish this task. I want to connect to a website and extract the certificate information such as issuer and expiration dates. I have looked all over, tried all kinds of steps but because I am new I am getting lost in the socket, wrapper etc. To make matters worse, I am in a proxy environment and it seems to really complicate things. Does anyone know how I could connect and extract the information while behind the proxy?
The Python ssl library doesn't deal with proxies. To connect through an HTTP proxy you first need to establish a tunnel to the target host (an HTTP CONNECT request to the proxy) and then perform the TLS handshake over that tunnel before you can read the certificate.
0
false
1
6,245
2019-08-09 22:16:56.867
Python - Using RegEx to extract only the String in between pattern
Hoping somebody can point me in the right direction. I am trying to parse a log file to figure out how many users are logging into the system on a per-day basis. The log file gets generated in the pattern listed below. "<"Commit ts="20141001114139" client="ABCREX/John Doe"> "8764","ABCREX/John Doe","00.000.0.000","User 'ABCREX/John Doe' successfully logged in from address '00.000.0.000'." "<"/Commit> "<"Commit ts="20141001114139" client="ABCREX/John Doe"> "8764","ABCREX/Jerry Doe","00.000.0.000","User 'ABCREX/Jerry Doe' successfully logged in from address '00.000.0.000'." "<"/Commit> "<"Commit ts="20141001114139" client="ABCREX/John Doe"> "8764","ABCREX/Jane Doe","00.000.0.000","User 'ABCREX/Jane Doe' successfully logged in from address '00.000.0.000'." "<"/Commit> I am trying to capture the username from the above lines and load it into a DB, so I am interested only in the values John Doe, Jerry Doe, Jane Doe. But when I do a pattern match using REGEX it returns the below: client="ABCREX/John Doe"> Then, using the code I am employing, I have to apply multiple replace calls to remove "client", "ABCREX/", ">", etc. I currently have code which is working, but I feel it's highly inefficient and resource-consuming. I am performing a split on tags, then parsing and reading line by line. '''extract the user login Name''' UserLoginName = str(re.search('client=(.*)>',items).group()).replace('ABCREX/', '').replace('client="','').replace('">', '') print(UserLoginName) Is there any way I can tell the REGEX to grab only the string found within the pattern and not include the pattern in the results as well?
pattern = r'User\s\'ABCREX/(.*?)\'' list_of_usernames = re.findall(pattern, output) That would match the pattern "User 'ABCREX/Jerry Doe'" and pull out the username and add it to a list. Is that helpful? I'm new here too so let me know if there is more I can help answer.
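To make the point explicit: the parentheses in the pattern form a capture group, and re.findall returns only the captured text, so no replace() calls are needed. A runnable sketch on a line shaped like the log above:

```python
import re

log_line = ('"8764","ABCREX/John Doe","00.000.0.000",'
            '"User \'ABCREX/John Doe\' successfully logged in from address \'00.000.0.000\'."')

# (.*?) is a non-greedy capture group; findall returns just the group's text
usernames = re.findall(r"User 'ABCREX/(.*?)'", log_line)
print(usernames)  # ['John Doe']

# the same idea works on the client="..." attribute the original code targeted
attr = 'client="ABCREX/John Doe">'
print(re.findall(r'client="ABCREX/(.*?)">', attr))  # ['John Doe']
```

Running findall once over the whole file contents also avoids the line-by-line split-and-parse loop described in the question.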
0
false
1
6,246
2019-08-10 05:15:41.640
SelectField to create dropdown menu
I have a database with some tables in it. I now want my website to have a dropdown whose choices are the names of people from a column of a table in my database, and every time I click on a name it should show me the corresponding ID, also from a column of this table. How can I do that? Or maybe a guide on where I should look for an answer! Many thanks!
You have to do that in Python (if that's what you are using in the backend). You can create functions in Python that get the list of names from the corresponding table, which you can then pass to your front-end code. Similarly, you can set up functions where you get the selected name from the HTML, pass it to Python, and do all sorts of database queries. If all this sounds confusing to you, I suggest you take a full-stack course on Udemy, YouTube, etc., because it can't really be explained in one simple answer. I hope this was helpful. Feel free to ask me more.
0
false
1
6,247
2019-08-10 21:15:17.527
Using the original python packages instead of the jython packages
I am trying to create a hybrid application with python back-end and java GUI and for that purpose I am using jython to access the data from the GUI. I wrote code using a standard Python 3.7.4 virtual environment and it worked "perfectly". But when I try to run the same code on jython it doesn't work so it seems that in jython some packages like threading are overwritten with java functionality. My question is how can I use the threading package for example from python but in jython environment? Here is the error: Exception in thread Thread-1:Traceback (most recent call last): File "/home/dexxrey/jython2.7.0/Lib/threading.py", line 222, in _Thread__bootstrap self.run() self._target(*self._args, **self._kwargs)
Since you have already decoupled the application, i.e. using Python for the backend and Java for the GUI, why not stick to that and build a communication layer between the backend and frontend? This layer could be REST or any messaging framework.
0.201295
false
1
6,248
2019-08-11 22:56:25.070
How to get the rand() function in excel to rerun when accessing an excel file through python
I am trying to access an Excel file using Python for my physics class. I have to generate data that follows a function but has variance, so it doesn't line up perfectly with the function (simulating the error experienced in experiments). I did this by using the rand() function. We need to generate a lot of data sets so that we can average them together and eliminate the error/noise created by the rand() function. I tried to do this by loading the Excel file and recording the data I need, but I can't figure out how to get the rand() function to rerun and create a new data set. In Excel it reruns when I change the value of any cell on the sheet, but I don't know how to do this when I'm accessing the file with Python. Can someone help me figure out how to do this? Thank you.
Excel formulas like RAND(), or any other formula, will only refresh when Excel is actually running and recalculating the worksheet. So, even though you may be access the data in an Excel workbook with Python, you won't be able to run Excel calculations that way. You will need to find a different approach.
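One such approach: generate the noisy data sets in Python itself, where each run naturally produces fresh noise and Excel is not needed at all. A sketch with a hypothetical target function standing in for the physics relation:

```python
import random

def noisy_runs(f, xs, sigma, n_runs, seed=None):
    """Simulate n_runs experiments: f(x) plus Gaussian noise of spread sigma."""
    rng = random.Random(seed)
    return [[f(x) + rng.gauss(0, sigma) for x in xs] for _ in range(n_runs)]

def average_runs(runs):
    """Average the runs pointwise to suppress the noise."""
    return [sum(col) / len(col) for col in zip(*runs)]

f = lambda x: 2 * x + 1            # hypothetical "true" relation
xs = [0, 1, 2, 3]
runs = noisy_runs(f, xs, sigma=0.5, n_runs=200, seed=42)
print(average_runs(runs))          # values close to [1, 3, 5, 7] as noise averages out
```

The averaged result can then be written back to the Excel file (e.g. with openpyxl or pandas) if the spreadsheet is still needed for the class write-up.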
1.2
true
1
6,249
2019-08-12 13:56:37.400
Jupyter notebook: need to run a cell even though close the tab
My notebook is located on a server, which means that the kernel will still run even though I close the notebook tab. I was thus wondering if it is possible to keep a cell running by itself after closing the window. As the notebook is located on a server, the kernel will not stop running... I tried to read previous questions but could not find an answer. Any idea on how to proceed? Thanks!
You can open a new file and write the outputs to it. I think that's the best that you can do.
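A minimal sketch of that approach: have the long-running cell append its results to a log file and flush after each write, so the tab can be closed and the file inspected later (function and file names are illustrative):

```python
def long_job(n_steps, log_path):
    with open(log_path, "w") as log:
        for step in range(n_steps):
            result = step * step          # stand-in for the real computation
            log.write(f"step {step}: {result}\n")
            log.flush()                   # make progress visible immediately

long_job(5, "progress.log")
with open("progress.log") as f:
    print(f.read().splitlines()[-1])      # step 4: 16
```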
0.201295
false
2
6,250
2019-08-12 13:56:37.400
Jupyter notebook: need to run a cell even though close the tab
My notebook is located on a server, which means that the kernel will still run even though I close the notebook tab. I was thus wondering if it was possible to let the cell running by itself while closing the window? As the notebook is located on a server the kernel will not stop running... I tried to read previous questions but could not find an answer. Any idea on how to proceed? Thanks!
If you run the cell before closing the tab it will continue to run once the tab has been closed. However, the output will be lost (anything using print functions to stdout or plots which display inline) unless it is written to file.
0
false
2
6,250
2019-08-12 16:37:23.493
Rasa NLU model too old
I have a problem. I am trying to use my model with Rasa core, but it gives me this error: rasa_nlu.model.UnsupportedModelError: The model version is to old to be loaded by this Rasa NLU instance. Either retrain the model, or run withan older version. Model version: 0.14.6 Instance version: 0.15.1 Does someone know which version I need to use then and how I can install that version?
I believe you trained this model on a previous version of Rasa NLU and then updated Rasa NLU to a new version (Rasa NLU is a dependency of Rasa Core, so changes were made in the requirements.txt file). If this is the case, there are two ways to fix it: 1. Recommended solution: if you have the data and parameters, train your NLU model again using the current dependencies (the ones you have running now), so you have a new model which is compatible with your current version of Rasa. 2. If you don't have the data or cannot retrain the model for some reason, then downgrade Rasa NLU to version 0.14.6. I'm not sure if your current Rasa Core is compatible with NLU 0.14.6, so you might also need to downgrade Rasa Core if you see errors. Good luck!
1.2
true
1
6,251
2019-08-12 18:34:13.630
how can i split a full name to first name and last name in python?
I'm a novice at Python programming and I'm trying to split a full name into a first name and a last name; can someone assist me with this? My example file is: Sarah Simpson I expect the output like this: Sarah,Simpson
name = "Thomas Winter" LastName = name.split()[1] (note the parentheses on the function call split). split() creates a list where each element is from your original string, delimited by whitespace. You can now grab the second element using name.split()[1] or the last element using name.split()[-1]
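A slightly more defensive variant for the question's file-driven case, handling middle names and single-word names (this is one convention for "last name", not the only one):

```python
def split_name(full_name):
    """Split 'First [Middle ...] Last' into (first, last)."""
    parts = full_name.split()
    if not parts:
        return "", ""
    first = parts[0]
    last = parts[-1] if len(parts) > 1 else ""
    return first, last

for name in ["Sarah Simpson", "Thomas J. Winter", "Prince"]:
    first, last = split_name(name)
    print(f"{first},{last}")
# Sarah,Simpson
# Thomas,Winter
# Prince,
```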
0
false
1
6,252
2019-08-12 19:59:30.143
pythonanwhere newbie: I don't see sqlite option
I see an option for MySQL and Postgres, and have read help messages for SQLite, but I don't see any way to use it or to install it. So it appears that it's available, or else there wouldn't be any help messages, but I can't find it. I can't do any 'sudo', so no 'apt install', so I don't know how to invoke and use it!
sqlite is already installed. You don't need to invoke anything to install it. Just configure your web app to use it.
0.386912
false
1
6,253
2019-08-12 21:44:29.637
Python/Django and services as classes
Are there any conventions on how to implement services in Django? Coming from a Java background, we create services for business logic and we "inject" them wherever we need them. Not sure if I'm using python/django the wrong way, but I need to connect to a 3rd party API, so I'm using an api_service.py file to do that. The question is, I want to define this service as a class, and in Java, I can inject this class wherever I need it and it acts more or less like a singleton. Is there something like this I can use with Django or should I build the service as a singleton and get the instance somewhere or even have just separate functions and no classes?
Adding to the answer given by bruno desthuilliers and TreantBG: there are certain questions that you can ask about the requirements. For example, one question could be: does the API being called change with different types of objects? If the API doesn't change, you will probably be okay keeping it as a method in some file or class. If it does change, such that you are calling API 1 for some scenarios, API 2 for others, and so on, you will likely be better off moving/abstracting this logic out to some class (from a code-organisation point of view). PS: Python allows you to be as flexible as you want when it comes to code organisation. It's really up to you to decide how you want to organise the code.
0
false
1
6,254
2019-08-14 01:33:36.937
Does it make sense to use a part of the dataset to train my model?
The dataset I have is a set of quotations that were presented to various customers in order to sell a commodity. Prices of commodities are sensitive and standardized on a daily basis, and therefore negotiations are pretty tricky around their prices. I'm trying to build a classification model that has to understand if a given quotation will be accepted or rejected by a customer. I made use of most classifiers I knew about, and XGBClassifier was performing the best with ~95% accuracy. Basically, when I fed it an unseen dataset it was able to perform well. I wanted to test how sensitive the model is to variation in prices. In order to do that, I synthetically recreated quotations with various prices; for example, if a quote was presented at $30, I presented the same quote at $5, $10, $15, $20, $25, $35, $40, $45, etc. I expected the classifier to give high probabilities of success as the prices were lower and low probabilities of success as the prices were higher, but this did not happen. Upon further investigation, I found out that some of the features were overshadowing the importance of price in the model and thus had to be dealt with. Even though I dealt with most features by either removing them or feature-engineering them to better represent them, I was still stuck with a few features that I just cannot remove (client-side requirements). When I checked the results, it turned out the model was sensitive to 30% of the test data and was showing promising results, but for the rest of the 70% it wasn't sensitive at all. This is when the idea struck my mind to feed only that segment of the training data where price sensitivity can be clearly captured, i.e. where the success of the quote is inversely related to the price being quoted. This created a loss of about 85% of the data; however, the relationship that I wanted the model to learn was being captured perfectly well.
This is going to be an incremental learning process for the model, so each time a new dataset comes, I'm thinking of first evaluating it for price sensitivity and then feeding in only that segment of the data for training which is price-sensitive. Having given some context to the problem, some of the questions I had were: Does it make sense to filter out the dataset for segments where the kind of relationship I'm looking for is being exhibited? After training the model on the smaller segment of the data and reducing the number of features from 21 to 8, the model accuracy went down to ~87%; however, it seems to have captured the price sensitivity bit perfectly. The way I evaluated price sensitivity is by taking the test dataset and artificially adding 10 rows for each quotation with varying prices to see how the success probability changes in the model. Is this a viable approach to such a problem?
To answer your first question, deleting the part of the dataset that doesn't work is not a good idea because then your model will overfit on the data that gives better numbers. This means that the accuracy will be higher, but when presented with something that is slightly different from the dataset, the probability of the network adapting is lower. To answer the second question, it seems like that's a good approach, but again I'd recommend keeping the full dataset.
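The sensitivity probe described in the question (several synthetic rows per quote at varying prices) can be factored into a small helper; the field names here are hypothetical, and the model.predict_proba step is left out since it depends on the trained classifier:

```python
def price_sweep(quote, prices, price_field="price"):
    """Clone a quotation at each candidate price, leaving other features fixed."""
    return [{**quote, price_field: p} for p in prices]

quote = {"customer": "A", "volume": 100, "price": 30}
probes = price_sweep(quote, [5, 10, 15, 20, 25, 30, 35, 40, 45])

print(len(probes), probes[0]["price"], probes[0]["volume"])  # 9 5 100
# feed `probes` to the model and check that the predicted success
# probability decreases monotonically as price increases
```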
0.386912
false
1
6,255
2019-08-14 15:13:27.587
should jedi be install in every python project environment?
I am using jedi, and more specifically deoplete-jedi in neovim, and I wonder if I should install it in every project as a dependency or if I can let jedi reside in the same Python environment as neovim uses (and set the setting to tell deoplete-jedi where to look). It seems wasteful to have to install it in every project, but then again I don't know how it would find my project environment from within the neovim environment either.
If by the word "project" you mean Python virtual environments, then yes, you have to install every program and every library that you use into every virtualenv separately: flake8, pytest, jedi, whatever. Python virtual environments are intended to protect one set of libraries from another, so that you can install different sets of libraries and even different versions of libraries. The price is that you have to duplicate programs/libraries that are used often. There is a way to connect a virtualenv to the globally installed packages, but IMO that brings more harm than good.
0.673066
false
1
6,256
2019-08-15 00:11:44.450
ModuleNotFoundError: no module named efficientnet.tfkeras
I attempted to do import segmentation_models as sm, but I got an error saying efficientnet was not found. So I then did pip install efficientnet and tried again. I now get ModuleNotFoundError: no module named efficientnet.tfkeras, even though Keras is installed, as I'm able to do from keras.models import * or anything else with Keras. How can I get rid of this error?
To install segmentation-models use the following command: pip install git+https://github.com/qubvel/segmentation_models
1.2
true
1
6,257
2019-08-15 03:25:14.913
Editing a python package
The question is really simple: I have a Python package installed using pip3 and I'd like to tweak it a little to perform some computations. I've read (and it seems logical) that editing installed modules directly is strongly discouraged. So, how can I do this once I've downloaded the whole project folder to my computer? Is there any way to, once I've edited this source code, install it under another name? How can I avoid mixing things up? Thanks!
You can install the package from its source code, instead of PyPi. Download the source code - do a git clone <package-git-url> of the package Instead of pip install <package>, install with pip install -e <package-directory-path> Change code in the source code, and it will be picked up automatically.
0
false
1
6,258
2019-08-15 04:06:41.070
Setting up keras and tensoflow to operate with AMD GPU
I am trying to set up Keras in order to run models using my GPU. I have a Radeon RX 580 and am running Windows 10. I realized that CUDA only supports NVIDIA GPUs and was having difficulty finding a way to get my code to run on the GPU. I tried downloading and setting up plaidml, but afterwards from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) only printed that I was running on a CPU and there was no GPU available, even though the plaidml setup was a success. I have read that PyOpenCL is needed, but have not gotten a clear answer as to why or to what capacity. Does anyone know how to set up this AMD GPU to work properly? Any help would be much appreciated. Thank you!
To the best of my knowledge, PlaidML was not working because I did not have the required prerequisites, such as OpenCL. I downloaded the Visual Studio C++ build tools in order to install PyOpenCL from a .whl file, and this seemed to resolve the issue.
1.2
true
1
6,259
2019-08-15 19:33:15.593
How to deploy flask GUI web application only locally with exe file?
I'd like to build a GUI for a few Python functions I've written that pull data from MS SQL Server. My boss wants me to share the magic of Python & SQL with the rest of the team, without them having to learn any coding. I've decided to go down the route of using Flask to create a webapp and creating an executable file using pyinstaller. I'd like it to work similarly to Jupyter Notebook, where you click on the file and it opens the notebook in your browser. I was able to hack together some code to get a working prototype of the GUI. The issue is I don't know how to deploy it. I need the GUI/Webapp to only run on the local computer for the user I sent the file to, and I don't want it accessible via the internet (because of proprietary company data, security issues, etc). The only documentation I've been able to find for deploying Flask is going the routine route of a web server. So the question is, can anyone provide any guidance on how to deploy my GUI WebApp so that it's only available to the user who has the file, and not on the world wide web? Thank you!
So, a few assumptions-- since you're a business and you're rocking a SQLServer-- you likely have Active Directory, and the computers that you care to access this app are all hooked into that domain (so, in reality, you, or your system admin does have full control over those computers). Also, the primary function of the app is to access a SQLServer to populate itself with data before doing something with that data. If you're deploying that app, I'm guessing you're probably also including the SQLServer login details along with it. With that in mind, I would just serve the Flask app on the network on it's own machine (maybe even the SQLServer machine if you have the choice), and then either implement security within the app that feeds off AD to authenticate, or just have a simple user/pass authentication you can distribute to users. By default random computers online aren't going to be able to access that app unless you've set your firewalls to deliberately route WAN traffic to it. That way, you control the Flask server-- updates only have to occur at one point, making development easier, and users simply have to open up a link in an email you send, or a shortcut you leave on their desktop.
0.201295
false
2
6,260
2019-08-15 19:33:15.593
How to deploy flask GUI web application only locally with exe file?
I'd like to build a GUI for a few Python functions I've written that pull data from MS SQL Server. My boss wants me to share the magic of Python & SQL with the rest of the team, without them having to learn any coding. I've decided to go down the route of using Flask to create a webapp and creating an executable file using pyinstaller. I'd like it to work similarly to Jupyter Notebook, where you click on the file and it opens the notebook in your browser. I was able to hack together some code to get a working prototype of the GUI. The issue is I don't know how to deploy it. I need the GUI/Webapp to only run on the local computer for the user I sent the file to, and I don't want it accessible via the internet (because of proprietary company data, security issues, etc). The only documentation I've been able to find for deploying Flask is going the routine route of a web server. So the question is, can anyone provide any guidance on how to deploy my GUI WebApp so that it's only available to the user who has the file, and not on the world wide web? Thank you!
Unfortunately, you do not have control over a given user's computer. You are using Flask, so your application is a web application which will be exposing your data on some port. I believe the default Flask port is 5000. Regardless, if your user opens the given port in their firewall, and this port is also open on whatever router they are connected to, then your application will be publicly visible. There is nothing you can do from your Python application code to prevent this. Having said all of that, if you are running on port 5000, it is highly unlikely your user will have this port publicly exposed. If you are running on port 80 or 8080, then the chances are higher that you might be exposing something. A follow-up question would be: where is the database your web app is connecting to? Is it also on your user's machine? If not, and your web app can connect to it regardless of whose machine you run it on, I would be more concerned about your DB being publicly exposed.
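To make the port point concrete: binding Flask to the loopback interface keeps the app unreachable from other machines regardless of firewall settings. A minimal sketch:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Only reachable from this machine"

# Binding to the loopback interface means only local processes can connect;
# host="0.0.0.0" would expose the app on every network interface instead.
# app.run(host="127.0.0.1", port=5000)
```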
0
false
2
6,260
2019-08-18 17:11:06.800
how to host python script in a Web Server and access it by calling an API from xamarin application?
I need to work with OpenCV in my Xamarin application. I found that if I use OpenCV directly in Xamarin, the size of the app will be huge. The best solution I found for this is to use OpenCV in a Python script, then host the Python script on a web server and access it by calling an API from Xamarin. I have no idea how to do this. Any help, please? And is there a better solution?
You can create your web server using Flask or Django. Flask is a simple microframework, whereas Django is a more advanced MVC-like framework.
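As a rough sketch of the Flask side: an endpoint can accept an image uploaded by the Xamarin client and run the Python/OpenCV processing server-side. The route and field names are illustrative assumptions, and the OpenCV step is stubbed out here (the sketch just reports the byte count).

```python
import io

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    # The Xamarin client would POST the image as multipart form data
    # under the (hypothetical) field name "image".
    uploaded = request.files["image"]
    data = uploaded.read()
    # Here you would decode `data` with cv2.imdecode(...) and run your
    # OpenCV pipeline; this sketch just reports the byte count instead.
    return jsonify({"received_bytes": len(data)})
```

The Xamarin app then only needs an HttpClient call to this endpoint, keeping OpenCV entirely off the device.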
0.386912
false
1
6,261
2019-08-20 02:11:48.553
Python: Reference forked project in requirements.txt
I have a Python project which uses an open source package registered as a dependency in requirements.txt The package has some deficiencies, so I forked it on Github and made some changes. Now I'd like to test out these changes by running my original project, but I'd like to use the now forked (updated) code for the package I'm depending on. The project gets compiled into a Docker image; pip install is used to add the package into the project during the docker-compose build command. What are the standard methods of creating a docker image and running the project using the newly forked dependency, as opposed to the original one? Can requirements.txt be modified somehow or do I need to manually include it into the project? If the latter, how?
You can use git+https://github.com/...../your_forked_repo in your requirements.txt instead of Package==1.1.1.
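For example, a requirements.txt entry pointing at a fork might look like this (the user, repo, branch, and package names below are placeholders):

```text
# Install the package from the forked GitHub repo instead of PyPI
git+https://github.com/youruser/your_forked_repo.git@yourbranch#egg=yourpackage
```

During docker-compose build, the existing pip install -r requirements.txt step will then clone and install the fork, so nothing needs to be copied into the image manually.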
1.2
true
1
6,262
2019-08-20 17:45:25.027
Parsing in Python where delimiter also appears in the data
Wow, I'm thankful for all of the responses on this! To clarify the data pattern does repeat. Here is a sample: Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm other unrelated text some other unrelated text lots more text that is unrelated Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm other unrelated text some other unrelated text lots more text that is unrelated Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm and so on and so on I am using Python 3.7 to parse input from a text file that is formatted like this sample: Item: some text Name: some other text Time recorded: hh:mm Time left: hh:mm and the pattern repeats, with other similar fields, through a few hundred pages. Because there is a ':' character in some of the values (e.g. hh:mm), I'm not sure how to use that as a delimiter between the key and the value. I need to obtain all of the values associated with "Item", "Name", and "Time left" and output all of the matching values to a CSV file (I have the output part working) Any suggestions? Thank you! (apologies, I asked this on Stack Exchange and it was deleted, I'm new at this)
Use ': ' (a colon followed by a space) as the delimiter. Times like hh:mm contain ':' but never ': ', so splitting each line only on the first ': ' keeps them intact.
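A short sketch of that idea in Python: splitting each line only on the first ': ' keeps times like hh:mm intact, and the captured fields can then go straight into the CSV output. The field names match the sample in the question.

```python
import csv
import io

sample = """Item: some text
Name: some other text
Time recorded: 12:30
Time left: 01:15
other unrelated text"""

records = {}
for line in sample.splitlines():
    # partition splits on the first ": " only, so "12:30" is not split again.
    key, sep, value = line.partition(": ")
    if sep and key in ("Item", "Name", "Time left"):
        records[key] = value

# Write the captured fields out as a CSV row.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["Item", "Name", "Time left"])
writer.writeheader()
writer.writerow(records)
```

For the real file, collect a dict per "Item: ..." line into a list and write one CSV row per dict.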
0.101688
false
1
6,263
2019-08-20 20:09:50.087
Getting error 'file is not a database' after already accessing the database
I am currently helping with some NLP code and in the code we have to access a database to get the papers. I have run the code successfully before, but every time I try to run the code again I get the error sqlite3.DatabaseError: file is not a database. I am not sure what is happening here because the database is still in the same exact position and the path doesn't change. I've tried looking up this problem but haven't found similar issues. I am hoping that someone can explain what is happening here, because I don't even know how to start with this issue since it runs once but not again.
I got the same issue. I have a program that prints some information from my database, but after running it again and again I got an error that my database was unable to load. I think the problem occurred because I kept opening connections to my database without closing them. What I suggest is to reboot your computer, or to look into whether your code is connecting to the database several times without closing the connection.
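One possibility worth checking is connections that are never closed: a lingering handle or an interrupted write can leave the file in a bad state between runs. Closing the connection deterministically with a context manager helps. A minimal sketch (the table and data are made up for illustration):

```python
import contextlib
import sqlite3

# contextlib.closing guarantees conn.close() runs even if an exception occurs,
# so the database file is not left open between runs of the script.
with contextlib.closing(sqlite3.connect(":memory:")) as conn:
    conn.execute("CREATE TABLE papers (id INTEGER PRIMARY KEY, title TEXT)")
    conn.execute("INSERT INTO papers (title) VALUES (?)", ("Example paper",))
    conn.commit()
    rows = conn.execute("SELECT title FROM papers").fetchall()
```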
1.2
true
1
6,264
2019-08-21 11:32:33.130
How do I stop a Python script from running in my command line?
I recently followed a tutorial on web scraping, and as part of that tutorial, I had to execute (?) the script I had written in my command line. Now that script runs every hour and I don't know how to stop it. I want to stop the script from running. I have tried deleting the code, but the script still runs. What should I do?
I can't comment, but you should show us the script (or part of it), or the video you were watching. A question without an example makes it much harder for us to figure out the problem. If you're using Flask, go to the terminal or CMD window where the script is running and press CTRL+C; that should stop the script. Or set debug to False, e.g. app.run(debug=False), because sometimes debug mode can keep it running in the background and watching for updates even though the script was stopped. In conclusion: try CTRL+C, or set debug to False.
0.201295
false
2
6,265
2019-08-21 11:32:33.130
How do I stop a Python script from running in my command line?
I recently followed a tutorial on web scraping, and as part of that tutorial, I had to execute (?) the script I had written in my command line.Now that script runs every hour and I don't know how to stop it. I want to stop the script from running. I have tried deleting the code, but the script still runs. What should I do?
You can kill it from Task Manager: find the python.exe process that is running your script and end it.
1.2
true
2
6,265
2019-08-23 04:11:30.900
How to insert variables into a text cell using google colab
I would like to insert a python variable into a text cell, in google colab. For example, if a=10, I would like to insert the a into a text cell and render the value. So in the text cell (using Jupyter Notebook with nbextensions) I would like to write the following in the text cell: There will be {{ a }} pieces of fruit at the reception. It should show up as: There will be 10 pieces of fruit at the reception. The markdown cheatsheets and explanations do not say how to achieve this. Is this possible currently?
It's not possible to change an input cell (either code or markdown) programmatically. You can change only the output cells. Input cells always require manual changes (even %load doesn't work).
0.995055
false
1
6,266