Dataset columns:
- Q_CreationDate: string (length 23)
- Title: string (11 to 149 characters)
- Question: string (25 to 6.53k characters)
- Answer: string (15 to 5.1k characters)
- Score: float64 (-1 to 1.2)
- Is_accepted: bool (2 classes)
- N_answers: int64 (1 to 17)
- Q_Id: int64 (0 to 6.76k)
2018-04-18 09:29:24.593
Scrapy - order of crawled urls
I've got an issue with Scrapy and Python. I have several links and I crawl data from each of them in one script using a loop. But the order of the crawled data is random, or at least doesn't match the links, so I can't match the URL of each subpage with the data it produced. For example: crawled url, data1, data2, data3. Data1, data2, data3 are fine because they come from one loop, but how can I add the current URL to the loop, or set the order of the link list so that the first link in the list is crawled first, the second is crawled second, and so on?
OK, it seems that the solution is in the settings.py file of the Scrapy project: DOWNLOAD_DELAY = 3 adds a delay between requests. It needs to be uncommented; by default it is commented out.
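A minimal sketch of the settings.py entry the answer refers to; the CONCURRENT_REQUESTS line is not part of the answer and is shown only as a commonly combined setting:

```python
# settings.py of the Scrapy project
DOWNLOAD_DELAY = 3        # wait 3 seconds between requests to the same site

# often combined with a single concurrent request to keep responses in order
# (assumption, not mentioned in the answer above):
# CONCURRENT_REQUESTS = 1
```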
-0.135221
false
2
5,461
2018-04-18 09:29:24.593
Scrapy - order of crawled urls
I've got an issue with Scrapy and Python. I have several links and I crawl data from each of them in one script using a loop. But the order of the crawled data is random, or at least doesn't match the links, so I can't match the URL of each subpage with the data it produced. For example: crawled url, data1, data2, data3. Data1, data2, data3 are fine because they come from one loop, but how can I add the current URL to the loop, or set the order of the link list so that the first link in the list is crawled first, the second is crawled second, and so on?
time.sleep() - would it be a solution?
0
false
2
5,461
2018-04-18 20:24:57.843
gcc error when installing pyodbc
I am installing pyodbc on Redhat 6.5. Python 2.6 and 2.7.4 are installed. I get the following error below even though the header files needed for gcc are in the /usr/include/python2.6. I have updated every dev package: yum groupinstall -y 'development tools' Any ideas on how to resolve this issue would be greatly appreciated??? Installing pyodbc... Processing ./pyodbc-3.0.10.tar.gz Installing collected packages: pyodbc Running setup.py install for pyodbc ... error Complete output from command /opt/rh/python27/root/usr/bin/python -u -c "import setuptools, tokenize;file='/tmp/pip-JAGZDD-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(file).read().replace('\r\n', '\n'), file, 'exec'))" install --record /tmp/pip-QJasL0-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_ext building 'pyodbc' extension creating build creating build/temp.linux-x86_64-2.7 creating build/temp.linux-x86_64-2.7/tmp creating build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build creating build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build/src gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DPYODBC_VERSION=3.0.10 -DPYODBC_UNICODE_WIDTH=4 -DSQL_WCHART_CONVERT=1 -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/usr/include -I/opt/rh/python27/root/usr/include/python2.7 -c /tmp/pip-JAGZDD-build/src/cnxninfo.cpp -o build/temp.linux-x86_64-2.7/tmp/pip-JAGZDD-build/src/cnxninfo.o -Wno-write-strings In file included from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: ** **/tmp/pip-JAGZDD-build/src/pyodbc.h:41:20: error: Python.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:42:25: error: floatobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:43:24: error: longobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:44:24: error: boolobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:45:27: error: unicodeobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbc.h:46:26: error: structmember.h: No such file or directory ** In file included from /tmp/pip-JAGZDD-build/src/pyodbc.h:137, from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: /tmp/pip-JAGZDD-build/src/pyodbccompat.h:61:28: error: stringobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbccompat.h:62:25: error: intobject.h: No such file or directory /tmp/pip-JAGZDD-build/src/pyodbccompat.h:63:28: error: bufferobject.h: No such file or directory In file included from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: /tmp/pip-JAGZDD-build/src/pyodbc.h: In function ‘void _strlwr(char*)’: /tmp/pip-JAGZDD-build/src/pyodbc.h:92: error: ‘tolower’ was not declared in this scope In file included from /tmp/pip-JAGZDD-build/src/pyodbc.h:137, from /tmp/pip-JAGZDD-build/src/cnxninfo.cpp:8: /tmp/pip-JAGZDD-build/src/pyodbccompat.h: At global scope: /tmp/pip-JAGZDD-build/src/pyodbccompat.h:71: error: expected initializer before ‘*’ token /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘Text_Buffer’ declared as an ‘inline’ variable /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘PyObject’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:81: error: ‘o’ was not declared 
in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:82: error: expected ‘,’ or ‘;’ before ‘{’ token /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘Text_Check’ declared as an ‘inline’ variable /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘PyObject’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:93: error: ‘o’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:94: error: expected ‘,’ or ‘;’ before ‘{’ token /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: ‘PyObject’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: ‘lhs’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: expected primary-expression before ‘const’ /tmp/pip-JAGZDD-build/src/pyodbccompat.h:104: error: initializer expression list treated as compound expression /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘Text_Size’ declared as an ‘inline’ variable /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘PyObject’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:109: error: ‘o’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:110: error: expected ‘,’ or ‘;’ before ‘{’ token /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘TextCopyToUnicode’ declared as an ‘inline’ variable /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘Py_UNICODE’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘buffer’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘PyObject’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: ‘o’ was not declared in this scope /tmp/pip-JAGZDD-build/src/pyodbccompat.h:118: error: initializer expression list treated as compound expression /tmp/pip-JAGZDD-build/src/pyodbccompat.h:119: error: expected ‘,’ or ‘;’ before ‘{’ token error: command 'gcc' failed with exit status 1
The resolution was to re-install Python 2.7.
0
false
1
5,462
2018-04-18 21:14:22.503
Count number of nodes per level in a binary tree
I've been searching for a bit now and haven't been able to find anything similar to my question; maybe I'm just not searching correctly. Anyway, this is a question from my exam review. Given a binary tree, I need to output a list such that each item in the list is the number of nodes on the corresponding level of the tree. What I mean is, if lst = [1,2,1], the 0th index is the 0th level of the tree and the 1 is how many nodes are on that level; lst[1] represents the number of nodes (2) at level 1. The tree isn't guaranteed to be balanced. We've only been taught preorder, inorder and postorder traversals, and I don't see how they would be useful for this question. I'm not asking for specific code, just an idea of how I could figure this out or the logic behind it. Any help is appreciated.
The search ordering doesn't really matter as long as you only count each node once. A depth-first search solution with recursion would be: Create a map counters to store a counter for each level. E.g. counters[i] is the number of nodes found so far at level i. Let's say level 0 is the root. Define a recursive function count_subtree(node, level): Increment counters[level] once. Then for each child of the given node, call count_subtree(child, level + 1) (the child is at a 1-deeper level). Call count_subtree(root_node, 0) to count starting at the root. This will result in count_subtree being run exactly once on each node because each node only has one parent, so counters[level] will be incremented once per node. A leaf node is the base case (no children to call the recursive function on). Build your final list from the values of counters, ordered by their keys ascending. This would work with any kind of tree, not just binary. Running time is O(number of nodes in tree). Side note: The depth-first search solution would be easier to divide and run on parallel processors or machines than a similar breadth-first search solution.
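A minimal Python sketch of the recursive approach described above; it assumes nodes with left and right attributes (the same idea works with any children list):

```python
from collections import defaultdict

def nodes_per_level(root):
    counters = defaultdict(int)  # counters[level] = number of nodes found at that level

    def count_subtree(node, level):
        if node is None:              # base case: nothing to count
            return
        counters[level] += 1          # count this node exactly once
        count_subtree(node.left, level + 1)   # children are one level deeper
        count_subtree(node.right, level + 1)

    count_subtree(root, 0)            # level 0 is the root
    return [counters[level] for level in range(len(counters))]
```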
0.386912
false
1
5,463
2018-04-19 08:06:38.817
Going back to previous line in Spyder
I am using the Spyder editor and I have to go back and forth between the piece of code that I am writing and the definitions of the functions I am calling, so I am looking for shortcuts to navigate between them. I know how to go to a function definition (using Ctrl+g), but I don't know how to go back to the piece of code that I was writing. Is there an easy way to do this?
(Spyder maintainer here) You can use the shortcuts Ctrl+Alt+Left and Ctrl+Alt+Right to move to the previous/next cursor position, respectively.
1.2
true
1
5,464
2018-04-19 12:15:18.167
clean up python versions mac osx
I tried to run a Python script on my Mac, but I ran into trouble because it needed pandas as a dependency. While trying to get this dependency I installed various components like brew, pip, wget and others, including different versions of Python via brew and a .pkg package downloaded from python.org. In the end I still wasn't able to run the script. Now I would like to sort things out and have only one version of Python (probably 3) working correctly. Can you suggest how to get an overview of what I have installed on my computer and how I can clean it up? Thank you in advance.
Use brew list to see what you've installed with Brew, and brew uninstall as needed. Likewise, review the logs from wget to see where it installed things. Keep in mind that macOS uses Python 2.7 for system-critical tasks; it's baked into the OS, so don't touch it. Anything you installed with pip is saved to the site-packages directory of the Python version you installed it into, so it will disappear when you remove that version of Python. The .pkg files install directly into your Applications folder and can be deleted safely like any normal app.
0.999329
false
1
5,465
2018-04-19 20:49:21.823
Python/Flask: only one user can call a endpoint at one time
I have an API built using Python/Flask, and I have an endpoint called /build-task that is called by the system and takes about 30 minutes to run. My question is: how do I lock the /build-task endpoint while it is already running, so that no other user or system can call it?
You have a couple of approaches for this problem: 1 - You can create a session object, save a flag in it, check whether the endpoint is already running, and respond accordingly. 2 - Keep a flag in the database, check whether the endpoint is already running, and respond accordingly.
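A minimal sketch of the "flag" idea using an in-process lock; it assumes a single worker process (with multiple workers or servers you would need a shared flag in a database or cache, as the answer suggests), and run_build is a hypothetical placeholder for the real 30-minute task:

```python
import threading
import time
from flask import Flask, jsonify

app = Flask(__name__)
_build_lock = threading.Lock()   # in-process flag; only valid for a single worker process

def run_build():
    time.sleep(5)                # placeholder for the real ~30 minute build task

@app.route("/build-task", methods=["POST"])
def build_task():
    if not _build_lock.acquire(blocking=False):
        # someone else is already running the task
        return jsonify(error="build already in progress"), 409
    try:
        run_build()
        return jsonify(status="done")
    finally:
        _build_lock.release()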
0.386912
false
1
5,466
2018-04-19 22:13:32.410
After delay() is called on a celery task, it takes more than 5 to 10 seconds for the tasks to even start executing with redis as the server
I have Redis as my cache server. When I call delay() on a task, it takes more than 10 seconds for the task to even start executing. Any idea how to reduce this unnecessary lag? Should I replace Redis with RabbitMQ?
It's very difficult to say what the cause of the delay is without being able to inspect your application and server logs, but I can reassure you that the delay is not normal and not an effect specific to either Celery or using Redis as the broker. I've used this combination a lot in the past and execution of tasks happens in a number of milliseconds. I'd start by ensuring there are no network related issues between your client creating the tasks, your broker (Redis) and your task consumers (celery workers). Good luck!
1.2
true
1
5,467
2018-04-21 12:34:46.780
add +1 hour to datetime.time() django on forloop
I have code like this. I want to check the time range that has overtime and sum it. Currently I am trying out.hour+1, but it didn't work:

overtime_all = 5
overtime_total_hours = 0
out = datetime.time(14, 30)
while overtime_all > 0:
    overtime200 = object.filter(time__range=(out, out.hour+1)).count()
    overtime_total_hours = overtime_total_hours + overtime200
    overtime_all -= 1
print overtime_total_hours

How do I add 1 hour on every loop iteration?
I found the solution now, and this works:

overtime_all = 5
overtime_total_hours = 0
out = datetime.time(14, 30)
while overtime_all > 0:
    overtime200 = object.filter(time__range=(out, datetime.time(out.hour+1, 30))).count()
    overtime_total_hours = overtime_total_hours + overtime200
    overtime_all -= 1
print overtime_total_hours

I changed out.hour+1 to datetime.time(out.hour+1, 30) and it works fine now, but I don't know whether there is a more compact or better solution. Thank you guys for your answers.
0.201295
false
2
5,468
2018-04-21 12:34:46.780
add +1 hour to datetime.time() django on forloop
I have code like this. I want to check the time range that has overtime and sum it. Currently I am trying out.hour+1, but it didn't work:

overtime_all = 5
overtime_total_hours = 0
out = datetime.time(14, 30)
while overtime_all > 0:
    overtime200 = object.filter(time__range=(out, out.hour+1)).count()
    overtime_total_hours = overtime_total_hours + overtime200
    overtime_all -= 1
print overtime_total_hours

How do I add 1 hour on every loop iteration?
timedelta (from datetime) can be used to increment or decrement datetime objects. Unfortunately, it cannot be directly combined with datetime.time objects. If the values stored in your time column are datetime objects, you can use them directly (e.g. my_datetime + timedelta(hours=1)). If they are time objects, you'll need to think about whether they represent a moment in time (in that case they should be converted to datetime objects) or a duration (in that case it's probably easier to store it as an integer representing the total number of minutes and to perform all operations on integers).
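A minimal sketch of the timedelta approach described above: attach the time to a (arbitrary) date so that timedelta arithmetic works, then drop the date again:

```python
from datetime import datetime, date, time, timedelta

out = time(14, 30)

# combine with a date, add one hour, then take only the time part again
out_plus_one = (datetime.combine(date.today(), out) + timedelta(hours=1)).time()
print(out_plus_one)  # 15:30:00
```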
1.2
true
2
5,468
2018-04-22 02:32:28.877
k-means clustering multi column data in python
I have a data set consisting of 2000 lines in a text file. Each line represents the x, y, z (3D) coordinates of 20 skeleton joint points of the human body (e.g. head, shoulder center, shoulder left, shoulder right, ..., elbow left, elbow right). I want to do k-means clustering of this data. The data is separated by spaces, and each joint is represented by 3 values (its x, y, z coordinates), like head and shoulder center represented by .0255... .01556600 1.3000... .0243333 .010000 .1.3102000 .... So basically I have 60 columns in each row, which represent 20 joints, each consisting of three values. My question is: how do I format or use this data for k-means clustering?
You don't need to reformat anything. Each row is a 60-dimensional vector of continuous values on a comparable scale (coordinates), which is exactly what k-means needs, so you can just run k-means on this. But assuming the measurements were taken in sequence, you may observe a strong correlation between rows, so I wouldn't expect the data to cluster extremely well unless you set up the user to do and hold certain poses.
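A minimal sketch of running k-means directly on the 60-column rows with scikit-learn; the file name and the number of clusters are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

# each row: 20 joints x 3 coordinates = 60 whitespace-separated values
data = np.loadtxt("skeleton.txt")          # expected shape (2000, 60); file name is an assumption

kmeans = KMeans(n_clusters=5, random_state=0).fit(data)   # n_clusters chosen arbitrarily
print(kmeans.labels_[:10])                 # cluster assignment of the first 10 rows
```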
1.2
true
1
5,469
2018-04-22 11:28:39.070
How to get the quantity of products in specified date in odoo 10
I want to create a table in Odoo 10 with the following columns: quantity_in_the_first_day_of_month, input_quantity, output_quantity, quantity_in_the_last_day_of_the_month. But I don't know how to get the quantity on a specified date.
You can join the sale order and sale order line tables and filter on the specified date:

select sum(sol.product_uom_qty)
from sale_order s, sale_order_line sol
where sol.order_id = s.id
  and DATE(s.date_order) = '2018-01-01'
0
false
1
5,470
2018-04-24 04:53:51.450
How do CPU cores get allocated to python processes in multiprocessing?
Let's say I am running multiple Python processes (not threads) on a multi-core CPU (say 4 cores). The GIL is process-level, so the GIL within a particular process won't affect other processes. My question is: will the GIL within one process take hold of only a single core out of the 4, or will it take hold of all 4 cores? If one process locks all cores at once, then multiprocessing should not be any better than multithreading in Python. If not, how do the cores get allocated to the various processes? As an observation, on my system, which has 8 logical cores (4*2 because of hyperthreading), when I run a single CPU-bound process the CPU usage of 4 out of 8 cores goes up. Simplifying this: 4 Python threads (in one process) running on a 4-core CPU will take more time than a single thread doing the same work (considering the work is fully CPU-bound). Will 4 different processes doing that amount of work reduce the time taken by a factor of nearly 4?
Process to CPU/CPU core allocation is handled by the Operating System.
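To make the question's scenario concrete, a small sketch that spreads CPU-bound work over 4 worker processes; which core each worker lands on is decided by the OS scheduler, not by Python:

```python
import time
from multiprocessing import Pool

def cpu_bound(n):
    # purely CPU-bound work, no I/O
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    start = time.time()
    with Pool(processes=4) as pool:                      # 4 separate processes, 4 separate GILs
        results = pool.map(cpu_bound, [10_000_000] * 4)  # the OS places them on available cores
    print("4 processes:", round(time.time() - start, 2), "seconds")
```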
0
false
1
5,471
2018-04-24 13:49:41.587
How to read back the "random-seed" from a saved model of Dynet
I have a model already trained with the Dynet library, but I forgot the --dynet-seed parameter used when training it. Does anyone know how to read this parameter back from the saved model? Thank you in advance for any feedback.
You can't read back the seed parameter. Dynet model does not save the seed parameter. The obvious reason is, it is not required at testing time. Seed is only used to set fixed initial weights, random shuffling etc. for different experimental runs. At testing time no parameter initialisation or shuffling is required. So, no need to save seed parameter. To the best of my knowledge, none of the other libraries like tensorflow, pytorch etc. save the seed parameter as well.
1.2
true
1
5,472
2018-04-24 20:57:16.490
Django/Python - Serial line concurrency
I'm currently working on a gateway with an embedded Linux and a web server. The goal of the gateway is to retrieve data from electrical devices through an RS485/Modbus line and to display it on a server. I'm using Nginx and Django, and the web front-end is delivered by "static" files. Repeatedly, a Javascript script file makes AJAX calls that send CGI requests to Nginx. These CGI requests are answered with JSON responses thanks to Django. The responses are mostly data that has been read from the appropriate Modbus device. The exact path is the following: randomly timed CGI call -> urls.py -> ModbusCGI.py (imports another script, ModbusComm.py) -> ModbusComm.py creates a Modbus client and instantly tries to read with it. Next to that, I wanted to implement a datalogger to store data in a DB at regular intervals. I made a script that also imports the ModbusComm.py script, but it doesn't work: sometimes multiple Modbus frames are sent at the same time (the datalogger and CGI scripts call the same function in ModbusComm.py at the same time), which results in an error. I'm sure this problem would also occur if there were a lot of users on the server (CGI requests sent at the same time). Or not? (Is a queue system already managed for CGI requests? I'm a bit lost.) So my goal would be to make a queue system that could handle calls from several Python scripts => make them wait while it's not their turn => call a function with the right arguments when it's their turn (actually using the Modbus line), and send back the response to the Python script so it can generate the JSON response. I really don't know how to achieve that, and I'm sure there are better ways to do this. If I'm not clear enough, don't hesitate to make me aware of it :)
I had the same problem when I had to allow multiple processes to read some Modbus (and not only Modbus) data through a serial port. I ended up with a standalone process (“serial port server”) that exclusively works with a serial port. All other processes work with that port through that standalone process via some inter processes communication mechanism (we used Unix sockets). This way when an application wants to read a Modbus register it connects to the “serial port server”, sends its request and receives the response. All the actual serial port communication is done by the “serial port server” in sequential way to ensure consistency.
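A minimal sketch of that "serial port server" idea: one process owns the port and answers requests one at a time over a Unix socket. The socket path is an assumption and the actual Modbus/serial read is left as a placeholder:

```python
import os
import socket

SOCKET_PATH = "/tmp/modbus_server.sock"   # path is an assumption

def read_modbus_register(request: bytes) -> bytes:
    # placeholder: the single, exclusive Modbus/serial access would happen here
    return b"42"

def serve():
    if os.path.exists(SOCKET_PATH):
        os.remove(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen(1)
    while True:                            # requests are handled strictly one at a time,
        conn, _ = server.accept()          # which serialises all access to the port
        with conn:
            request = conn.recv(1024)
            conn.sendall(read_modbus_register(request))

if __name__ == "__main__":
    serve()
```

Clients (the CGI scripts and the datalogger) connect to the socket, send their request bytes, and read the reply; the server's accept loop is what enforces the queue.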
0
false
1
5,473
2018-04-24 22:23:26.923
Make Python 3 default on Mac OS?
I would like to ask if it is possible to make Python 3 the default interpreter on macOS 10, so that typing python in the terminal launches it right away. If so, can somebody help with how to do it? I'm trying to avoid switching between environments. Cheers
You can do that by adding an alias: type something like alias python=python3 in the terminal. If you want the change to persist, open ~/.bash_profile using nano and add alias python=python3 there. Ctrl+O to save and Ctrl+X to close. Then type source ~/.bash_profile in the terminal.
0.201295
false
1
5,474
2018-04-25 00:38:39.330
can't import more than 50 contacts from csv file to telegram using Python3
I'm trying to import 200 contacts from a CSV file to Telegram using Python 3 code. It works for the first 50 contacts and then stops, showing: telethon.errors.rpc_error_list.FloodWaitError: A wait of 101 seconds is required. Any idea how I can import the whole list without waiting? Thanks!!
You cannot import a large number of people sequentially; Telegram will decide you're a spammer. As a result, you must sleep between your requests.
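A hedged sketch of the usual pattern: honour the FloodWaitError by sleeping for the number of seconds Telegram asks for, then retry. The import path matches the version shown in the error message above, and import_batch is a hypothetical stand-in for whatever import call is already being made:

```python
import time
from telethon.errors.rpc_error_list import FloodWaitError  # path as in the error above

def import_batch(batch):
    # placeholder: replace with the actual Telethon import call you are using
    pass

def import_with_backoff(batches):
    for batch in batches:
        while True:
            try:
                import_batch(batch)
                break
            except FloodWaitError as e:
                # Telegram reports how long to wait (e.g. the 101 seconds in the error)
                time.sleep(e.seconds + 1)
```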
0
false
1
5,475
2018-04-25 07:54:39.583
Grouping tests in pytest: Classes vs plain functions
I'm using pytest to test my app. pytest supports 2 approaches (that I'm aware of) of how to write tests: In classes: test_feature.py -> class TestFeature -> def test_feature_sanity In functions: test_feature.py -> def test_feature_sanity Is the approach of grouping tests in a class needed? Is it allowed to backport unittest builtin module? Which approach would you say is better and why?
Typically in unit testing, the object of our tests is a single function. That is, a single function gives rise to multiple tests. In reading through test code, it's useful to have tests for a single unit be grouped together in some way (which also allows us to e.g. run all tests for a specific function), so this leaves us with two options: Put all tests for each function in a dedicated module Put all tests for each function in a class In the first approach we would still be interested in grouping all tests related to a source module (e.g. utils.py) in some way. Now, since we are already using modules to group tests for a function, this means that we should like to use a package to group tests for a source module. The result is one source function maps to one test module, and one source module maps to one test package. In the second approach, we would instead have one source function map to one test class (e.g. my_function() -> TestMyFunction), and one source module map to one test module (e.g. utils.py -> test_utils.py). It depends on the situation, perhaps, but the second approach, i.e. a class of tests for each function you are testing, seems more clear to me. Additionally, if we are testing source classes/methods, then we could simply use an inheritance hierarchy of test classes, and still retain the one source module -> one test module mapping. Finally, another benefit to either approach over just a flat file containing tests for multiple functions, is that with classes/modules already identifying which function is being tested, you can have better names for the actual tests, e.g. test_does_x and test_handles_y instead of test_my_function_does_x and test_my_function_handles_y.
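To make the two styles concrete, a short hedged example (my_function is a made-up stand-in for the unit under test):

```python
# test_utils.py -- the two grouping styles side by side

def my_function(value):              # hypothetical unit under test
    return value

def test_my_function_does_x():       # plain-function style
    assert my_function("x") == "x"

class TestMyFunction:                # class style: pytest collects Test* classes (no __init__)
    def test_does_x(self):
        assert my_function("x") == "x"

    def test_handles_y(self):
        assert my_function("y") == "y"
```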
0.999909
false
2
5,476
2018-04-25 07:54:39.583
Grouping tests in pytest: Classes vs plain functions
I'm using pytest to test my app. pytest supports 2 approaches (that I'm aware of) of how to write tests: In classes: test_feature.py -> class TestFeature -> def test_feature_sanity In functions: test_feature.py -> def test_feature_sanity Is the approach of grouping tests in a class needed? Is it allowed to backport unittest builtin module? Which approach would you say is better and why?
There are no strict rules regarding organizing tests into modules vs classes. It is a matter of personal preference. Initially I tried organizing tests into classes, after some time I realized I had no use for another level of organization. Nowadays I just collect test functions into modules (files). I could see a valid use case when some tests could be logically organized into same file, but still have additional level of organization into classes (for instance to make use of class scoped fixture). But this can also be done just splitting into multiple modules.
1.2
true
2
5,476
2018-04-25 08:16:18.483
How to calculate a 95 credible region for a 2D joint distribution?
Suppose we have a joint distribution p(x_1,x_2), and we know x_1, x_2, p. Both are discrete; (x_1,x_2) is a scatter of points, and its contour can be drawn, as can the marginals. I would like to show the region containing 95% of the probability mass (a region that will contain 95% of the data) of the joint distribution. How can I do that?
As the other answer points out, there are infinitely many solutions to this problem. A practical one is to find the approximate center of the point cloud and extend a circle from there until it contains approximately 95% of the data. Then find the convex hull of the selected points and compute its area. Of course, this will only work if the data is concentrated in a single area; it won't work if there are several clusters.
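A minimal sketch of that "grow a circle from the centre, then take the convex hull" idea, assuming the distribution is available as an (N, 2) array of samples (the example data is made up):

```python
import numpy as np
from scipy.spatial import ConvexHull

def region_95_area(points):
    # points: array of shape (N, 2) with samples from the joint distribution
    center = points.mean(axis=0)
    dist = np.linalg.norm(points - center, axis=1)
    radius = np.quantile(dist, 0.95)       # circle containing ~95% of the points
    inside = points[dist <= radius]
    hull = ConvexHull(inside)
    return hull.volume                     # for 2-D input, .volume is the hull's area

# example with a correlated Gaussian cloud
rng = np.random.default_rng(0)
samples = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=5000)
print(region_95_area(samples))
```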
0.201295
false
2
5,477
2018-04-25 08:16:18.483
How to calculate a 95 credible region for a 2D joint distribution?
Suppose we have a joint distribution p(x_1,x_2), and we know x_1, x_2, p. Both are discrete; (x_1,x_2) is a scatter of points, and its contour can be drawn, as can the marginals. I would like to show the region containing 95% of the probability mass (a region that will contain 95% of the data) of the joint distribution. How can I do that?
If you are interested in finding a pair x_1, x_2 of real numbers such that P(X_1<=x_1, X_2<=x_2) = 0.95 and your distribution is continuous, then there will be infinitely many such pairs. You might be better off just fixing one of them and then finding the other.
0
false
2
5,477
2018-04-25 12:20:28.077
Queries and advanced operations in InfluxDB
Recently started working on influxDB, can't find how to add new measurements or make a table of data from separate measurements, like in SQL we have to join table or so. The influxdb docs aren't that clear. I'm currently using the terminal for everything and wouldn't mind switching to python but most of it is about HTTP post schemes in the docs, is there any other alternative? I would prefer influxDB in python if the community support is good
The InfluxDB query language does not support joins across measurements; that instead needs to be done client-side after querying the data. Querying data from multiple measurements (without a join) can be done in a single query.
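A hedged sketch using the influxdb Python client (assumes InfluxDB 1.x, the influxdb package, and made-up connection details and measurement names); the "join" is just combining the two result sets in Python:

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="mydb")  # details are assumptions

# one query, two measurements -- allowed in InfluxQL, but no join is performed
result = client.query("SELECT * FROM cpu, mem WHERE time > now() - 1h")

cpu_points = list(result.get_points(measurement="cpu"))
mem_points = list(result.get_points(measurement="mem"))
# any "join" (e.g. matching points by timestamp) has to happen here, in Python
```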
1.2
true
1
5,478
2018-04-26 22:40:24.603
Run external python file with Mininet
I'm trying to write a defense system using Mininet + POX. I have an l3_edited file to calculate entropy, so I can tell when a host is attacked. I have a myTopo.py file that creates a topology with Mininet. Now my question: I want to change the hosts' IPs when l3_edited detects an attack. Where should I do it? I believe I should write a program and run it in Mininet (not like a custom topo, but run it after Mininet is created, from the command line). If that's right, how can I get the host objects? If I can get them, I can change their IPs. Or should I do it in my myTopo.py? Then, how can I run my defense code when I detect an attack?
If someone is looking for an answer... you can use your custom topology file to do other tasks as well. Multithreading solved my problem.
1.2
true
1
5,479
2018-04-27 12:58:39.440
Select columns periodically on pandas DataFrame
I'm working on a DataFrame with 1116 columns. How could I select only the columns at a period of 17, i.e. the 12th, 29th, 46th, 63rd, ... columns?
df.iloc[:, 11::17], or equivalently df.iloc[:, [11 + i*17 for i in range(65)]] (column indices are zero-based, so the 12th column is index 11).
0
false
1
5,480
2018-04-27 15:23:38.983
How to create different Python Wheel distributions for Ubuntu and RedHat
I have a Cython-based package which depends on other C++ .so libraries. Those libraries are binary-incompatible between Ubuntu (dev) and RedHat (prod), so the .so file generated by Cython has to be different as well. If I use wheel to package it, the file name is the same for both environments: package-version-cp27-cp27mu-linux_x86_64.whl. So if I upload it to PyPI it will conflict with the RedHat-based distribution of the same package. I have to upload it to PyPI because the project is then PEX-ed (via Pants), and PEX tries to download it from PyPI and fails if it does not find it, with the following exception: Exception caught: 'pex.resolver.Unsatisfiable'. Any ideas how to resolve it? Thx.
I found a solution by using a different PyPI instance, so our dev Ubuntu environment and prod RedHat environment just use two different PyPI sources. To do that I had to create two configurations, ~/.pypirc (for uploading) and ~/.pip/pip.conf.
0
false
1
5,481
2018-04-28 20:06:07.330
Why use zappa/chalice in serverless python apps?
I am new to Python and thought it would be great to have my very first Python project running on AWS infrastructure. Given my previous Node.js experience with lambdas, I thought that every function would have its own code, and the app would only be glued together by the persistence layer; everything else would be decoupled, separate functions. For Python lambdas there are serverless microframeworks like Chalice or Zappa that seem to be an accepted practice. To me, though, it feels like they are hacking around the concept of the serverless approach. You still have a full-blown app built on, let's say, Flask or even Django, and that app is served through a lambda. There is still one application that has all the routing, configs, boilerplate code, etc., instead of small independent functions that just do their job. I currently do not see how and if this makes life any easier. What is the benefit / reason for having the whole code base served through lambdas as opposed to individual functions? Is there an execution-time penalty when using Flask/Django/whatever else with serverless apps? If this depends on the particular project, what would be the guidance on when to use a framework and when to use individual functions?
Benefits: you can use concepts you already know and adapt them to serverless. Performance: the smaller the code, the less RAM it takes; the whole app must be loaded, processed, and so on just to handle a single request, which for me was always too much. Let's say you have a Django project running on Elastic Beanstalk and you need some lambdas to deal with limited problems. Do you really want to maintain two separate configurations? What about common functions? Serverless looks nice, but... let's assume you have permissions: your app will pull that data on every call. Perhaps you have it cached in Redis, as the whole set of permissions for a user... The other option is DynamoDB, which is even more expensive. Yes, there is a nice SLA, but the API is quite strange, and if you plan on keeping more data there, the more data you have the slower it works for the same money; in other words, if you put in more data, fetching will cost more if you want the same speed.
0
false
1
5,482
2018-04-29 13:47:46.340
How to preprocess audio data for input into a Neural Network
I'm currently developing a keyword-spotting system that recognizes digits from 0 to 9 using deep neural networks. I have a dataset of people saying the numbers(namely the TIDIGITS dataset, collected at Texas Instruments, Inc), however the data is not prepared to be fed into a neural network, because not all the audio data have the same audio length, plus some of the files contain several digits being spoken in sequence, like "one two three". Can anyone tell me how would I transform these wav files into 1 second wav files containing only the sound of one digit? Is there any way to automatically do this? Preparing the audio files individually would be time expensive. Thank you in advance!
I would split each wav by the areas of silence. Trim the silence from beginning and end. Then I'd run each one through a FFT for different sections. Smaller ones at the beginning of the sound. Then I'd normalise the frequencies against the fundamental. Then I'd feed the results into the NN as a 3d array of volumes, frequencies and times.
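A hedged sketch of that pipeline using librosa (the answer talks about FFT sections; MFCC features are used here only as a common stand-in, and the file name and thresholds are assumptions):

```python
import librosa

y, sr = librosa.load("one_two_three.wav", sr=16000)   # file name is an assumption

# split the recording wherever the signal drops ~20 dB below the peak (silence)
intervals = librosa.effects.split(y, top_db=20)

for i, (start, end) in enumerate(intervals):
    digit = y[start:end]
    digit, _ = librosa.effects.trim(digit)            # trim leading/trailing silence
    mfcc = librosa.feature.mfcc(y=digit, sr=sr, n_mfcc=13)
    print(i, digit.shape, mfcc.shape)                 # ideally one segment per spoken digit
```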
0.201295
false
1
5,483
2018-04-29 20:58:03.330
How would i generate a random number in python without duplicating numbers
I was wondering how to generate a random 4-digit number that has no repeated digits in Python 3.6. I could generate 0000-9999, but that could give me a number with a repeated digit like 3445. Anyone have any ideas? Thanks in advance.
Option 1: generate a random number, check whether it contains any repeated digits, and if so go back to step 1; otherwise you have a number with no repeated digits. Option 2: generate it one digit at a time from a list, removing the chosen digit from the list at each iteration: make a list of the digits 0 to 9; create two variables, result holding 0 and multiplier holding 1; remove a random element from the list, multiply it by multiplier and add it to result; multiply multiplier by 10; repeat for the next digit (up to the desired number of digits). You now have a random number with no repeats.
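A short sketch of both approaches using the standard library:

```python
import random

# Approach 1: rejection -- redraw until all four digits are different
def random_4_rejection():
    while True:
        s = f"{random.randint(0, 9999):04d}"
        if len(set(s)) == 4:
            return s

# Approach 2: draw digits without replacement (no rejection loop needed)
def random_4_sample():
    digits = random.sample(range(10), 4)   # 4 distinct digits in random order
    return "".join(str(d) for d in digits)

print(random_4_rejection(), random_4_sample())
```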
-0.386912
false
1
5,484
2018-04-30 15:12:36.730
Keras Neural Network. Preprocessing
I have a doubt about fitting a neural network to a regression problem. I preprocessed the predictors (features) of my train and test data using the Imputer and scale methods from sklearn.preprocessing, but I did not preprocess the class/target of my train or test data. In the architecture of my neural network all layers have relu as the activation function except the last layer, which has the sigmoid function. I chose the sigmoid function for the last layer because the values of the predictions are between 0 and 1. tl;dr: in summary, my question is: should I de-process the output of my neural net? If I don't use the sigmoid function, the values of my output are < 0 and > 1. In this case, how should I do it? Thanks
Usually, if you are doing regression, you should use a linear activation in the last layer. A sigmoid function will 'favor' values close to 0 and 1, so it would be harder for your model to output intermediate values. If the distribution of your targets is Gaussian or uniform, I would go with a linear output layer. De-processing shouldn't be necessary unless you have very large targets.
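A hedged Keras sketch of that suggestion, swapping the final sigmoid for a linear output (the input size and layer widths are assumptions):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(64, activation="relu", input_shape=(10,)),   # 10 input features: an assumption
    Dense(32, activation="relu"),
    Dense(1, activation="linear"),                     # unbounded regression output
])
model.compile(optimizer="adam", loss="mse")
```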
0
false
1
5,485
2018-05-01 02:43:55.537
How to calculate the HMAC(hsa256) of a text using a public certificate (.pem) as key
I'm working on Json Web Tokens and wanted to reproduce it using python, but I'm struggling on how to calculate the HMAC_SHA256 of the texts using a public certificate (pem file) as a key. Does anyone know how I can accomplish that!? Tks
In case anyone finds this question: the answer provided by the original poster works, but the idea is wrong. You don't use RSA keys with an HMAC method. The RSA key pair (public and private) is used for asymmetric algorithms, while HMAC is a symmetric algorithm. In HMAC, the two sides of the communication keep the same secret text (bytes) as the key. It can be a public_cert.pem as long as you keep it secret, but a public .pem is usually shared publicly, which makes it unsafe.
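A minimal standard-library example of what "symmetric" means here: both sides share the same secret bytes, and no RSA key pair is involved (the secret and message values are made up):

```python
import hashlib
import hmac

secret = b"shared-secret-bytes"          # the same bytes on both sides
message = b"header.payload"              # e.g. the JWT signing input

signature = hmac.new(secret, message, hashlib.sha256).hexdigest()
print(signature)
```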
0.386912
false
1
5,486
2018-05-01 05:40:22.107
How to auto scale in JES
I'm coding watermarking images in JES and I was wondering how to Watermark a picture by automatically scaling a watermark image? If anyone can help me that would be great. Thanks.
I'll start by giving you a quote from the INFT1004 assignment you are asking for help with: "In particular, you should try not to use code or algorithms from external sources, and not to obtain help from people other than your instructors, as this can prevent you from mastering these concepts." It specifically says in this assignment that you should not ask people online or use code you find or request online, and doing so is a breach of the University of Newcastle academic integrity code - you know, the thing you did a module on before you started the course. A copy of this post will be sent along to the course instructor.
0
false
1
5,487
2018-05-01 13:33:54.250
Multi-label classification methods for large dataset
I realize there's another question with a similar title, but my dataset is very different. I have nearly 40 million rows and about 3 thousand labels; running a simple sklearn train_test_split takes nearly 20 minutes. I initially used multi-class classification models, as that's all I had experience with, then realized that since I need to come up with all the possible labels a particular record could be tied to, I should be using a multi-label classification method. I'm looking for recommendations on how to do this efficiently. I tried binary relevance, which took nearly 4 hours to train. Classifier chains errored out with a memory error after 22 hours. I'm afraid to try label powerset, as I've read it doesn't work well with a ton of data. Lastly, there are adapted algorithms like MLkNN, and then ensemble approaches (which I'm also worried about performance-wise). Does anyone else have experience with this type of problem and volume of data? In addition to suggested models, I'm also hoping for advice on best training methods, like train_test_split ratios or different/better methods.
20 minutes for this size of a job doesn't seem that long, neither does 4 hours for training. I would really try vowpal wabbit. It excels at this sort of multilabel problem and will probably give unmatched performance if that's what you're after. It requires significant tuning and will still require quality training data, but it's well worth it. This is essentially just a binary classification problem. An ensemble will of course take longer so consider whether or not it's necessary given your accuracy requirements.
1.2
true
1
5,488
2018-05-01 20:23:34.880
PULP: Check variable setting against constraints
I'm looking to set up a constraint check in Python using PuLP. Suppose I had variables X1,...,Xn and a constraint (AffineExpression) A1X1 + ... + AnXn <= B, where A1,...,An and B are all constants. Given an assignment for X (e.g. X1=1, X2=4, ..., Xn=2), how can I check whether the constraints are satisfied? I know how to do this with matrices using NumPy, but I'm wondering if it's possible to do it with PuLP and let the library handle the work. My hope here is that I can check specific variable assignments. I do not want to run an optimization algorithm on the problem (e.g. prob.solve()). Can PuLP do this? Is there a different Python library that would be better? I've thought about Google's OR-Tools but have found the documentation a little harder to parse through than PuLP's.
It looks like this is possible doing the following: Define PULP variables and constraints and add them to an LpProblem Make a dictionary of your assignments in the form {'variable name': value} Use LpProblem.assignVarsVals(your_assignment_dict) to assign those values Run LpProblem.valid() to check that your assignment meets all constraints and variable restrictions Note that this will almost certainly be slower than using numpy and Ax <= b. Formulating the problem might be easier, but performance will suffer due to how PULP runs these checks.
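A hedged sketch of those steps; assignVarsVals and valid are the methods the answer names, while the variables, constraint, and candidate values here are made up:

```python
from pulp import LpMinimize, LpProblem, LpVariable

prob = LpProblem("feasibility_check", LpMinimize)
x1 = LpVariable("x1", lowBound=0)
x2 = LpVariable("x2", lowBound=0)
prob += 2 * x1 + 3 * x2 <= 12            # A1*X1 + A2*X2 <= B

# assign a candidate solution instead of solving
prob.assignVarsVals({"x1": 1, "x2": 2})

print(prob.valid())                      # True if all constraints and bounds are satisfied
```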
1.2
true
1
5,489
2018-05-02 15:18:32.357
How to find if there are wrong values in a pandas dataframe?
I am quite new to Python coding, and I am dealing with a big DataFrame for my internship. I have an issue because sometimes there are wrong values in my DataFrame: for example, I find string values ("broken leaf") instead of numeric values like ("120 cm") or (NaN). I know there is the df.replace() function, but to use it you need to know that there are wrong values in the first place. So how do I find whether there are any wrong values inside my DataFrame? Thank you in advance
"120 cm" is a string, not an integer, so that's a confusing example. Some ways to find "unexpected" values include: Use "describe" to examine the range of numerical values, to see if there are any far outside of your expected range. Use "unique" to see the set of all values for cases where you expect a small number of permitted values, like a gender field. Look at the datatypes of columns to see whether there are strings creeping in to fields that are supposed to be numerical. Use regexps if valid values for a particular column follow a predictable pattern.
0
false
1
5,490
2018-05-03 09:34:04.367
Read raw ethernet packet using python on Raspberry
I have a device which is sending packet with its own specific construction (header, data, crc) through its ethernet port. What I would like to do is to communicate with this device using a Raspberry and Python 3.x. I am already able to send Raw ethernet packet using the "socket" Library, I've checked with wireshark on my computer and everything seems to be transmitted as expected. But now I would like to read incoming raw packet sent by the device and store it somewhere on my RPI to use it later. I don't know how to use the "socket" Library to read raw packet (I mean layer 2 packet), I only find tutorials to read higher level packet like TCP/IP. What I would like to do is Something similar to what wireshark does on my computer, that is to say read all raw packet going through the ethernet port. Thanks, Alban
Did you try using the ettercap package (ettercap-graphical)? It should be available with apt. Alternatively you can try using tcpdump, or even check iptables.
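As a side note to the tools mentioned above (this is not from the answer itself), the standard socket module can also read layer-2 frames directly on Linux; a minimal sketch, which is Linux-specific (AF_PACKET), needs root, and assumes the interface name:

```python
import socket

ETH_P_ALL = 0x0003                                    # capture every protocol

# AF_PACKET is Linux-only (so it works on a Raspberry Pi); requires root
s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL))
s.bind(("eth0", 0))                                   # interface name is an assumption

while True:
    frame, addr = s.recvfrom(65535)                   # one raw Ethernet frame
    dst, src, ethertype = frame[0:6], frame[6:12], frame[12:14]
    print(src.hex(), "->", dst.hex(), "type", ethertype.hex(), "len", len(frame))
```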
0
false
1
5,491
2018-05-04 02:07:04.457
Host command and ifconfig giving different ips
I am using a server (server_name.corp.com) inside a corporate network. On the server I am running a Flask server listening on 0.0.0.0:5000. The servers are not exposed to the outside world but are accessible via VPN. Now, when I run host server_name.corp.com on the box I get some ip1 (10.*.*.*). When I run ifconfig on the box it gives me ip2 (10.*.*.*). Also, if I run ping server_name.corp.com on the same box I get ip2, yet I can ssh into the server with ip1, not ip2. I am able to access the Flask server at ip1:5000 but not at ip2:5000. I am not into networking, so I'm fully confused about why there are 2 different IPs and why I can access ip1:5000 from the browser but not ip2:5000. Also, what is the equivalent of the host command in Python (how do I get ip1 from Python)? I am using socket.gethostbyname("server_name.corp.com"), which gives me ip2.
The network situation isn't quite clear from your description; all I can say is that if you want to get ip1 from Python, you could use the standard library's subprocess module, which is commonly used to execute OS commands (see subprocess.Popen).
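A minimal sketch of that subprocess suggestion: shell out to the same host command and take the first address it prints (output parsing assumes the typical "has address" line format):

```python
import subprocess

def resolve_with_host(name):
    output = subprocess.check_output(["host", name], text=True)
    # typical line: "server_name.corp.com has address 10.x.x.x"
    for line in output.splitlines():
        if "has address" in line:
            return line.split()[-1]
    return None

print(resolve_with_host("server_name.corp.com"))
```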
0
false
1
5,492
2018-05-05 02:23:02.700
how to use python to check if subdomain exists?
Does anyone know how to check whether a subdomain exists on a website? I am building a sign-up form and everyone gets their own subdomain. I have some JavaScript written on the front end, but I need to find a way to check on the backend.
Do a curl or HTTP request on the subdomain you want to verify: if you get a 404 it doesn't exist; if you get a 200 it definitely exists.
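A hedged sketch using the requests library (the domain is made up). Note that a subdomain that doesn't exist at all usually fails at the DNS level with a connection error rather than a 404, so both cases are treated as "doesn't exist":

```python
import requests

def subdomain_exists(subdomain, domain="example.com"):
    url = f"http://{subdomain}.{domain}"
    try:
        response = requests.get(url, timeout=5)
    except requests.exceptions.ConnectionError:
        return False                      # DNS failure or nothing listening
    return response.status_code < 400     # e.g. 200 -> it exists

print(subdomain_exists("blog"))
```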
0.201295
false
2
5,493
2018-05-05 02:23:02.700
how to use python to check if subdomain exists?
Does anyone know how to check whether a subdomain exists on a website? I am building a sign-up form and everyone gets their own subdomain. I have some JavaScript written on the front end, but I need to find a way to check on the backend.
Put the assigned subdomain in a database table, within a unique indexed column. It will be easy to check from Python (SQLAlchemy, PyMySQL, etc.) whether a subdomain has already been used, and it will automatically prevent duplicates from being assigned/inserted.
0
false
2
5,493
2018-05-05 14:24:30.920
How to use visual studio code >after< installing anaconda
If you have never installed Anaconda, it seems to be rather simple: during the Anaconda installation process you choose to install Visual Studio Code, and that is it. But I would like some help with my situation. My objective: I want to use Visual Studio Code with Anaconda. I have a Mac with Anaconda 1.5.1 installed. I installed Visual Studio Code. I updated Anaconda (from the terminal), so now it is 1.6.9. From there, I don't know how to proceed. Any help please?
You need to select the correct Python interpreter. When you are in a .py file, there's a blue bar at the bottom of the window (if you have the dark theme); there you can select the Anaconda Python interpreter. Alternatively, you can open the command palette with Ctrl+P or Command+P, type '>' to run VS Code commands, and search for '> Python Interpreter'. If you don't see Anaconda there, google how to add a new Python interpreter to VS Code.
0.386912
false
1
5,494
2018-05-05 16:40:27.703
Calling Python scripts from Java. Should I use Docker?
We have a Java application in our project, and what we want is to call some Python script and return results from it. What is the best way to do this? We want to isolate the Python execution to avoid affecting the Java application at all. Probably Dockerizing Python is the best solution; I don't know any other way. Then the question is how to call it from Java. As far as I understand, there are several ways: start a web server inside Docker which accepts REST calls from the Java app, runs Python scripts, and returns results to Java via REST too; handle request and response via the Docker CLI somehow; use the Java Docker API to send a REST request to Docker, which is then converted by Docker to stdin/stdout of the Python script inside Docker. What is the most effective and correct way to connect the Java app with Python running inside Docker?
You don't need Docker for this. There are a couple of options; you should choose depending on what your Java application is doing. If the Java application is a client - based on Swing, Java Web Start, or providing a UI directly - you will want to wrap the Python functionality in REST/HTTP calls. If the Java application is a server/webapp - executing within Tomcat, JBoss or another application container - you should simply wrap the Python script inside an exec call. See the Java Runtime and ProcessBuilder APIs for this purpose.
1.2
true
1
5,495
2018-05-05 21:56:31.143
Unintuitive solidity contract return values in ethereum python
I'm playing around with ethereum and python and I'm running into some weird behavior I can't make sense of. I'm having trouble understanding how return values work when calling a contract function with the python w3 client. Here's a minimal example which is confusing me in several different ways: Contract: pragma solidity ^0.4.0; contract test { function test(){ } function return_true() public returns (bool) { return true; } function return_address() public returns (address) { return 0x111111111111111111111111111111111111111; } } Python unittest code from web3 import Web3, EthereumTesterProvider from solc import compile_source from web3.contract import ConciseContract import unittest import os def get_contract_source(file_name): with open(file_name) as f: return f.read() class TestContract(unittest.TestCase): CONTRACT_FILE_PATH = "test.sol" DEFAULT_PROPOSAL_ADDRESS = "0x1111111111111111111111111111111111111111" def setUp(self): # copied from https://github.com/ethereum/web3.py/tree/1802e0f6c7871d921e6c5f6e43db6bf2ef06d8d1 with MIT licence # has slight modifications to work with this unittest contract_source_code = get_contract_source(self.CONTRACT_FILE_PATH) compiled_sol = compile_source(contract_source_code) # Compiled source code contract_interface = compiled_sol[':test'] # web3.py instance self.w3 = Web3(EthereumTesterProvider()) # Instantiate and deploy contract self.contract = self.w3.eth.contract(abi=contract_interface['abi'], bytecode=contract_interface['bin']) # Get transaction hash from deployed contract tx_hash = self.contract.constructor().transact({'from': self.w3.eth.accounts[0]}) # Get tx receipt to get contract address tx_receipt = self.w3.eth.getTransactionReceipt(tx_hash) self.contract_address = tx_receipt['contractAddress'] # Contract instance in concise mode abi = contract_interface['abi'] self.contract_instance = self.w3.eth.contract(address=self.contract_address, abi=abi, ContractFactoryClass=ConciseContract) def test_return_true_with_gas(self): # Fails with HexBytes('0xd302f7841b5d7c1b6dcff6fca0cd039666dbd0cba6e8827e72edb4d06bbab38f') != True self.assertEqual(True, self.contract_instance.return_true(transact={"from": self.w3.eth.accounts[0]})) def test_return_true_no_gas(self): # passes self.assertEqual(True, self.contract_instance.return_true()) def test_return_address(self): # fails with AssertionError: '0x1111111111111111111111111111111111111111' != '0x0111111111111111111111111111111111111111' self.assertEqual(self.DEFAULT_PROPOSAL_ADDRESS, self.contract_instance.return_address()) I have three methods performing tests on the functions in the contract. In one of them, a non-True value is returned and instead HexBytes are returned. In another, the contract functions returns an address constant but python sees a different value from what's expected. In yet another case I call the return_true contract function without gas and the True constant is seen by python. Why does calling return_true with transact={"from": self.w3.eth.accounts[0]} cause the return value of the function to be HexBytes(...)? Why does the address returned by return_address differ from what I expect? I think I have some sort of fundamental misunderstanding of how gas affects function calls.
The returned value is the transaction hash on the blockchain. When transacting (i.e., when using "transact" rather than "call") the blockchain gets modified, and the library you are using returns the transaction hash. During that process you must have paid ether in order to be able to modify the blockchain. However, operating in read-only mode costs no ether at all, so there is no need to specify gas. Discounting the "0x" at the beginning, ethereum addresses have a length of 40, but in your test you are using a 39-character-long address, so there is a missing a "1" there. Meaning, tests are correct, you have an error in your input. Offtopic, both return_true and return_address should be marked as view in Solidity, since they are not actually modifying the state. I'm pretty sure you get a warning in remix. Once you do that, there is no need to access both methods using "transact" and paying ether, and you can do it using "call" for free. EDIT Forgot to mention: in case you need to access the transaction hash after using transact you can do so calling the .hex() method on the returned HexBytes object. That'll give you the transaction hash as a string, which is usually way more useful than as a HexBytes. I hope it helps!
0.673066
false
1
5,496
2018-05-05 22:40:06.727
Colaboratory: How to install and use on local machine?
Google Colab is awesome to work with, but I wish I can run Colab Notebooks completely locally and offline, just like Jupyter notebooks served from the local? How do I do this? Is there a Colab package which I can install? EDIT: Some previous answers to the question seem to give methods to access Colab hosted by Google. But that's not what I'm looking for. My question is how do I pip install colab so I can run it locally like jupyter after pip install jupyter. Colab package doesn't seem to exist, so if I want it, what do I do to install it from the source?
Google Colab is a cloud computer; it only runs through the Internet. You can design your Python script and run it through Colab, and running it will use Google Colab's hardware: Google will allocate CPU, RAM, GPU etc. for your script. Your local computer just submits the Python code to Google Colab to run, and Google Colab returns the result to your local computer. Cloud computation is stronger than local computation if your local computer's hardware is limited. This question, asked by me, may inspire you: https://stackoverflow.com/questions/48879495/how-to-apply-googlecolab-stronger-cpu-and-more-ram/48922199#48922199
-0.496174
false
1
5,497
2018-05-06 09:13:56.887
Predicting binary classification
I have been self-learning machine learning lately, and I am now trying to solve a binary classification problem (i.e: one label which can either be true or false). I was representing this as a single column which can be 1 or 0 (true or false). Nonetheless, I was researching and read about how categorical variables can reduce the effectiveness of an algorithm, and how one should one-hot encode them or translate into a dummy variable thus ending with 2 labels (variable_true, variable_false). Which is the correct way to go about this? Should one predict a single variable with two possible values or 2 simultaneous variables with a fixed unique value? As an example, let's say we want to predict whether a person is a male or female: Should we have a single label Gender and predict 1 or 0 for that variable, or Gender_Male and Gender_Female?
It's basically the same. When talking about binary classification, you can think of a final layer for each model that adapts the output to the other model: e.g. if the model outputs 0 or 1, the final layer will translate it to a vector like [1,0] or [0,1], and vice versa by a threshold criterion, usually >= 0.5. A nice byproduct of 2 nodes in the final layer is the confidence level of the model in its predictions: [0.80, 0.20] and [0.55, 0.45] will both yield a [1,0] classification, but the first prediction has more confidence. This can also be extrapolated from a 1-node output by the distance of the output from the fringes 1 and 0, so 0.1 will be considered a 0 prediction with more confidence than 0.3.
1.2
true
1
5,498
2018-05-06 21:22:33.530
Does gRPC have the ability to add a maximum retry for call?
I haven't found any examples how to add a retry logic on some rpc call. Does gRPC have the ability to add a maximum retry for call? If so, is it a built-in function?
Retries are not a feature of gRPC Python at this time.
1.2
true
1
5,499
2018-05-07 02:06:48.980
Tensorflow How can I make a classifier from a CSV file using TensorFlow?
I need to create a classifier to identify some aphids. My project has two parts: one with computer vision (OpenCV), which I have already concluded, and a second part with machine learning using TensorFlow. But I have no idea how to do it. I have the data below, which was extracted using OpenCV; these are Hu moments (I believe that is the path I must follow), and each line contains the Hu moments of one aphid (insect). I have 500 more data lines like these that I wrote to a CSV file. How can I make a classifier from a CSV file using TensorFlow? HuMoments (in CSV file):
0.27356047,0.04652453,0.00084231,7.79486673,-1.4484489,-1.4727380,-1.3752532
0.27455502,0.04913969,3.91102408,1.35705980,3.08570234,2.71530819,-5.0277362
0.20708829,0.01563241,3.20141907,9.45211423,1.53559373,1.08038279,-5.8776765
0.23454372,0.02820523,5.91665789,6.96682467,1.02919203,7.58756583,-9.7028848
You can start with this tutorial, and try it first without changing anything; I strongly suggest this unless you are already familiar with Tensorflow so that you gain some familiarity with it. Now you can modify the input layer of this network to match the dimensions of the HuMoments. Next, you can give a numeric label to each type of aphid that you want to recognize, and adjust the size of the output layer to match them. You can now read the CSV file using python, and remove any text like "HuMoments". If your file has names of aphids, remove them and replace them with numerical class labels. Replace the training data of the code in the above link, with these data. Now you can train the network according to the description under the title "Train the Model". One more note. Unless it is essential to use Tensorflow to match your project requirements, I suggest using Keras. Keras is a higher level library that is much easier to learn than Tensorflow, and you have more sample code online.
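A hedged Keras sketch of that idea. It assumes the CSV gains an extra final column with an integer class label per aphid (the original rows shown above contain only the 7 Hu moment features), and the file name and layer sizes are assumptions:

```python
import numpy as np
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential

# assumption: "humoments.csv" has 7 feature columns followed by 1 integer label column
data = np.loadtxt("humoments.csv", delimiter=",")
X, y = data[:, :7], data[:, 7].astype(int)
num_classes = len(np.unique(y))

model = Sequential([
    Dense(32, activation="relu", input_shape=(7,)),
    Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=50, batch_size=16, validation_split=0.2)
```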
0
false
1
5,500
2018-05-07 23:22:30.577
How can you fill in an open dialog box in headless chrome in Python and Selenium?
I'm working with Python and Selenium to do some automation in the office, and I need to fill in an "upload file" dialog box (a windows "open" dialog box), which was invoked from a site using a headless chrome browser. Does anyone have any idea on how this could be done? If I wasn't using a headless browser, Pywinauto could be used with a line similar to the following, for example, but this doesn't appear to be an option in headless chrome: app.pane.open.ComboBox.Edit.type_keys(uploadfilename + "{ENTER}") Thank you in advance!
This turned out to not be possible. I ended up running the code on a VM and setting a registry key to allow automation to be run while the VM was minimized, disconnected, or otherwise not being interacted with by users.
0
false
1
5,501
2018-05-08 10:55:31.387
How to "compile" a python script to an "exe" file in a way it would be run as background process?
I know how to run a python script as a background process, but is there any way to compile a python script into exe file using pyinstaller or other tools so it could have no console or window ?
If you want to run it in the background without a console or window, you have to run it as a service.
0
false
1
5,502
2018-05-08 12:08:02.053
(Django) Running asynchronous server task continously in the background
I want to let a class run on my server, which holds a connected bluetooth socket and continuously checks for incoming data, which can then be interpreted. In principle the class structure would look like this: Interpreter: -> connect (initializes the class and starts the loop) -> loop (runs continuously in the background) -> disconnect (stops the loop) This class should be initiated at some point and then run continuously in the background; from time to time an http request would perhaps need data from the attributes of the class, but it should run on its own. I don't know how to accomplish this and don't want a full description of how to do it, but would like to know where I should start, i.e. what this kind of process is called.
Django on its own doesn't support any background processes - everything is request-response cycle based. I don't know if what you're trying to do even has a dedicated name, but it's most certainly possible. Just don't tie yourself to Django with this solution. The way I would accomplish this is to run a separate Python process that is responsible for keeping the connection to the device and, upon request, returns the required data in some way. The only difficulty you'd have is deciding how to communicate with that process from Django. Since, like I said, Django is request based, that secondary app could expose some data to your Django app - it could do any of the following (see the sketch below):
Expose a dead-simple HTTP REST API
Expose a UNIX socket that returns data immediately after connection
Continuously dump data to some file/database/mmap/queue that Django could read
1.2
true
1
5,503
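A standard-library sketch of the separate-process idea from the answer above: a background loop that keeps updating shared state (standing in for the Bluetooth reader) and a dead-simple HTTP endpoint the Django app could query. All names, the port, and the fake "reading" are illustrative assumptions.

import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

state = {'last_reading': None}          # data a Django view would ask for
lock = threading.Lock()

def reader_loop():
    """Stand-in for the continuous Bluetooth read loop."""
    while True:
        with lock:
            state['last_reading'] = time.time()   # pretend measurement
        time.sleep(1)

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        with lock:
            body = json.dumps(state).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    threading.Thread(target=reader_loop, daemon=True).start()
    HTTPServer(('127.0.0.1', 8001), Handler).serve_forever()

The Django view would then simply call this endpoint (for example with requests.get('http://127.0.0.1:8001/')) whenever an HTTP request needs the current data.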
2018-05-08 18:49:32.583
Replace character with a absolute value
When searching my db, all special characters work aside from the "+" - it thinks it's a space. Looking at the backend, which is Python, there are no issues with it receiving special chars, so I believe the problem is in the frontend, which is Javascript. What I need to do is replace "+" with "%2b". Is there a way for me to set this up so it has this value going forward?
You can use decodeURIComponent('%2b') or encodeURIComponent('+'); if you decode the response from the server, you get the + sign. If you want to replace all occurrences, just place the whole string inside the method and it decodes/encodes the whole string.
1.2
true
1
5,504
2018-05-08 21:02:22.097
How to deal with working on one project on different machines (paths)?
This is my first time coding a "project" (something more than solving exercises in single files). A number of my .py files have variables imported from a specific path. I also have a main "Run" file where I import things I've written in other files and execute the project as a whole. Recently I've started working on this project on several different machines (home, work, laptop etc) and have just started learning how to use GitHub. My question is, how do I deal with the fact that every time I open up my code on a different machine I need to go around changing all the paths to fit the new machine, and then change them back again when I'm home? I started writing a Run file for each location I work at so that my sys.path commands are ok with that machine, but it doesn't solve the problem of my other modules importing variables from specific paths that vary from machine to machine. Is there a way round this or is the problem in how I'm setting up the project itself? In an ideal world it would all just work without me having to change something before I run, depending on the machine I'm working on, but I don't know if that's possible. My current thoughts are whether there is some command I'm not aware of that can set variables inside a .py file from my main Run.py file - that way I can just have a run file for each machine. Any suggestions are gladly taken! Whatever it is, it must be better than commenting back in the correct file path each time I open it on a different machine!
You should always use relative paths, not the static (absolute) ones I assume you have now. Assuming you're in an index file and you need to access the images folder, you probably have something like /users/username/project/images/image.png. Instead you want something like ../images/image.png; this tells your index file to go back one folder, say to the root of the project, then proceed into the images folder, etc. Relative paths mean you create a path from where your file exists, and not an entire path from the ground up (see the sketch below).
0
false
1
5,505
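One way to apply the relative-path advice from the answer above in Python itself: resolve everything relative to the file's own location instead of hard-coding machine-specific absolute paths. The directory and file names here are examples only.

from pathlib import Path

# Directory containing this .py file, wherever the repo was cloned.
BASE_DIR = Path(__file__).resolve().parent

# Build paths relative to the project, e.g. <project>/data/input.csv
DATA_FILE = BASE_DIR / 'data' / 'input.csv'

with open(DATA_FILE) as f:
    contents = f.read()

Because BASE_DIR is computed at runtime, the same code works on every machine without editing paths.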
2018-05-10 01:53:40.577
Document similarity in production environment
We are having n number of documents. Upon submission of new document by user, our goal is to inform him about possible duplication of existing document (just like stackoverflow suggests questions may already have answer). In our system, new document is uploaded every minute and mostly about the same topic (where there are more chance of duplication). Our current implementation includes gensim doc2vec model trained on documents (tagged with unique document ids). We infer vector for new document and find most_similar docs (ids) with it. Reason behind choosing doc2vec model is that we wanted to take advantage of semantics to improve results. As far as we know, it does not support online training, so we might have to schedule a cron or something that periodically updates the model. But scheduling cron will be disadvantageous as documents come in a burst. User may upload duplicates while model is not yet trained for new data. Also given huge amount of data, training time will be higher. So i would like to know how such cases are handled in big companies. Are there any better alternative? or better algorithm for such problem?
You don't have to take the old model down to start training a new model, so despite any training lags, or new-document bursts, you'll always have a live model doing the best it can. Depending on how much the document space changes over time, you might find retraining to have a negligible benefit. (One good model, built on a large historical record, might remain fine for inferring new vectors indefinitely.) Note that tuning inference to use more steps (especially for short documents), or a lower starting alpha (more like the training default of 0.025) may give better results. If word-vectors are available, there is also the "Word Mover's Distance" (WMD) calculation of document similarity, which might be even better at identifying close duplicates. Note, though, it can be quite expensive to calculate – you might want to do it only against a subset of likely candidates, or have to add many parallel processors, to do it in bulk. There's another newer distance metric called 'soft cosine similarity' (available in recent gensim) that's somewhere between simple vector-to-vector cosine-similarity and full WMD in its complexity, that may be worth trying. To the extent the vocabulary hasn't expanded, you can load an old Doc2Vec model, and continue to train() it – and starting from an already working model may help you achieve similar results with fewer passes. But note: it currently doesn't support learning any new words, and the safest practice is to re-train with a mix of all known examples interleaved. (If you only train on incremental new examples, the model may lose a balanced understanding of the older documents that aren't re-presented.) (If your chief concern is documents that duplicate exact runs-of-words, rather than just similar fuzzy topics, you might look at mixing-in other techniques, such as breaking a document into a bag-of-character-ngrams, or 'shingleprinting' as is common in plagiarism-detection applications.) (See the inference sketch below.)
1.2
true
1
5,506
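A small sketch of the inference step discussed in the answer above, assuming a gensim 3.x-era Doc2Vec model (parameter names such as steps have changed in later gensim releases); the model path, token list and similarity threshold are illustrative assumptions.

from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load('doc2vec.model')            # assumed path to the trained model

new_doc_tokens = "text of the newly uploaded document".split()

# More inference steps and a lower starting alpha often give steadier vectors.
vector = model.infer_vector(new_doc_tokens, steps=50, alpha=0.025)

# Candidate duplicates: nearest training documents by cosine similarity.
candidates = model.docvecs.most_similar([vector], topn=10)
for doc_id, similarity in candidates:
    if similarity > 0.9:                         # threshold is something to tune
        print('possible duplicate:', doc_id, similarity)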
2018-05-10 02:52:36.463
Apache Airflow: Gunicorn Configuration File Not Being Read?
I'm trying to run Apache Airflow's webserver from a virtualenv on a Redhat machine, with some configuration options from a Gunicorn config file. Gunicorn and Airflow are both installed in the virtualenv. The command airflow webserver starts Airflow's webserver and the Gunicorn server. The config file has options to make sure Gunicorn uses/accepts TLSv1.2 only, as well as a list of ciphers to use. The Gunicorn config file is gunicorn.py. This file is referenced through an environment variable GUNICORN_CMD_ARGS="--config=/path/to/gunicorn.py ..." in .bashrc. This variable also sets a couple of other variables in addition to --config. However, when I run the airflow webserver command, the options in GUNICORN_CMD_ARGS are never applied. Seeing as how Gunicorn is not called from command line, but instead by Airflow, I'm assuming this is why the GUNICORN_CMD_ARGS environment variable is not read, but I'm not sure and I'm new to both technologies... TL;DR: Is there another way to set up Gunicorn to automatically reference a config file, without the GUNICORN_CMD_ARGS environment variable? Here's what I'm using: gunicorn 19.8.1 apache-airflow 1.9.0 python 2.7.5
When Gunicorn is called by Airflow, it uses ~\airflow\www\gunicorn_config.py as its config file.
1.2
true
1
5,507
2018-05-10 10:48:13.883
How to make a Python Visualization as service | Integrate with website | specially sagemaker
I am from R background where we can use Plumber kind tool which provide visualization/graph as Image via end points so we can integrate in our Java application. Now I want to integrate my Python/Juypter visualization graph with my Java application but not sure how to host it and make it as endpoint. Right now I using AWS sagemaker to host Juypter notebook
Amazon SageMaker is a set of different services for data scientists. You are using the notebook service, which is meant for developing ML models in an interactive way. The hosting service in SageMaker creates an endpoint based on a trained model. You can call this endpoint with the invoke-endpoint API call for real time inference. It seems that you are looking for a different type of hosting that is more suitable for serving HTML media rich pages, and that doesn't fit into the hosting model of SageMaker. A combination of EC2 instances, with pre-built AMI or installation scripts, Cognito for authentication, S3 and EBS for object and block storage, and similar building blocks should give you a scalable and cost effective solution.
1.2
true
1
5,508
2018-05-11 04:04:54.463
Python - Enable TLS1.2 on OSX
I have a virtualenv environment running python 3.5 Today, when I booted up my MacBook, I found myself unable to install python packages for my Django project. I get the following error: Could not fetch URL <package URL>: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:646) - skipping I gather that TLS 1.0 has been discontinued, but from what I understand, newer versions of Python should be using TLS1.2, correct? Even outside of my environment, running pip3 trips the same error. I've updated to the latest version of Sierra and have updated Xcode as well. Does anyone know how to resolve this?
Here is the fix: curl https://bootstrap.pypa.io/get-pip.py | python Execute from within the appropriate virtual environment.
1.2
true
1
5,509
2018-05-11 21:19:50.853
python Ubuntu: too many open files [eventpoll]
Basically, it is a multi-threaded crawler program, which uses requests mainly. After running the program for a few hours, I keep getting the error "Too many open files". By running: lsof -p pid, I saw a huge number of entries like below: python 75452 xxx 396u a_inode 0,11 0 8121 [eventpoll] I cannot figure out what it is and how to trace back to the problem. Previously, I tried to have it running in Windows and never seen this error. Any idea how to continue investigating this issue? thanks.
I have figured out that it is caused by Gevent. After replacing gevent with multi-thread, everything is just OK. However, I still don't know what's wrong with gevent, which keeps opening new files(eventpoll).
0
false
1
5,510
2018-05-11 22:56:26.280
How to prepare Python Selenium project to be used on client's machine?
I've recently started freelance python programming, and was hired to write a script that scraped certain info online (nothing nefarious, just checking how often keywords appear in search results). I wrote this script with Selenium, and now that it's done, I'm not quite sure how to prepare it to run on the client's machine. Selenium requires a path to your chromedriver file. Am I just going to have to compile the py file as an exe and accept the path to his chromedriver as an argument, then show him how to download chromedriver and how to write the path? EDIT: Just actually had a thought while typing this out. Would it work if I sent the client a folder including a chromedriver.exe inside of said folder, so the path was always consistent?
Option 1) Deliver a Docker image, if the customer does not need to watch the browser while it runs and can set up a Docker environment. The Docker image should include the following items:
Python
Dependencies for running your script, like selenium
A headless Chrome browser and a compatible chromedriver binary
Your script - put it in GitHub and fetch it when the Docker container starts, so the customer can always get your latest code
The benefits of this approach: you only need to focus on the script (bug fixes and improvements) after delivery, and the customer only needs to execute the same docker command.
Option 2) Deliver a shell script that does most of the work automatically. It should accomplish the following:
Install Python (or leave it for the customer to complete)
Install the Selenium library and anything else needed
Install the latest chromedriver binary (which is backward compatible)
Fetch your script from a code repo like GitHub, or simply deliver it as a packaged folder
Run your script
Option 3) Deliver your script and a user guide; the customer has to do much of the work themselves. You can supply a config file along with your script so the customer can specify the chromedriver binary path after they download it. Your script reads the path from this file, which is better than entering it on the command line every time.
0
false
1
5,511
2018-05-12 09:01:11.140
Using Hydrogen with Python 3
The default version of python installed on my mac is python 2. I also have python 3 installed but can't install python 2. I'd like to configure Hyrdrogen on Atom to run my script using python 3 instead. Does anybody know how to do this?
I used jupyter kernelspec list and I found 2 kernels available, one for python2 and another for python3. So I pasted the python3 kernel folder into the same directory where the python2 kernel is installed and removed the python2 kernel using 'rm -rf python2'.
0
false
1
5,512
2018-05-12 09:48:08.917
Python 3 install location
I am using Ubuntu 16.04 . Where is the python 3 installation directory ? Running "whereis python3" in terminal gives me: python3: /usr/bin/python3.5m-config /usr/bin/python3 /usr/bin/python3.5m /usr/bin/python3.5-config /usr/bin/python3.5 /usr/lib/python3 /usr/lib/python3.5 /etc/python3 /etc/python3.5 /usr/local/lib/python3.5 /usr/include/python3.5m /usr/include/python3.5 /usr/share/python3 /usr/share/man/man1/python3.1.gz Also where is the intrepreter i.e the python 3 executable ? And how would I add this path to Pycharm ?
you can try this : which python3
1.2
true
1
5,513
2018-05-12 10:36:33.093
How to continue to train a model with new classes and data?
I have trained a model successfully and now I want to continue training it with new data. If the new data has the same number of classes it works fine, but with more classes than initially it gives me the error: ValueError: Shapes (?, 14) and (?, 21) are not compatible How can I dynamically increase the number of classes in my trained model, or how can I make the model accept a smaller number of classes? Do I need to save the classes in a pickle file?
The best thing to do is to train your network from scratch with the output layer adjusted to the new output class size. If retraining is an issue, then keep the trained network as it is and only drop the last layer. Add a new layer with the proper output size, initialized to random weights, and then fine-tune (train) the entire network (see the sketch below).
0
false
1
5,514
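A hedged Keras sketch of the "drop the last layer, add a new output head" option from the answer above. The model file name is an assumption, and the new class count of 21 is only a guess taken from the shape in the error message.

from keras.models import Model, load_model
from keras.layers import Dense

old_model = load_model('old_model.h5')          # assumed path to the previously trained model
num_new_classes = 21                            # assumed new output size

# Reuse everything up to (but not including) the old output layer.
penultimate = old_model.layers[-2].output
new_output = Dense(num_new_classes, activation='softmax', name='new_output')(penultimate)
new_model = Model(inputs=old_model.input, outputs=new_output)

# Optionally freeze the reused layers and train only the new head at first.
for layer in new_model.layers[:-1]:
    layer.trainable = False

new_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# new_model.fit(X_new, y_new, epochs=10)        # y_new must be one-hot with num_new_classes columns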
2018-05-13 11:51:03.607
transfer files between local machine and remote server
I want to set up access from a remote ubuntu server to my local machine, because I have multiple files on this machine and I want to transfer them periodically (every minute) to the server. How can I do that using python?
You can easily transfer files between local and remote machines, or between two remote servers. If both servers are Linux-based and you need to transfer multiple files and folders with a single command, follow the steps below:
The user on one server must have access to the corresponding directory on the other server where you want to transfer the files. You might need to create a policy or group, assign the servers you want to access to that group, and assign the user to that group so the two servers can talk to each other.
Run the following scp command: scp [options] username1@source_host:directory1/filename1 username2@destination_host:directory2/filename2
0
false
1
5,515
2018-05-13 19:28:44.063
Need help using Keras' model.predict
My goal is to make an easy neural network fit by providing 2 verticies of a certain Graph and 1 if there's a link or 0 if there's none. I fit my model, it gets loss of about 0.40, accuracy of about 83% during fitting. I then evaluate the model by providing a batch of all positive samples and several batches of negative ones (utilising random.sample). My model gets loss of ~0.35 and 1.0 accuracy for positive samples and ~0.46 loss 0.68 accuracy for negative ones. My understanding of neural networks if extremely limited, but to my understanding the above means it theoretically always is right when it outputs 0 when there's no link, but can sometimes output 1 even if there is none. Now for my actual problem: I try to "reconstruct" the original graph with my neural network via model.predict. The problem is I don't understand what the predict output means. At first I assumed values above 0.5 mean 1, else 0. But if that's the case the model doesn't even come close to rebuilding the original. I get that it won't be perfect, but it simply returns value above 0.5 for random link candidates. Can someone explain to me how exactly model.predict works and how to properly use it to rebuild my graph?
The model that you trained is not directly optimized w.r.t. the graph reconstruction. Without loss of generality, for an N-node graph you need to predict N choose 2 links, and it may be reasonable to assume that the true values of most of these links are 0. Looking at your model accuracy on the 0-class and 1-class, it is clear that your model is prone to predicting the 1-class, assuming your training data is balanced. Therefore, your reconstructed graph contains many false-alarm links, and this is the exact reason why the reconstruction is poor. If it is possible to retrain the model, I suggest you do so with more negative samples. If not, you need to consider applying some post-processing. For example, instead of finding a threshold to decide which two nodes have a link, use the raw predicted link probabilities to form a node-to-node linkage matrix, and apply something like a minimum spanning tree to further decide which links are appropriate (see the sketch below).
0
false
1
5,516
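A small sketch of the post-processing idea from the answer above: arranging per-pair predictions into a node-to-node probability matrix and then choosing links either by threshold or by keeping only the strongest edges. The random numbers stand in for model.predict() output; everything here is illustrative.

import numpy as np

# Hypothetical: model.predict() was run on every candidate pair (i, j), i < j,
# and returned one link probability per pair.
n_nodes = 5
pairs = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
probs = np.random.rand(len(pairs))               # stand-in for model.predict(...) output

# Node-to-node link-probability matrix.
link_prob = np.zeros((n_nodes, n_nodes))
for (i, j), p in zip(pairs, probs):
    link_prob[i, j] = link_prob[j, i] = p

# Option 1: the simple >= 0.5 threshold (what the question tried).
links = [(i, j) for (i, j), p in zip(pairs, probs) if p >= 0.5]

# Option 2: keep only the strongest edges, e.g. the top n_nodes - 1 most probable pairs,
# or feed -link_prob into a spanning-tree routine as the answer suggests.
top_k = sorted(zip(pairs, probs), key=lambda x: -x[1])[:n_nodes - 1]
print(links)
print(top_k)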
2018-05-14 05:48:54.863
How to used a tensor in different graphs?
I build two graphs in my code, graph1 and graph2. There is a tensor, named embedding, in graph1. I tried to use it in graph2 by using get_variable, but I get the error that the tensor must be from the same graph as the Tensor. I found that this error occurs because they are in different graphs. So how can I use a tensor from graph1 in graph2?
Expanding on @jdehesa's comment: embedding could be trained initially, saved from graph1 and restored to graph2 using TensorFlow's saver/restore tools. For this to work you should assign embedding to a name/variable scope in graph1 and reuse the scope in graph2 (see the sketch below).
0
false
1
5,517
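A TensorFlow 1.x-style sketch of the save/restore approach from the answer above. The variable shape, scope name and checkpoint path are illustrative assumptions.

import tensorflow as tf

# Graph 1: create, train (not shown) and save the embedding variable.
graph1 = tf.Graph()
with graph1.as_default():
    with tf.variable_scope('shared'):
        embedding = tf.get_variable('embedding', shape=[1000, 128])
    saver1 = tf.train.Saver()
    with tf.Session(graph=graph1) as sess:
        sess.run(tf.global_variables_initializer())
        # ... training in graph1 would happen here ...
        saver1.save(sess, './model.ckpt')

# Graph 2: declare a variable with the same scope/name and restore its value.
graph2 = tf.Graph()
with graph2.as_default():
    with tf.variable_scope('shared'):
        embedding2 = tf.get_variable('embedding', shape=[1000, 128])
    saver2 = tf.train.Saver()
    with tf.Session(graph=graph2) as sess:
        saver2.restore(sess, './model.ckpt')
        # embedding2 now holds the values saved from graph1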
2018-05-14 18:25:36.107
Best practice for rollbacking a multi-purpose python script
I'm sorry if the title is a little ambiguous. Let me explain what I mean by that : I have a python script that does a few things : creates a row in a MySQL table, inserts a json document to a MongoDB, Updates stuff in a local file, and some other stuff, mostly related to databases. Thing is, I want the whole operation to be atomic. Means - If anything during the process I mentioned failed, I want to rollback everything I did. I thought of implementing a rollback function for every 'create' function I have. But I'd love to hear your opinion for how to make some sort of a linked list of operations, in which if any of the nodes failed, I want to discard all the changes done in the process. How would you design such a thing? Is there a library in Python for such things?
You should implement every action to be reversible, and the reverse action to be executable even if the original action has failed. Then, if you have any failures, you execute every reversal (see the sketch below).
0
false
1
5,518
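A minimal sketch of the pattern the answer above describes: pair every step with an undo, and on failure run the undos of the steps already completed. The step functions in the usage comment (insert_mysql_row, delete_mongo_doc, etc.) are hypothetical names, not an existing library.

def run_with_rollback(steps):
    """steps is a list of (action, undo) pairs; undos run in reverse order on failure."""
    done = []
    try:
        for action, undo in steps:
            action()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            try:
                undo()          # best effort: keep rolling back even if one undo fails
            except Exception:
                pass
        raise

# Hypothetical usage with the operations from the question:
# run_with_rollback([
#     (insert_mysql_row,  delete_mysql_row),
#     (insert_mongo_doc,  delete_mongo_doc),
#     (update_local_file, restore_local_file),
# ])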
2018-05-15 09:13:53.017
Why and how would you not use a python GUI framework and make one yourself like many applications including Blender do?
I have looked at a few python GUI frameworks like PyQt, wxPython and Kivy, but have noticed there aren’t many popular (used widely) python applications, from what I can find, that use them. Blender, which is pretty popular, doesn’t seem to use them. How would one go about doing what they did/what did they do and what are the potential benefits over using the previously mentioned frameworks?
I would say that python isn't a popular choice when it comes to making a GUI application, which is why you don't find many examples of using the GUI frameworks. tkinter, which is part of the python development is another option for GUI's. Blender isn't really a good example as it isn't a GUI framework, it is a 3D application that integrates python as a means for users to manipulate it's data. It was started over 25 years ago when the choice of cross platform frameworks was limited, so making their own was an easier choice to make. Python support was added to blender about 13 years ago. One of the factors in blender's choice was to make each platform look identical. That goes against most frameworks that aim to implement a native look and feel for each target platform. So you make your own framework when the work of starting your own framework seems easier than adjusting an existing framework to your needs, or the existing frameworks all fail to meet your needs, one of those needs may be licensing with Qt and wxWidgets both available under (L)GPL, while Qt also sells non-GPL licensing. The benefit to using an existing framework is the amount of work that is already done, you will find there is more than you first think in a GUI framework, especially when you start supporting multiple operating systems.
1.2
true
1
5,519
2018-05-15 19:46:31.853
Installing Kivy to an alternate location
I have Python version 3.5 which is located here C:\Program Files(x86)\Microsoft Visual Studio\Shared\Python35_64 If I install kivy and its components and add-ons with this command: python -m pip install kivy, then it does not install in the place that I need. I want to install kivy in this location C:\Program Files(x86)\ Microsoft Visual Studio\Shared\Python35_64\Lib\site-packages, how can I do this? I did not understand how to do this from the explanations on the official website.
So it turned out that I solved my problem myself again: I have both Python 3.5 and Python 3.6 installed on my PC, Kivy was installed into Python 3.6 by default, and my development environment was using Python 3.5. I switched it to 3.6 and it all worked.
0.386912
false
1
5,520
2018-05-16 07:28:11.157
Portable application: s3 and Google cloud storage
I want to write an application which is portable. With "portable" I mean that it can be used to access these storages: amazon s3 google cloud storage Eucalyptus Storage The software should be developed using Python. I am unsure how to start, since I could not find a library which supports all three storages.
You can use boto3 for accessing any Amazon service (see the sketch below).
0.386912
false
1
5,521
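Following up on the answer above: a hedged sketch of one portable upload function per backend, using boto3 for S3 and the google-cloud-storage client for Google Cloud Storage. Treating Eucalyptus as an S3-compatible endpoint via endpoint_url is my assumption, not something stated in the original answer; bucket names and paths are examples.

import boto3
from google.cloud import storage as gcs

def upload_s3(bucket, key, path, endpoint_url=None):
    # endpoint_url can point at an S3-compatible service (e.g. a Eucalyptus/Walrus endpoint).
    s3 = boto3.client('s3', endpoint_url=endpoint_url)
    s3.upload_file(path, bucket, key)

def upload_gcs(bucket, key, path):
    client = gcs.Client()
    client.bucket(bucket).blob(key).upload_from_filename(path)

# upload_s3('my-bucket', 'backups/file.txt', '/tmp/file.txt')
# upload_gcs('my-bucket', 'backups/file.txt', '/tmp/file.txt')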
2018-05-16 14:25:25.257
How to access created nodes in a mininet topology?
I am new in mininet. I created a custom topology with 2 linear switches and 4 nodes. I need to write a python module accessing each nodes in that topology and do something but I don't know how. Any idea please?
try the following: s1.cmd('ifconfig s1 192.168.1.0') h1.cmd('ifconfig h1 192.168.2.0')
1.2
true
1
5,522
2018-05-16 16:07:12.060
Real width of detected face
I've been researching like forever, but couldn't find an answer. I'm using OpenCV to detect faces, now I want to calculate the distance to the face. When I detect a face, I get a matofrect (which I can visualize with a rectangle). Pretty clear so far. But now: how do I get the width of the rectangle in the real world? There has to be some average values that represent the width of the human face. If I have that value (in inch, mm or whatever), I can calculate the distance using real width, pixel width and focal length. Please, can anyone help me? Note: I'm comparing the "simple" rectangle solution against a Facemark based distance measuring solution, so no landmark based answers. I just need the damn average face / matofrectwidth :D Thank you so much!
The rectangle from OpenCV's face detection is slightly larger than the face itself, so an average face width may not be helpful. Instead, just take pictures of a face at different distances from the camera and record the distance from the camera along with the pixel width of the detected face for each distance. After plotting the two variables on a graph, use a trendline to come up with a predictive model (see the sketch below).
0.673066
false
1
5,523
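A small NumPy sketch of the calibration idea in the answer above. The pixel widths and distances are made-up calibration measurements; fitting distance against 1/width is one simple choice of trendline (width times distance is roughly constant for a pinhole camera), not the only possible model.

import numpy as np

# Calibration data: pixel width of the detected face rectangle at known distances.
pixel_widths = np.array([260, 180, 130, 100, 80])
distances_cm = np.array([30, 45, 60, 80, 100])

# Fit distance as a linear function of 1/width.
coeffs = np.polyfit(1.0 / pixel_widths, distances_cm, deg=1)
predict_distance = np.poly1d(coeffs)

# Later, for a new detection of e.g. 150 px wide:
print(predict_distance(1.0 / 150))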
2018-05-16 17:31:21.103
Split a PDF file into two columns along a certain measurement in Python?
I have a ton of PDF files that are laid out in two columns. When I use PyPDF2 to extract the text, it reads the entire first column (which are like headers) and the entire second column. This makes splitting on the headers impossible. It's laid out in two columns: ____ __________ |Col1 Col2 | |Col1 Col2 | |Col1 Col2 | |Col1 Col2 | ____ __________ I think I need to split the PDF in half along the edge of the column, then read each column left to right. It's 2.26 inches width on an 8x11 PDF. I can also get the coordinates using PyPDF2. Does anyone have any experience doing this or know how I would do it? Edit: When I extractText using PyPDF2, the ouput has no spaces: Col1Col1Col1Col1Col2Col2Col2Col2
Using pdfminer.six, I successfully read the text from left to right with spaces in between (see the sketch below).
0.386912
false
1
5,524
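A minimal sketch of using pdfminer.six as the answer above suggests, assuming a recent release that provides the high-level extract_text helper; the file name is an example.

from pdfminer.high_level import extract_text

text = extract_text('statement.pdf')   # reads the page text left-to-right, keeping spaces
for line in text.splitlines():
    print(line)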
2018-05-17 16:34:01.880
how to make a copy of an sqlalchemy object (data only)
I get a db record as an sqlalchemy object and I need to consult the original values during some calculation process, so I need the original record till the end. However, the current code modifies the object as it goes and I don't want to refactor it too much at the moment. How can I make a copy of the original data? The deepcopy seems to create a problem, as expected. I definitely prefer not to copy all the fields manually, as someone will forget to update this code when modifying the db object.
You have many options here to copy your object. Two that I can think of are:
Using __dict__, which gives you the dictionary of the original sqlalchemy object; you can iterate through all the attributes using the .keys() function (see the sketch below).
You can also use the inspect module and getmembers() to get all the attributes defined, and set the required attributes using the setattr() method.
0
false
1
5,525
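A small sketch of the __dict__ option from the answer above: take a plain-dict snapshot of the loaded column values before the calculation starts mutating the object. The attribute names in the usage comment are hypothetical.

def snapshot(obj):
    """Plain-dict copy of an SQLAlchemy object's loaded attribute values."""
    return {
        key: value
        for key, value in obj.__dict__.items()
        if key != '_sa_instance_state'       # skip SQLAlchemy's internal bookkeeping entry
    }

# original = snapshot(record)    # keep this around during the calculation
# record.amount = new_value      # mutate the live object as the current code does
# print(original['amount'])      # the original value is still available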
2018-05-18 06:14:11.447
basic serial port contention
I am using a pi3 which talks to an arduino via serial0 (ttyAMA0) It all works fine. I can talk to it with minicom, bidirectionally. However, a python based server also wants this port. I notice when minicom is running, the python code can write to serial0 but not read from it. At least minicom reports the python server has sent a message. Can someone let me know how this serial port handles contention, if at all? I notice running two minicom session to the same serial port wrecks both sessions. Is it possible to have multiple writers and readers if they are coordinated not to act at the same time? Or can there be multiple readers (several terms running cat /dev/serial0) I have googled around for answers but most hits are about using multiple serial ports or getting a serial port to work at all. Cheers
Since two minicoms can attempt to use the port and there are collisions, minicom must not set an advisory lock on local writes to the serial port. I guess that the first app to read a received remote serial message clears it, since serial doesn't buffer. When a local app writes to serial, minicom displays this and it gets sent. I'm going to make this my assumed summary:
When a local process puts a message on the serial port, everyone can see it and it gets sent to the remote end.
When a remote message arrives on serial, the first local process to read it gets it; the others can't see it.
For some reason, minicom has privilege over arriving messages. This is why two minicoms break the message.
0.386912
false
1
5,526
2018-05-18 14:53:02.983
Effective passing of large data to python 3 functions
I am coming from a C++ programming background and am wondering if there is a pass by reference equivalent in python. The reason I am asking is that I am passing very large arrays into different functions and want to know how to do it in a way that does not waste time or memory by having copy the array to a new temporary variable each time I pass it. It would also be nice if, like in C++, changes I make to the array would persist outside of the function. Thanks in advance, Jared
Python handles function arguments in the same manner as most common languages: Java, JavaScript, C (pointers), C++ (pointers, references). All objects are allocated on the heap. Variables are always a reference/pointer to the object. The value that is copied is the pointer; the object itself remains on the heap and is not copied (see the sketch below).
0.999329
false
1
5,527
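A tiny demonstration of the answer above: only the reference is copied when a list is passed, so in-place changes persist outside the function and no copy of the data is made; rebinding the parameter name, by contrast, does not affect the caller.

def scale_in_place(values, factor):
    # Mutates the caller's list; no copy of the data is made.
    for i in range(len(values)):
        values[i] *= factor

data = [1, 2, 3]
scale_in_place(data, 10)
print(data)        # [10, 20, 30] -- the change persists outside the function

def rebind(values):
    values = [0, 0, 0]   # rebinding the local name does NOT affect the caller

rebind(data)
print(data)        # still [10, 20, 30]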
2018-05-19 10:36:50.560
How to find symbolic derivative using python without sympy?
I need to make a program which will differentiate a function, but I have no idea how to do this. I've only made a part which transforms the regular expression(x ^ 2 + 2 for example ) into reverse polish notation. Can anybody help me with creating a program which will a find symbolic derivatives of expression with + * / - ^
Hint: Use a recursive routine. If an operation is unary plus or minus, leave the plus or minus sign alone and continue with the operand. (That means, recursively call the derivative routine on the operand.) If an operation is addition or subtraction, leave the plus or minus sign alone and recursively find the derivative of each operand. If the operation is multiplication, use the product rule. If the operation is division, use the quotient rule. If the operation is exponentiation, use the generalized power rule. (Do you know that rule, for u ^ v? It is not given in most first-year calculus books but is easy to find using logarithmic differentiation.) (Now that you have clarified in a comment that there will be no variable in the exponent, you can use the regular power rule (u^n)' = n * u^(n-1) * u' where n is a constant.) And at the base of the recursion, the derivative of x is 1 and the derivative of a constant is zero. The result of such an algorithm would be very un-simplified but it would meet your stated requirements. Since this algorithm looks at an operation then looks at the operands, having the expression in Polish notation may be simpler than reverse Polish or "regular expression." But you could still do it for the expression in those forms. If you need more detail, show us more of your work. (A small recursive sketch follows below.)
1.2
true
1
5,528
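A minimal sketch of the recursive scheme from the hint above. It operates on a tuple-based expression tree rather than the asker's reverse Polish notation (that representation is my assumption), handles the binary operators + - * / ^ with a constant exponent only, and returns a deliberately unsimplified tree.

def d(expr, var='x'):
    """Derivative of an expression tree: numbers, the variable name, or (op, left, right)."""
    if isinstance(expr, (int, float)):
        return 0                                   # derivative of a constant
    if expr == var:
        return 1                                   # derivative of x
    op, u, v = expr
    if op == '+':
        return ('+', d(u, var), d(v, var))
    if op == '-':
        return ('-', d(u, var), d(v, var))
    if op == '*':                                  # product rule
        return ('+', ('*', d(u, var), v), ('*', u, d(v, var)))
    if op == '/':                                  # quotient rule
        return ('/', ('-', ('*', d(u, var), v), ('*', u, d(v, var))), ('^', v, 2))
    if op == '^':                                  # power rule, constant exponent only
        return ('*', ('*', v, ('^', u, ('-', v, 1))), d(u, var))
    raise ValueError('unknown operator: %r' % op)

# x^2 + 2  ->  unsimplified derivative tree
print(d(('+', ('^', 'x', 2), 2)))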
2018-05-19 21:46:47.500
how to get the distance of sequence of nodes in pgr_dijkstra pgrouting?
I have an array of integers (nodes or destinations), i.e. array[2,3,4,5,6,8], that need to be visited in the given sequence. What I want is to get the shortest distance using pgr_dijkstra. But pgr_dijkstra finds the shortest path for two points, therefore I need to find the distance of each pair using pgr_dijkstra and add all the distances to get the total distance. The pairs will be like 2,3 3,4 4,5 5,6 6,8. Is there any way to define a function that takes this array and finds the shortest path using pgr_dijkstra? Query is:
for 1st pair (2,3): SELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',2,3, false);
for 2nd pair (3,4): SELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',3,4, false)
for 3rd pair (4,5): SELECT * FROM pgr_dijkstra('SELECT gid as id,source, target, rcost_len AS cost FROM finalroads',4,5, false);
NOTE: The array size is not fixed, it can be different. Is there any way to automate this in postgres sql, maybe using a loop etc? Please let me know how to do it. Thank you.
If you want all-pairs distances then use select * from pgr_apspJohnson('SELECT gid as id, source, target, rcost_len AS cost FROM finalroads')
0
false
1
5,529
2018-05-21 10:54:03.443
Using aws lambda to render an html page in aws lex chatbot
I have built a chatbot using AWS Lex and Lambda. I have a use case wherein a user enters a question (for example: what are the sales of an item in a particular region). I want that, once this question is asked, an HTML form/pop-up appears that asks the user to select the value of region and item from dropdown menus, fills the slots of the question with the values selected by the user, and then returns a response. Can someone guide me on how this can be achieved? Thanks.
Lex has something called response cards, where you can add all the possible values. These are called prompts. The user can simply select his/her choice and the slot gets filled. Lex response cards work in Facebook and Slack. In the case of a custom channel, you will have to develop the UI components yourself.
0
false
1
5,530
2018-05-22 07:22:36.627
How to install image library in python 3.6.4 in windows 7?
I am new to Python and I am using Python 3.6.4. I also use PyCharm editor to write all my code. Please let me know how can I install Image library in Windows 7 and would it work in PyCharm too.
From PyCharm, go to Settings -> Project Interpreter. Click on the + button in the top right corner and you will get a pop-up window of available packages. Then search for the Pillow / PIL image python packages. Then click on Install Package to install them.
1.2
true
1
5,531
2018-05-23 00:28:20.643
I have downloaded eclipse and pydev, but I am unsure how to get install django
I am attempting to learn how to create a website using python. I have been going off the advice of various websites including stackoverflow. Currently I can run code in eclipse using pydev, but I need to install django. I have no idea how to do this and I don't know who to ask or where to begin. Please help
I would recommend the following:
Install virtualenv: $ pip install virtualenv
Create a new virtual environment: $ virtualenv django-venv
Activate the virtual environment and use it: $ source django-venv/bin/activate
And install django as expected: (django-venv)$ pip install django==1.11.13 (replace with the django version as needed)
0
false
1
5,532
2018-05-23 14:46:15.693
Proper way of streaming JSON with Django
i have a webservice which gets user requests and produces (multiple) solution(s) to this request. I want to return a solution as soon as possible, and send the remaining solutions when they are ready. In order to do this, I thought about using Django's Http stream response. Unfortunately, I am not sure if this is the most adequate way of doing so, because of the problem I will describe below. I have a Django view, which receives a query and answers with a stream response. This stream returns the data returned by a generator, which is always a python dictionary. The problem is that upon the second return action of the stream, the Json content breaks. If the python dictionary, which serves as a response, is something like {key: val}, after the second yield the returned response is {key: val} {key: val}, which is not valid Json. Any suggestions on how to return multiple Json objects at different moments in time?
Try serializing each solution separately and yielding it on its own line (newline-delimited JSON): import json, then yield json.dumps({key: val}) + "\n" for each solution, so every line the client receives is a valid JSON object it can parse on its own.
0
false
1
5,533
2018-05-23 15:52:41.077
pycharm won't let me run from desktop
I have been using pycharm for a while now, and I have to say that I am a real fan of it's features. I have one issue though, when I try to run a .py file from either the desktop or command prompt, I am instead prompted to use the run feature in pycharm. I consider this an issue because if I try to create a program for someone who doesn't know how to code, they would probably be scared off by opening pycharm. I don't, however, want to uninstall pycharm because it is so useful when writing code. Does anyone have any Ideas for me? By the way, I am using a dell Inspiron 15 7000 Gaming laptop with the current version of Windows 10 installed.
You can try running the direct path of the file, I'm not sure what you have tried. If you wanted to run it as I just described you would do: py C:\~AppData\Local\Programs\Python\Python36-32\hello.py If you move the file into your current working directory when programming, you should just be able to run py hello.py.
1.2
true
1
5,534
2018-05-23 20:49:52.333
Calling database handler class in a python thread
I'm programming a bit of server code and the MQTT side of it runs in it's own thread using the threading module which works great and no issues but now I'm wondering how to proceed. I have two MariaDB databases, one of them is local and the other is remote (There is a good and niche reason for this.) and I'm writing a class which handles the databases. This class will start new threads of classes that submits the data to their respected databases. If conditions are true, then it tells the data to start a new thread to push data to one database, if they are false, the data will go to the other database. The MQTT thread has a instance of the "Database handler" class and passes data to it through different calling functions within the class. Will this work to allow a thread to concentrate on MQTT tasks while another does the database work? There are other threads as well, I've just never combined databases and threads before so I'd like an opinion or any information that would help me out from more seasoned programmers.
Writing code that is "thread safe" can be tricky. I doubt if the Python connector to MySQL is thread safe; there is very little need for it. MySQL is quite happy to have multiple connections to it from clients. But they must be separate connections, not the same connection running in separate threads. Very few projects need multi-threaded access to the database. Do you have a particular need? If so let's hear about it, and discuss the 'right' way to do it. For now, each of your threads that needs to talk to the database should create its own connection. Generally, such a connection can be created soon after starting the thread (or process) and kept open until close to the end of the thread. That is, normally you should have only one connection per thread.
0
false
1
5,535
2018-05-25 18:54:02.363
python logging multiple calls after each instantiation
I have multiple modules and they each have their own log. They all write to the log correctly; however, when a class is instantiated more than once, the log will write the same line multiple times, depending on the number of times it was created. If I create the object twice it will log every message twice, create the object three times and it will log every message three times, etc... I was wondering how I could fix this without having to create each object only once. Any help would be appreciated.
I was adding the handler again on each instantiation of a logger. Checking whether the handler had already been added before attaching it fixed the multiple writes (see the sketch below).
0
false
1
5,536
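A minimal sketch of the fix described in the answer above, using the standard logging module: getLogger returns the same logger object for the same name, so the handler is attached only the first time. The logger and file names are examples.

import logging

def get_module_logger(name, logfile):
    logger = logging.getLogger(name)       # same object is returned for the same name
    if not logger.handlers:                # only attach a handler the first time
        handler = logging.FileHandler(logfile)
        handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

# Each instantiation can call this safely; every message is written exactly once.
log = get_module_logger('mymodule', 'mymodule.log')
log.info('hello')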
2018-05-28 15:00:34.117
using c extension library with gevent
I use celery for doing snmp requests with easysnmp library which have a C interface. The problem is lots of time is being wasted on I/O. I know that I should use eventlet or gevent in this kind of situations, but I don't know how to handle patching a third party library when it uses C extensions.
Eventlet and gevent can't monkey-patch C code. You can offload blocking calls to OS threads with eventlet.tpool.execute(library.io_func)
0.386912
false
1
5,537
2018-05-29 02:13:44.043
How large data can Python Ray handle?
Python Ray looks interesting for machine learning applications. However, I wonder how large Python Ray can handle. Is it limited by memory or can it actually handle data that exceeds memory?
It currently works best when the data fits in memory (if you're on a cluster, then that means the aggregate memory of the cluster). If the data exceeds the available memory, then Ray will evict the least recently used objects. If those objects are needed later on, they will be reconstructed by rerunning the tasks that created them.
1.2
true
1
5,538
2018-05-29 18:31:38.537
Discord bot with user specific counter
I'm trying to make a Discord bot in Python that a user can request a unit every few minutes, and later ask the bot how many units they have. Would creating a google spreadsheet for the bot to write each user's number of units to be a good idea, or is there a better way to do this?
Using a database is the best option. If you're working with a small number of users and requests you could use something even simpler like a text file for ease of use, but I'd recommend a database. Easy to use database options include sqlite (use the sqlite3 python library) and MongoDB (I use the mongoengine python library for my Slack bot).
0
false
1
5,539
2018-05-29 21:28:22.547
How execute python command within virtualenv with Visual Studio Code
I have created a virtual environment named virtualenv. I have a scrapy project and I use some programs installed in my virtualenv there. When I run it from the terminal in VSC I can see errors, even when I set up my virtual environment via Ctrl+Shift+P -> Python: Select Interpreter -> Python 3.5.2 (virtualenv). The interpreter works in some ways - I can import libs without errors etc. - but I am not able to start my scrapy project from the terminal. I have to activate my virtual environment first via /{path_to_virtualenv}/bin/activate. Is there a way to activate it automatically? Right now I am using PyCharm, where this is possible, but VSC looks much better to me.
One way I know of: start cmd, start your virtual env, then run (helloworld) \path\etc> code . It will start Studio Code in this environment. Hope it helps.
0.386912
false
1
5,540
2018-05-30 15:56:33.700
TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed
I'm new (obviously) to python, but not so new to TensorFlow I've been trying to debug my program using breakpoint, but everytime I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show I get this warning in the console: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed. I'm a bit confused on how to fix this issue. Do I have to wait for an update of TensorFlow before it works?
You can simply stop at the break point, and switch to DEBUG CONSOLE panel, and type var.shape. It's not that convenient, but at least you don't need to write any extra debug code in your code.
0
false
2
5,541
2018-05-30 15:56:33.700
TensorFlow debug: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed
I'm new (obviously) to python, but not so new to TensorFlow I've been trying to debug my program using breakpoint, but everytime I try to check the content of a tensor in the variable view of my Visual Studio Code debugger, the content doesn't show I get this warning in the console: WARNING:tensorflow:Tensor._shape is private, use Tensor.shape instead. Tensor._shape will eventually be removed. I'm a bit confused on how to fix this issue. Do I have to wait for an update of TensorFlow before it works?
Probably yes you may have to wait. In the debug mode a deprecated function is being called. You can print out the shape explicitly by calling var.shape() in the code as a workaround. I know not very convenient.
0
false
2
5,541
2018-05-30 16:38:21.447
Django storages S3 - Store existing file
I have django 1.11 with latest django-storages, setup with S3 backend. I am trying to programatically instantiate an ImageFile, using the AWS image link as a starting point. I cannot figure out how to do this looking at the source / documentation. I assume I need to create a file, and give it the path derived from the url without the domain, but I can't find exactly how. The final aim of this is to programatically create wagtail Image objects, that point to S3 images (So pass the new ImageFile to the Imagefield of the image). I own the S3 bucket the images are stored in it. Uploading images works correctly, so the system is setup correctly. Update To clarify, I need to do the reverse of the normal process. Normally a physical image is given to the system, which then creates a ImageFile, the file is then uploaded to S3, and a URL is assigned to the File.url. I have the File.url and need an ImageFile object.
It turns out that, in several models that expect files, when using DjangoStorages, all I had to do was pass the AWS S3 object key on the file field instead of a File (so not a URL, just the object key). When model.save() is called, a boto call is made to S3 to verify that an object with the provided key is there, and the item is saved.
1.2
true
1
5,542
2018-05-31 22:09:08.750
import sklearn in python
I installed miniconda for Windows10 successfully and then I could install numpy, scipy, sklearn successfully, but when I run import sklearn in python IDLE I receive No module named 'sklearn' in anaconda prompt. It recognized my python version, which was 3.6.5, correctly. I don't know what's wrong, can anyone tell me how do I import modules in IDLE ?
Why not download the full Anaconda? It will install everything you need to start, which includes the Spyder IDE, RStudio, Jupyter, and all the needed modules. I have been using Anaconda without any errors and I recommend you try it out.
1.2
true
1
5,543
2018-06-01 01:04:30.917
Pycharm Can't install TensorFlow
I cannot install tensorflow in pycharm on windows 10, though I have tried many different things: went to settings > project interpreter and tried clicking the green plus button to install it, gave me the error: non-zero exit code (1) and told me to try installing via pip in the command line, which was successful, but I can't figure out how to make Pycharm use it when it's installed there tried changing to a Conda environment, which still would not allow me to run tensorflow since when I input into the python command line: pip.main(['install', 'tensorflow']) it gave me another error and told me to update pip updated pip then tried step 2 again, but now that I have pip 10.0.1, I get the error 'pip has no attribute main'. I tried reverted pip to 9.0.3 in the command line, but this won't change the version used in pycharm, which makes no sense to me. I reinstalled anaconda, as well as pip, and deleted and made a new project and yet it still says that it is using pip 10.0.1 which makes no sense to me So in summary, I still can't install tensorflow, and I now have the wrong version of pip being used in Pycharm. I realize that there are many other posts about this issue but I'm pretty sure I've been to all of them and either didn't get an applicable answer or an answer that I understand.
What worked for me is this: I installed TensorFlow from the command prompt as an administrator using the command pip install tensorflow. Then I jumped back to PyCharm and clicked the red light bulb pop-up icon; it shows a few options when you click it - just select the one that says install tensorflow. This does not install it from scratch, but basically rebuilds and updates your PyCharm workspace to pick up the newly installed tensorflow.
0
false
1
5,544
2018-06-02 08:27:36.887
How should I move my completed Django Project in a Virtual Environment?
I started learning django a few days back and started a project. By luck the project turned out well and I'm thinking of deploying it. However, I didn't start it in a virtual environment. I have made a virtual environment now and want to move the project into it. I want to know how I can do that. I have created a requirements.txt, however it includes all sorts of irrelevant library names. How can I get rid of them and keep only the ones required for the project?
Django is completely unrelated to the environment you run it on. The environment determines which python version you are using (2, 3, ...) and the libraries installed. To answer your question, the only thing you need to do is run your manage.py commands from the python executable in the new virtual environment. Of course, install all of the necessary libraries in the new environment if you haven't already done so. It might be a problem if you created a python3 environment while the project was written for python2, but at that point it's a code portability issue.
1.2
true
1
5,545
2018-06-03 08:14:39.850
Train CNN model with multiple folders and sub-folders
I am developing a convolution neural network (CNN) model to predict whether a patient in category 1,2,3 or 4. I use Keras on top of TensorFlow. I have 64 breast cancer patient data, classified into four category (1=no disease, 2= …., 3=….., 4=progressive disease). In each patient's data, I have 3 set of MRI scan images taken at different dates and inside each MRI folder, I have 7 to 8 sub folders containing MRI images in different plane (such as coronal plane/sagittal plane etc). I learned how to deal with basic “Cat-Dog-CNN-Classifier”, it was easy as I put all the cat & dog images into a single folder to train the network. But how do I tackle the problem in my breast cancer patient data? It has multiple folders and sub-solders. Please suggest.
Use os.walk to access all the files in the sub-directories recursively and append them to the dataset (see the sketch below).
-0.135221
false
1
5,546
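A sketch of the os.walk suggestion from the answer above: collect every image path under a patient/scan-date/plane directory tree together with the patient's category label. The folder layout, file extensions and the labels_by_patient lookup are all assumptions about how the data might be organised.

import os

def collect_images(root, labels_by_patient, extensions=('.png', '.jpg', '.dcm')):
    """Walk root/<patient>/<scan_date>/<plane>/... and pair each image with its patient's label."""
    samples = []
    for dirpath, _, filenames in os.walk(root):
        patient_id = os.path.relpath(dirpath, root).split(os.sep)[0]
        label = labels_by_patient.get(patient_id)
        if label is None:
            continue                     # skip files not under a known patient folder
        for name in filenames:
            if name.lower().endswith(extensions):
                samples.append((os.path.join(dirpath, name), label))
    return samples

# Hypothetical usage: category 1-4 per patient, folder names assumed.
# samples = collect_images('data/', {'patient001': 3, 'patient002': 1})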
2018-06-03 14:02:27.027
How can I change the default version of Python Used by Atom?
I have started using Atom recently and for the past few days I have been searching for how to change the default Python version used in Atom (the default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the "script" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.
Yes, there is. After starting Atom, open the script you wish to run. Then open command palette and select 'Python: Select interpreter'. A list appears with the available python versions listed. Select the one you want and hit return. Now you can run the script by placing the cursor in the edit window and right-clicking the mouse. A long menu appears and you should choose the 'Run python in the terminal window'. This is towards the bottom of the long menu list. The script will run using the interpreter you selected.
0
false
4
5,547
2018-06-03 14:02:27.027
How can I change the default version of Python Used by Atom?
I have started using Atom recently and for the past few days I have been searching for how to change the default Python version used in Atom (the default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the "script" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.
I would look in the atom installed plugins in settings.. you can get here by pressing command + shift + p, then searching for settings. The only reason I suggest this is because, plugins is where I installed swift language usage accessibility through a plugin that manages that in atom. Other words for plugins on atom would be "community packages" Hope this helps.
0
false
4
5,547
2018-06-03 14:02:27.027
How can I change the default version of Python Used by Atom?
I have started using Atom recently and for the past few days I have been searching for how to change the default Python version used in Atom (the default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the "script" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.
I came up with an inelegant solution that may not be universal. Using platformio-ide-terminal, I simply had to call python3.9 instead of python or python3. Not sure if that is exactly what you're looking for.
0
false
4
5,547
2018-06-03 14:02:27.027
How can I change the default version of Python Used by Atom?
I have started using Atom recently and for the past few days I have been searching for how to change the default Python version used in Atom (the default version is currently python 2.7 but I want to use 3.6). Is there any way I can change the default path? (I have tried adding a profile to the "script" package but it still reverts to python 2.7 when I restart Atom.) Any help will be hugely appreciated!! Thank you very much in advance.
I am using script 3.18.1 in Atom 1.32.2 Navigate to Atom (at top left) > Open Preferences > Open Config folder. Now, Expand the tree as script > lib > grammars Open python.coffee and change 'python' to 'python3' in both the places in command argument
0.986614
false
4
5,547
2018-06-04 05:13:38.857
Line by line data from Google cloud vision API OCR
I have scanned PDFs (image based) of bank statements. Google vision API is able to detect the text pretty accurately but it returns blocks of text and I need line by line text (bank transactions). Any idea how to go about it?
In the Google Vision API the response includes a fullTextAnnotation field, which contains the full text as a string with \n marking the end of each line. You can try that (see the sketch below).
0
false
1
5,548
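A hedged sketch of reading fullTextAnnotation with the google-cloud-vision Python client, as suggested in the answer above. The file name is an example, and depending on the client version the image type may be vision.Image or vision.types.Image.

import io
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with io.open('statement.png', 'rb') as f:       # example scanned-page file
    image = vision.Image(content=f.read())       # older client releases: vision.types.Image(...)

response = client.document_text_detection(image=image)

# fullTextAnnotation -> full_text_annotation in the Python client; '\n' separates lines.
for line in response.full_text_annotation.text.split('\n'):
    print(line)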
2018-06-04 20:20:23.930
XgBoost accuracy results differ on each run, with the same parameters. How can I make them constant?
The 'merror' and 'logloss' result from XGB multiclass classification differs by about 0.01 or 0.02 on each run, with the same parameters. Is this normal? I want 'merror' and 'logloss' to be constant when I run XGB with the same parameters so I can evaluate the model precisely (e.g. when I add a new feature). Now, if I add a new feature I can't really tell whether it had a positive impact on my model's accuracy or not, because my 'merror' and 'logloss' differ on each run regardless of whether I made any changes to the model or the data fed into it since the last run. Should I try to fix this and if I should, how can I do it?
Managed to solve this. First I set the 'seed' parameter of XGBoost to a fixed value, as Hadus suggested. Then I found out that I had used sklearn's train_test_split function earlier in the notebook without setting the random_state parameter to a fixed value. So I set the random_state parameter to 22 (you can use whichever integer you want) and now I'm getting constant results (see the sketch below).
0
false
1
5,549
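A small sketch of pinning both sources of randomness described in the answer above: the data split (random_state) and XGBoost's own RNG (seed). The synthetic data, class count and hyperparameters are placeholders.

import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

SEED = 22

# Stand-in data; replace with the real feature matrix and labels.
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           n_classes=4, random_state=SEED)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=SEED)      # pin sklearn's shuffling

params = {
    'objective': 'multi:softprob',
    'num_class': 4,
    'eval_metric': 'mlogloss',
    'seed': SEED,                                 # pin XGBoost's own RNG
}
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
booster = xgb.train(params, dtrain, num_boost_round=100,
                    evals=[(dtest, 'eval')], verbose_eval=False)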
2018-06-04 23:38:16.783
How to keep python programming running constantly
I made a program that grabs the top three new posts on the r/wallpaper subreddit. It downloads the pictures every 24 hours and adds them to my wallpapers folder. What I'm running into is how to have the program running in the background. The program resumes every time I turn the computer on, but it pauses whenever I close the computer. Is there a way to close the computer without pausing the program? I'm on a mac.
Programs can't run when the computer is powered off. However, you can run a computer headlessly (without mouse, keyboard, and monitor) to save resources. Just ensure your program runs over the command line interface.
0
false
1
5,550
2018-06-05 04:53:45.747
Pandas - Read/Write to the same csv quickly.. getting permissions error
I have a script that I am trying to execute every 2 seconds.. to begin it reads a .csv with pd.read_csv. Then executes modifications on the df and finally overwrites the original .csv with to_csv. I'm running into a PermissionError: [Errno 13] Permission denied: and from my searches I believe it's due to trying to open/write too often to the same file though I could be wrong. Any suggestions how to avoid this? Not sure if relevant but the file is stored in one-drive folder. It does save on occasion, seemingly randomly. Increasing the timeout so the script executes slower helps but I want it running fast! Thanks
Close the file that you are trying to read and write and then try running your script. Hope it helps
-0.201295
false
1
5,551