Maintaining 'focus' on Leap Motion Python (IDLE) code execution
Question: When executing Leap-Motion-centric code in the Python IDLE, switching to
another window makes the IDLE disregard the Leap controller and stop
processing `frame`s. How can this be avoided so that, say, the Leap gestures
can be used to interact with other windows?
Not really relevant, but code to reproduce this problem:
import Leap
from Leap import *

class FocusListener(Leap.Listener):
    def on_frame(self, controller):
        frame = controller.frame()
        print frame

def main():
    # Create a sample listener and controller
    listener = FocusListener()
    controller = Leap.Controller()
    controller.add_listener(listener)
    while (1):
        listener.on_frame(controller)

if __name__ == "__main__":
    main()
PS: Might this have something to do with the fact that I'm 'synthetically'
looping the frame with the `while`?
Answer: To get frames while your app isn't focused, you need to set the "background
frames" policy:
controller.set_policy_flags(Leap.Controller.POLICY_BACKGROUND_FRAMES)
See:
<https://developer.leapmotion.com/documentation/python/api/Leap.Controller.html#Leap.Controller.set_policy_flags>
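For example, one place to request this is the listener's on_connect callback (a sketch following the question's code; you may also need to allow background apps in the Leap Motion control panel, which is an assumption about your local settings):

class FocusListener(Leap.Listener):
    def on_connect(self, controller):
        # ask for frames even when another application has focus
        controller.set_policy_flags(Leap.Controller.POLICY_BACKGROUND_FRAMES)

    def on_frame(self, controller):
        print controller.frame()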
|
Python threading.thread.start() doesn't return control to main thread
Question: I'm trying to a program that executes a piece of code in such a way that the
user can stop its execution at any time without stopping the main program. I
thought I could do this using threading.Thread, but then I ran the following
code in IDLE (Python 3.3):
from threading import *
import math
def f():
    eval("math.factorial(1000000000)")
t = Thread(target = f)
t.start()
The last line doesn't return: I eventually restarted the shell. Is this a
consequence of the Global Interpreter Lock, or am I doing something wrong? I
didn't see anything specific to this problem in the threading documentation
(<http://docs.python.org/3/library/threading.html>)
I tried to do the same thing using a process:
from multiprocessing import *
import math
def f():
    eval("math.factorial(1000000000)")
p = Process(target = f)
p.start()
p.is_alive()
The last line returns False, even though I ran it only a few seconds after I
started the process! Based on my processor usage, I am forced to conclude that
the process never started in the first place. Can somebody please explain what
I am doing wrong here?
Answer: > Thread.start() never returns! Could this have something to do with the C
> implementation of the math library?
As [@eryksun](http://stackoverflow.com/questions/22138190/python-threading-
thread-start-doesnt-return-control-to-main-thread#comment33634685_22138509)
pointed out in the comment: [math.factorial() is implemented as a C
function](http://hg.python.org/cpython/file/d37f963394aa/Modules/mathmodule.c#l1087)
that doesn't release GIL so no other Python code may run until it returns.
Note: `multiprocessing` version should work as is: each Python process has its
own GIL.
* * *
`factorial(1000000000)` has hundreds of millions of digits. Try `import time;
time.sleep(10)` as a dummy calculation instead.
If you have issues with multithreaded code in IDLE then try the same code from
the command line, to make sure that the error persists.
If `p.is_alive()` returns `False` after `p.start()` is already called then it
might mean that there is an error in `f()` function e.g., `MemoryError`.
On my machine, `p.is_alive()` returns `True` and one of the CPUs is at 100% if I
paste your code from the question into a Python shell.
Unrelated: remove wildcard imports such as `from multiprocessing import *`.
They may shadow other names in your code so that you can't be sure what a
given name means; e.g., `threading` could define an `eval` function (it doesn't,
but it could) with similar but different semantics that might break your
code silently.
> I want my program to be able to handle ridiculous inputs from the user
> gracefully
If you pass user input directly to `eval()` then the user can do _anything_.
> Is there any way to get a process to print, say, an error message without
> constructing a pipe or other similar structure?
It is an ordinary Python code:
print(message) # works
The difference is that if several processes run `print()` then the output
might be garbled. You could use a lock to synchronize `print()` calls.
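A minimal sketch of that idea (the worker function and messages here are illustrative, not from your code):

from multiprocessing import Process, Lock

def worker(lock, message):
    with lock:                 # only one process prints at a time
        print(message)

if __name__ == '__main__':
    lock = Lock()
    procs = [Process(target=worker, args=(lock, 'result %d' % i)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()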
|
MapReduce: How to keep track of states across multiple lines in the mapper (say for counting trigrams)?
Question: I'm trying to write a MapReduce program for computing
[Trigrams](http://en.wikipedia.org/wiki/Trigram) using the mrjob framework in
Python. So far, this is what I have:
from mrjob.job import MRJob

class MRTrigram(MRJob):
    def mapper(self, _, line):
        w = line.split()
        for idx, word in enumerate(w):
            if idx < len(w) - 2:
                # Generate a trigram using the current word and next 2 words
                trigram = w[idx] + " " + w[idx + 1] + " " + w[idx + 2]
                yield trigram, 1

    def reducer(self, key, values):
        yield sum(values), key

# ignore this part - it's just standard boilerplate for mrjob!
if __name__ == '__main__':
    MRTrigram.run()
As can be seen, I've not handled the case where a trigram is split across
lines (say, "it was" at the end of line 3 and "the best of times" at the
beginning of line 4 - my code would not capture the trigram "it was the" in
this case!).
How do I go about preserving state across multiple map calls, ensuring that
no matter how the mappers are assigned jobs by the underlying runtime,
only trigrams across consecutive lines are counted? I thought of storing the
last 2 words of each line in a persistent data structure inside the MRTrigram
class, but then I realized I could not guarantee that I was comparing words
across lines i and i+1 (and not lines i and j, where j can be anywhere in
the document!).
Any ideas to set me on the right track?
Answer: You might get a hint as to how this could be done by [writing a custom
protocol](http://pythonhosted.org/mrjob/guides/writing-mrjobs.html#writing-
protocols), but I believe mrjob takes stream input delimited by the new line
character before you can add a customized behavior (i.e., forming key and
value), so it might not be possible with mrjob.
If you are using Hadoop (i.e., native Java), then you can write a custom input
format that takes multiline text and parse a key-value pair out of it.
|
Drawing rectangle or line using mouse events in open cv using python
Question: I am trying to draw a straight line between two coordinates which would be
obtained by clicking on the image or by mouse events. I am able to draw
individual circles on clicking the mouse, but cannot figure out how to draw
line between those points. When I am using this code, I can only print the
start and end coordinates, not draw a line between them.
import numpy as np
import cv2
import cv2.cv as cv

boxes = []

def on_mouse(event, x, y, flags, params):
    if event == cv.CV_EVENT_LBUTTONDOWN:
        print 'Start Mouse Position: '+str(x)+', '+str(y)
        sbox = [x, y]
        boxes.append(sbox)
    elif event == cv.CV_EVENT_LBUTTONUP:
        print 'End Mouse Position: '+str(x)+', '+str(y)
        ebox = [x, y]
        boxes.append(ebox)

count = 0
while(1):
    count += 1
    img = cv2.imread('img.jpg',0)
    img = cv2.blur(img, (3,3))
    cv2.namedWindow('real image')
    cv.SetMouseCallback('real image', on_mouse, 0)
    cv2.imshow('real image', img)
    if count < 50:
        if cv2.waitKey(33) == 27:
            cv2.destroyAllWindows()
            break
    elif count >= 50:
        if cv2.waitKey(0) == 27:
            cv2.destroyAllWindows()
            break
        count = 0
Somehow, I am not able to extract the coordinates outside the loop. Can
someone please suggest how to draw lines or rectangles between the points I
click on the image?
Answer: You can refer to the C++ code below, which I used to crop an image:
#include <iostream>
#include "opencv2/opencv.hpp"
#include <stdio.h>

using namespace std;
using namespace cv;

Mat src,img,ROI;
Rect cropRect(0,0,0,0);
Point P1(0,0);
Point P2(0,0);

const char* winName="Crop Image";
bool clicked=false;
int i=0;
char imgName[15];

void checkBoundary(){
    // check that the cropping rectangle does not exceed the image boundary
    if(cropRect.width>img.cols-cropRect.x)
        cropRect.width=img.cols-cropRect.x;
    if(cropRect.height>img.rows-cropRect.y)
        cropRect.height=img.rows-cropRect.y;
    if(cropRect.x<0)
        cropRect.x=0;
    if(cropRect.y<0)
        cropRect.y=0;
}

void showImage(){
    img=src.clone();
    checkBoundary();
    if(cropRect.width>0&&cropRect.height>0){
        ROI=src(cropRect);
        imshow("cropped",ROI);
    }
    rectangle(img, cropRect, Scalar(0,255,0), 1, 8, 0 );
    imshow(winName,img);
}

void onMouse( int event, int x, int y, int f, void* ){
    switch(event){
        case CV_EVENT_LBUTTONDOWN :
            clicked=true;
            P1.x=x;
            P1.y=y;
            P2.x=x;
            P2.y=y;
            break;
        case CV_EVENT_LBUTTONUP :
            P2.x=x;
            P2.y=y;
            clicked=false;
            break;
        case CV_EVENT_MOUSEMOVE :
            if(clicked){
                P2.x=x;
                P2.y=y;
            }
            break;
        default : break;
    }

    if(clicked){
        if(P1.x>P2.x){ cropRect.x=P2.x;
                       cropRect.width=P1.x-P2.x; }
        else         { cropRect.x=P1.x;
                       cropRect.width=P2.x-P1.x; }

        if(P1.y>P2.y){ cropRect.y=P2.y;
                       cropRect.height=P1.y-P2.y; }
        else         { cropRect.y=P1.y;
                       cropRect.height=P2.y-P1.y; }
    }
    showImage();
}

int main()
{
    cout<<"Click and drag for Selection"<<endl<<endl;
    cout<<"------> Press 's' to save"<<endl<<endl;

    cout<<"------> Press '8' to move up"<<endl;
    cout<<"------> Press '2' to move down"<<endl;
    cout<<"------> Press '6' to move right"<<endl;
    cout<<"------> Press '4' to move left"<<endl<<endl;

    cout<<"------> Press 'w' increase top"<<endl;
    cout<<"------> Press 'x' increase bottom"<<endl;
    cout<<"------> Press 'd' increase right"<<endl;
    cout<<"------> Press 'a' increase left"<<endl<<endl;

    cout<<"------> Press 't' decrease top"<<endl;
    cout<<"------> Press 'b' decrease bottom"<<endl;
    cout<<"------> Press 'h' decrease right"<<endl;
    cout<<"------> Press 'f' decrease left"<<endl<<endl;

    cout<<"------> Press 'r' to reset"<<endl;
    cout<<"------> Press 'Esc' to quit"<<endl<<endl;

    src=imread("src.png",1);

    namedWindow(winName,WINDOW_NORMAL);
    setMouseCallback(winName,onMouse,NULL );
    imshow(winName,src);

    while(1){
        char c=waitKey();

        if(c=='s'&&ROI.data){
            sprintf(imgName,"%d.jpg",i++);
            imwrite(imgName,ROI);
            cout<<" Saved "<<imgName<<endl;
        }
        if(c=='6') cropRect.x++;
        if(c=='4') cropRect.x--;
        if(c=='8') cropRect.y--;
        if(c=='2') cropRect.y++;

        if(c=='w') { cropRect.y--; cropRect.height++;}
        if(c=='d') cropRect.width++;
        if(c=='x') cropRect.height++;
        if(c=='a') { cropRect.x--; cropRect.width++;}

        if(c=='t') { cropRect.y++; cropRect.height--;}
        if(c=='h') cropRect.width--;
        if(c=='b') cropRect.height--;
        if(c=='f') { cropRect.x++; cropRect.width--;}

        if(c==27) break;
        if(c=='r') {cropRect.x=0;cropRect.y=0;cropRect.width=0;cropRect.height=0;}
        showImage();
    }

    return 0;
}
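If you want to stay in Python, a minimal sketch of the same idea with the newer cv2 mouse API is below (the image name is a placeholder; it draws a line from the button-down point to the button-up point):

import cv2

img = cv2.imread('img.jpg')
points = []

def on_mouse(event, x, y, flags, params):
    if event == cv2.EVENT_LBUTTONDOWN:
        points.append((x, y))                    # start of the drag
    elif event == cv2.EVENT_LBUTTONUP and points:
        cv2.line(img, points[-1], (x, y), (0, 255, 0), 2)

cv2.namedWindow('real image')
cv2.setMouseCallback('real image', on_mouse)
while True:
    cv2.imshow('real image', img)
    if cv2.waitKey(33) == 27:                    # Esc quits
        break
cv2.destroyAllWindows()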

|
Error running Django project in virtualenv
Question: I have a django project which works fine without virtualenv. But now I'm
putting it in a virtualenv and it doesn't run. Without virtualenv:
python manage.py runserver --settings=Janta.settings.local
This works fine. With virtualenv when I do the same as above I get:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/moni/.virtualenvs/janta_proj/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/home/moni/.virtualenvs/janta_proj/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/moni/.virtualenvs/janta_proj/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 261, in fetch_command
commands = get_commands()
File "/home/moni/.virtualenvs/janta_proj/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 107, in get_commands
apps = settings.INSTALLED_APPS
File "/home/mon/.virtualenvs/janta_proj/local/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__
self._setup(name)
File "/home/mon/.virtualenvs/janta_proj/local/lib/python2.7/site-packages/django/conf/__init__.py", line 49, in _setup
self._wrapped = Settings(settings_module)
File "/home/moni/.virtualenvs/janta_proj/local/lib/python2.7/site-packages/django/conf/__init__.py", line 132, in __init__
% (self.SETTINGS_MODULE, e)
ImportError: Could not import settings 'Janta.settings.local' (Is it on sys.path? Is there an import error in the settings file?): No module named celery
This is what comes when I try installing celery in the virtualenv:
Requirement already satisfied (use --upgrade to upgrade): Celery in /usr/local/lib/python2.7/dist-packages
Requirement already satisfied (use --upgrade to upgrade): pytz>dev in /usr/local/lib/python2.7/dist-packages (from Celery)
Requirement already satisfied (use --upgrade to upgrade): billiard>=3.3.0.13,<3.4 in /usr/local/lib/python2.7/dist-packages (from Celery)
Requirement already satisfied (use --upgrade to upgrade): kombu>=3.0.8,<4.0 in /usr/local/lib/python2.7/dist-packages (from Celery)
Requirement already satisfied (use --upgrade to upgrade): anyjson>=0.3.3 in /usr/local/lib/python2.7/dist-packages (from kombu>=3.0.8,<4.0->Celery)
Requirement already satisfied (use --upgrade to upgrade): amqp>=1.4.0,<2.0 in /usr/local/lib/python2.7/dist-packages (from kombu>=3.0.8,<4.0->Celery)
Cleaning up..
What am I doing wrong?
Answer: Using `sudo` will force everything to install globally, and virtualenv is _the
solution_ to that problem. It allows you to create virtual environments which
have their own independently installed packages, so you can avoid having
everything installed globally.
Recreate the virtualenv, install all of your needed packages using `pip
install packagename` and you should be up and running.
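A minimal sketch of that, assuming you are using virtualenvwrapper (the `~/.virtualenvs` paths in your traceback suggest it); adjust the names to your project:

rmvirtualenv janta_proj
mkvirtualenv janta_proj
workon janta_proj
pip install django celery     # no sudo, so packages land inside the virtualenv
python manage.py runserver --settings=Janta.settings.local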
|
Accessing Flask server from my web page
Question: This question continues from my earlier question
[here](http://stackoverflow.com/questions/22088719/can-flask-use-with-jquery-post).
I am trying to control a servo motor using the image buttons on my web page.
My servo controller is a Python script (cameraservo2.py) and I am using jQuery
to `post` the data to the Python function. The conclusion I got from asking
"how to run a python script from a webpage" is to use 'Flask', which is
totally new to me. However, I have installed it successfully using just
`pip install Flask` (let me know if I missed anything?).
I have my `index.html, cameraservo3.py and routes.py` in my `/var/www` folder.
My webserver is running by default and I can access it via my Raspberry Pi's
IP address from another computer on the network.
This is my `routes.py` code:
from flask import Flask, jsonify, render_template, request
from cameraservo3 import turnCamera

app = Flask(__name__)

@app.route('/turn_servo', methods=['POST'])
def turn_servo_ajax():
    direction = request.form['direction']
    cam_result = turnCamera(direction=direction)
    return '<div> {} </div>'.format(cam_result)

if __name__ == '__main__':
    app.debug = True
    app.run(host='0.0.0.0')
Part of my jQuery script in index.html:
$('#left_button').click(function(){
    $.post("/turn_servo", {direction:"left"}).done(function (reply) {
        $('#camerapos').empty().append(reply);
        alert("left button clicked");});
});
part of my html:
<div id="control_button">
<img src="button_left.png" id="left_button" height="50" width="50">
<img src="button_right.png" id="right_button" height="50" width="50">
<p id="camerapos"> test </p>
</div>
cameraservo2.py can be found in the answer for [my
question](http://stackoverflow.com/questions/22088719/can-flask-use-with-
jquery-post) there. I run `python routes.py` and it gave me
* Running on http://0.0.0.0:5000/
* Restarting with reloader
But the script (cameraservo2.py) doesn't get executed when I click the
left_button. What's wrong? Which part have I done wrong?
The Flask quickstart guide isn't very helpful here either. :/
Answer: You'll run into the [same-origin policy](https://en.wikipedia.org/wiki/Same-
origin_policy) restrictions unless you serve the `index.html` file from the
same host and port number. It's easiest to just add the `index.html` page to
your Flask server too.
Add a `/` route that serves the page that will do the AJAX post. You could use
a template to fill in the URL for `$.post()` to post to. If using Jinja2 for
the template, that would be:
@app.route('/')
def homepage():
    return render_template('index.html')
and the file `index.html` in the `templates` subdirectory of your Flask
project with:
$('#left_button').click(function(){
    $.post("{{ url_for('turn_servo_ajax') }}", {direction:"left"}).done(function (reply) {
        $('#camerapos').empty().append(reply);
        alert("left button clicked");});
});
where the `{{ }}` part is Jinja2 template syntax, and
[`url_for()`](https://flask.readthedocs.org/en/latest/api/#flask.url_for)
returns a fully-formed URL for the `turn_servo_ajax` view function.
|
Send multiple emails in the same thread using python and gmail
Question: I have a program running. When that program gets a result, it sends me an
email using this function:
def send_email(message):
    import smtplib

    gmail_user = OMITTED
    gmail_pwd = OMITTED
    FROM = OMITTED
    TO = OMITTED #must be a list

    try:
        #server = smtplib.SMTP(SERVER)
        server = smtplib.SMTP("smtp.gmail.com", 587) #or port 465 doesn't seem to work!
        server.ehlo()
        server.starttls()
        server.login(gmail_user, gmail_pwd)
        server.sendmail(FROM, TO, message)
        #server.quit()
        server.close()
        print 'successfully sent the mail'
    except:
        print "failed to send mail"
Disclaimer: I found this code somewhere here on Stack Overflow. It is not
mine. I cut out some parts of it as they seemed to have no special meaning.
Sometimes my code gets many results, and I get 150+ different emails in less
than 20 seconds.
How can I modify the function above in order for the program to send me all
the results in the same thread?
In case you are not getting what my idea is, I want my inbox to look like
this:
[email protected](150) ...
... (other emails from other senders)
instead of:
[email protected] ...
[email protected] ...
[email protected] ...
[email protected] ...
[email protected] ...
...
[email protected] ...
... (other emails from other senders)
**EDIT**
To solve the problem, all I needed to do was reinsert the parts of the code I
had previously deleted. The full function is this one:
def send_email(TEXT):
    import smtplib

    gmail_user = OMITTED
    gmail_pwd = OMITTED
    FROM = OMITTED
    TO = OMITTED #must be a list
    SUBJECT = "Big brother candidate"
    #TEXT = "Testing sending mail using gmail servers"

    # Prepare actual message
    message = """\From: %s\nTo: %s\nSubject: %s\n\n%s
    """ % (FROM, ", ".join(TO), SUBJECT, TEXT)

    try:
        #server = smtplib.SMTP(SERVER)
        server = smtplib.SMTP("smtp.gmail.com", 587) #or port 465 doesn't seem to work!
        server.ehlo()
        server.starttls()
        server.login(gmail_user, gmail_pwd)
        server.sendmail(FROM, TO, message)
        #server.quit()
        server.close()
        print 'successfully sent the mail'
    except:
        print "failed to send mail"
Answer: This doesn't appear to be a question about sending emails, but rather about how
to organise them so that GMail will thread them correctly.
See [this page](http://www.sensefulsolutions.com/2010/08/how-does-email-threading-work-in-gmail.html)
for a description of how threading works.
Basically you need subsequent emails to include "Re: " at the start of the
subject line. Since you don't show the code that generates the message I can't
say how you might do that.
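For example, with the function from the **EDIT** above, one way to make later messages join the same thread is to keep the subject identical and add the "Re: " prefix for follow-ups (a sketch; the `first_message` flag is illustrative, not something from the original code):

SUBJECT = "Big brother candidate"

def make_subject(first_message):
    # GMail threads messages whose subjects match, ignoring a leading "Re: "
    return SUBJECT if first_message else "Re: " + SUBJECT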
|
How to know where an object was instantiated in python?
Question: I define a class in a given python module. From a few other python files I
will create instances of said class. The instances register themselves at
object creation, ie during `__init__()`, in a singleton registry object. From
a third type of python file I would like to access the registry, look at the
objects therein and be able to figure out in which files these objects were
created beforehand.
A code sample might look as follows:
Python module file : '/Users/myself/code/myobjectmodule.py':
@singleton
class Registry(object):
    def __init__(self):
        self.objects = {}

class MyObject(object):
    def __init__(self, object_name):
        self.object_name = object_name
        Registry().objects[self.object_name] = self
`singleton` decorator according to
<http://www.python.org/dev/peps/pep-0318/#examples>
Instance creation python files :
'/Users/myself/code/instance_creation_python_file.py':
from myobjectmodule import MyObject
A = MyObject('Foo')
Third python file : '/Users/myself/code/registry_access.py':
from myobjectmodule import Registry
registry = Registry()
foo = registry.objects['Foo']
Now, I would like to have a method `foo.get_file_of_object_creation()`.
How can I implement this method?
### Edit:
The reason for this approach is the following scenario:
1. A framework defines a set of objects that shall specify data sources and contain the data after loading (MyObject).
2. Apps making use of this framework shall specify these objects and make use of them. Each app is saved in a .py file or a folder that also specifies the name of the app via its name.
3. An engine provides functionality for all apps, but needs to know, for some features, which of the objects originate from which app / file.
Answer: All the caveats about this only being a good idea for debugging aside, you can
use the `inspect` module.
import inspect

def get_caller():
    return inspect.stack()[2] # 1 is get_caller's caller

def trace_call():
    _, filename, line, function, _, _ = get_caller()
    print("Called by %r at %r:%d" % (function, filename, line))

def main():
    trace_call()

main()
produces
Called by 'main' at 'trace.py':11
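Applied to the registry example from the question, a sketch of recording the creating file at construction time could look like this (the attribute name `created_in` and the accessor method name are illustrative):

import inspect

class MyObject(object):
    def __init__(self, object_name):
        self.object_name = object_name
        # frame 0 is __init__ itself, frame 1 is whoever called MyObject(...)
        self.created_in = inspect.stack()[1][1]   # filename of the caller
        Registry().objects[self.object_name] = self

    def get_file_of_object_creation(self):
        return self.created_in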
|
Compare node value from xml with String in Python
Question: After searching a lot and trying a lot, I am not able to compare the node from
the XML with a string entered by the user in a Python script. I suspect it is
due to a type mismatch because I am getting the value from the XML in unicode
format; please suggest a way to compare the two strings successfully. Thanks
in advance.
My Python script:
from xml.dom.minidom import *

def codin(code):
    document = 'fourth.xml'
    xmldoc = parse(document)
    itemlist = xmldoc.getElementsByTagName('item')
    kool = itemlist[0].attributes['name'].value
    print kool
    if code == kool:
        print type(kool)
        print type(code)
        print "found"
    else:
        print "not found"
    for s in itemlist :
        if code in s.attributes['name'].value:
            print "Country code matched "
            country = s.firstChild.nodeValue
            print country
            print type(country)
        else:
            print "not found"

codin('001')
XML data:
<data>
<items>
<item name="001">India</item>
<item name="002">China</item>
<item name="003">Spain</item>
<item name="004">Pakistan</item>
</items>
</data>
Answer: It's not clear what you are expecting. The following code reads the values from
the xml file and stores them in a dictionary, so that you can do your
comparison against the dictionary.
from xml.dom.minidom import *

def codin(code):
    document = 'fourth.xml'
    xmldoc = parse(document)
    items = xmldoc.getElementsByTagName('items')
    kool = ""
    countryKool = {}
    for n in items:
        rv = getChild(n,'item')
        for v in rv:
            country = v.childNodes[0].nodeValue
            attr = v.getAttributeNode('name')
            if attr:
                kool = attr.nodeValue.strip()
                print "One of item is " , country, " and attribute is ",kool
                countryKool[kool] = country
    if code in countryKool:
        print "found"
    else:
        print "not found"
    print "Mapping of Country and kool ", countryKool #contains mapping for country and kool

def getChild(n,v):
    for child in n.childNodes:
        if child.localName==v:
            yield child

codin('001')
Output:
One of item is India and attribute is 001
One of item is China and attribute is 002
One of item is Spain and attribute is 003
One of item is Pakistan and attribute is 004
found
Mapping of Country and kool {u'003': u'Spain', u'002': u'China', u'001': u'India', u'004': u'Pakistan'}
|
tweepy verifier runtime error
Question: I am trying to run a simple app using the Twitter API wrapper called tweepy
(with Python), and I can't get past the verifier step.
My code is really simple.
from flask import Flask
from flask import request
import flask
import tweepy

session = dict()

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
try:
    redirect_url = auth.get_authorization_url()
    session['request_token'] = (auth.request_token.key, auth.request_token.secret)
except tweepy.TweepError:
    print 'Error! Failed to get request token.'

verifier = request.GET.get('oauth_verifier')
It really is the code provided by tweepy documentation, but for some reason,
it keeps returning a runtime error.
Runtime Error: working outside of request context
Anyone ?
Answer: Executing the script show that the error happens on `verifier =
request.GET.get('oauth_verifier')`, and googling for the error message shows
this error is related to Flask.
So I guess Flask just doesn’t like using `request.GET.get` outside a function
called by Flask (which could be what they call a “request context”).
Basically, you should execute the last line only somewhere it makes sense to
display data (a web server can only display data if there is a web browser
waiting for a response…)
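A minimal sketch of that idea: read the verifier inside a Flask view function, where a request context exists (note that in Flask the query string is exposed as `request.args`, not `request.GET`; the route name here is made up):

from flask import Flask, request
import tweepy

app = Flask(__name__)

@app.route('/callback')
def twitter_callback():
    # inside a view function Flask provides a request context
    verifier = request.args.get('oauth_verifier')
    return 'Got verifier: %s' % verifier

if __name__ == '__main__':
    app.run()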
|
image saving in python (matplotlib)
Question: In the code that I am working on, I have the following line:
import pylab as pl
pl.imsave(out_dir+'/'+fname.split('/')[-1],masked_im,vmin = 0, vmax = 1,cmap = 'gray')
However, I keep on getting the error that Bbox.from_bounds takes four
arguments and I have given five. I've been looking at the Python scripts for
pylab and I still can't seem to understand what may be causing this error.
Once I traced back the error, I arrived at this:
The Figure class within matplotlib has the following in the `__init__` function:
self.bbox_inches = Bbox.from_bounds(0, 0, *figsize)
This seems to be causing the error. The image I'm trying to save is an array
in the shape of (256, 256, 3). Its figure size is 256x256. In this case
wouldn't from_bounds take four arguments ((0,0,256,256))? If not, what may be
the fifth argument and how may it be fixed?
Answer: Have you tried setting cmap to 'rgb'? It looks to me as if you have an RGB
image (256x256x3) but are telling pylab to treat it as a grayscale image
(256x256).
Is there a way to sort a list in python until the first sorted k elements are found?
Question: I have a normal boring list of unsorted numbers. From that list I need to
take the first k elements after sorting. The thing is that if the list is
considerably long and k is considerably small, sorting the entire list seems
like a waste. I came up with an algorithmic solution for this, but it requires
me to write my own sorting implementation; my question is: is there a way to
get the same efficiency using something already implemented in Python?
**UPDATE:**
Just to clarify, I know this will give the answer I need:
`sorted(boring_list)[:n]`
But my concern is efficiency: **I don't need to sort the whole list for
this.**
Answer: You can use the [`heapq`](http://docs.python.org/2/library/heapq.html) module,
in particular its
[`nlargest`](http://docs.python.org/2/library/heapq.html#heapq.nlargest) or
[`nsmallest`](http://docs.python.org/2/library/heapq.html#heapq.nsmallest)
functions.
Alternatively just build the heap and call
[`heappop()`](http://docs.python.org/2/library/heapq.html#heapq.heappop). This
should take O(n) time to build the heap and O(k*log(n)) to retrieve the `k`
elements.
* * *
Here's a very simple and small benchmark:
In [1]: import random, heapq
In [2]: seq = [random.randint(-5000, 5000) for _ in range(35000)]
In [3]: %timeit sorted(seq)[:75]
100 loops, best of 3: 14.5 ms per loop
In [4]: %%timeit
...: s = seq[:]
...: heapq.nsmallest(75, s)
...:
100 loops, best of 3: 4.05 ms per loop
In [5]: %%timeit
...: s = seq[:]
...: heapq.heapify(s)
...: for _ in range(75): heapq.heappop(s)
...:
100 loops, best of 3: 2.41 ms per loop
I have no idea why `nsmallest` is so much slower than calling `heappop`
directly. In fact I should have timed it without copying `seq`, but still:
In [6]: %%timeit
...: heapq.nsmallest(75, seq)
...:
100 loops, best of 3: 3.82 ms per loop
Increasing the length by 100 times:
In [12]: %timeit sorted(seq)[:75]
1 loops, best of 3: 1.9 s per loop
In [13]: %%timeit
...: heapq.nsmallest(75, seq)
...:
1 loops, best of 3: 352 ms per loop
In [14]: %%timeit
...: s = seq[:]
...: heapq.heapify(s)
...: for _ in range(75): heapq.heappop(s)
...:
1 loops, best of 3: 264 ms per loop
* * *
Note: to counter F.J biased profiling:
In [13]: a = list(range(1000000))
In [14]: random.shuffle(a)
In [15]: %timeit sorted(a)
1 loops, best of 3: 985 ms per loop
In [16]: %%timeit
...: s = a[:]
...: heapq.heapify(s)
...:
1 loops, best of 3: 284 ms per loop
As you can see, `heapify` is quite a bit faster than sorting, even on lists of
1,000,000 elements.
|
Python - Calculate histogram of image
Question: I'm working on teaching myself the basics of computerized image processing,
and I am teaching myself Python at the same time.
Given an image `x` of dimensions 2048x1354 with 3 channels, efficiently
calculate the histogram of the pixel intensities.
import numpy as np, cv2 as cv

img = cv.imread("image.jpg")
bins = np.zeros(256, np.int32)

for i in range(0, img.shape[0]):
    for j in range(0, img.shape[1]):
        intensity = 0
        for k in range(0, len(img[i][j])):
            intensity += img[i][j][k]
        bins[intensity/3] += 1

print bins
My issue is that this code runs pretty slowly, as in ~30 seconds. How can I
speed this up and be more Pythonic?
Answer: It's impossible to do this (i.e., without removing the for loop) in pure Python.
Python's for loop construct has too many things going on to be fast. If you
really want to keep the for loop, the only solution is numba or cython, but
these have their own set of issues. Normally, such loops are written in
C/C++ (most straightforward in my opinion) and then called from Python;
Python's main role is that of a scripting language.
Having said that, opencv+numpy provides enough useful routines so that in 90%
of cases, it's possible to simply use built-in functions without having to
resort to writing your own pixel-level code.
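For example, a vectorized version of your histogram might look like this (a sketch; it reproduces the `bins[intensity/3]` logic above, i.e. the integer mean of the three channels):

import numpy as np, cv2 as cv

img = cv.imread("image.jpg")
# sum the channels in a wide integer type, then take the integer mean
intensity = img.astype(np.int32).sum(axis=2) // 3
bins = np.bincount(intensity.ravel(), minlength=256)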
Here's a solution in numba without changing your looping code. On my computer
it's about 150 times faster than pure Python.
import numpy as np, cv2 as cv
from time import time
from numba import jit, int_, uint8

@jit(argtypes=(uint8[:,:,:], int_[:]),
     locals=dict(intensity=int_),
     nopython=True
     )
def numba(img, bins):
    for i in range(0, img.shape[0]):
        for j in range(0, img.shape[1]):
            intensity = 0
            for k in range(0, len(img[i][j])):
                intensity += img[i][j][k]
            bins[intensity/3] += 1

def python(img, bins):
    for i in range(0, img.shape[0]):
        for j in range(0, img.shape[1]):
            intensity = 0
            for k in range(0, len(img[i][j])):
                intensity += img[i][j][k]
            bins[intensity/3] += 1

img = cv.imread("image.jpg")
bins = np.zeros(256, np.int32)

t0 = time()
numba(img, bins)
t1 = time()
#print bins
print t1 - t0

bins[...] = 0
t0 = time()
python(img, bins)
t1 = time()
#print bins
print t1 - t0
|
Python lists not working properly
Question:
import random

words = ["Football", "Happy", "Sad", "Love", "Human"]
for word in words:
    word = random.choice(words)
    print(word)
    words.remove(word)
Why does the above code only print out 3 words instead of all 5? Am I going
about printing the words from `words` in a random order in an incorrect way?
Answer: You can't modify a list (by adding or removing elements) while iterating over
it; the behaviour is undefined. Here's a possible alternative for what you're
doing that doesn't have that problem:
random.shuffle(words)
for word in words:
    print(word)
|
python counting and appending to list
Question: I'm trying to count how many times something occurs in a list. Is it possible
to set a variable to move through each index and count it? I want to append
how many times each one is counted to a list.
I want it to look like this (forget the while loop, it's just to show that I'm
looping this example). And, if possible, I'd like to import a library to
provide a shortcut, unless this is the only way.
while True:
    index = 0
    L = ["brown", "brown", "brown", "red", "red", "yellow", "yellow"]
    numberOfTimes = L.count([index])
    index = index + numberOfTimes
    numberOfTimesList.append(numberOfTimes)
I'd then want to make another list so that I'd only see brown once, like
this:
["brown", "red", "yellow"] [3, 2, 2]
Answer: Use
[`collections.Counter`](http://docs.python.org/2/library/collections.html):
from collections import Counter
L = ["brown", "brown", "brown", "red", "red", "yellow", "yellow"]
cnt = Counter(L)
print cnt
print cnt.keys(), cnt.values()
Output:
Counter({'brown': 3, 'yellow': 2, 'red': 2})
['brown', 'yellow', 'red'] [3, 2, 2]
The resulting counter object can be manipulated as a dictionary, with
additional convenient routines such as `cnt.most_common(n)` which will return
the `n` most common elements and their counts.
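For example, a small illustration of the `most_common()` routine mentioned above:

print cnt.most_common(1)    # [('brown', 3)]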
|
palindromic numbers in python
Question: Trying to find the largest palindrome that's the product of two three-digit
numbers. Before I look up the infinitely more efficient and - more importantly
- working solution, could you tell me what's wrong with my code? I just keep
getting the empty set.
def palindrome():
    n = 100
    m = 100
    palind = []
    while n<=999:
        while m<=999:
            prod = n * m
            if str(prod) == str(prod)[::-1] and prod > palind[0]:
                palind.pop(0)
                palind.append(prod)
                return palind
            m = m + 1
        n = n + 1
    return palind

print palindrome()
Answer: This short-circuits when it's impossible for an i*j to exceed the largest
recorded product, and correctly returns 906609 (note: if you're on Python 2,
the below will work for you, but you'd prefer to use `xrange` instead of
`range` to avoid creating unnecessary lists in memory):
def palindrome(floor=0, upto=999):
    '''
    return the largest palindrome product for all number from (but not including)
    floor to (and including) upto
    '''
    start = upto
    largest = None
    for i in range(start, floor, -1): # decreasing from upto
        if i * upto < largest: # if True, impossible for higher product from combo
            break
        for j in range(start, i-1, -1): # decrease from upto to not including i-1
            product = i*j
            if str(product) == str(product)[::-1]:
                if product > largest:
                    largest = product
    return largest
Usage:
>>> palindrome(99,999)
906609
>>> palindrome(10,13)
121
>>> palindrome(0,10)
9
The short-cutting is important because if given a very large number, it can
take quite a while to return:
>>> palindrome(upto=100000000)
9999000000009999L
I also created a generator that hits every single combination from 0 to 999,
and it returns 906609.
def palindrome(upto=1000):
    return max(i*j for i in range(upto) for j in range(upto)
               if str(i*j) == str(i*j)[::-1])
But when running this palindrome as in:
>>> palindrome(upto=100000000)
The complete search will search all 100000000^2, and take far too long.
I first had written it like this, with the idea that it would short-cut and
avoid iterating over every possible combination, but this is incorrect, it
returns 888888:
def palindrome():
    start = 999
    largest = 0
    for i in range(start, 0, -1): # decreasing from 999
        if i * 999 < largest:
            return largest
        for j in range(start, i, -1): # decreasing from 999 to i
            if str(i*j) == str(i*j)[::-1]:
                largest = i*j
It first multiplies 999 times 999, then 998 times 999, then
998*998
997*999
997*998
997*997
...
But the results aren't monotonically decreasing (that is, each result is not
guaranteed to be smaller than the previous.)
|
OpenShift mysql connection issues in python
Question: I previously wrote my app using local development servers, and now that I have
moved it onto an openshift small gear almost all works except for mysql
connections.
In my code I have the line:
self.db = MySQLdb.connect(host, username, password, dbname)
When I review the openshift error log, the following error is reported:
_mysql_exceptions.OperationalError: (2002, "Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)")
I think that python is trying to connect using a UNIX socket as opposed to an
INET one, but I'm not sure how to change this behavior. Any help is much
appreciated.
Answer: Not specific to MySQLdb: if you use `localhost` as hostname, a MySQL client
using the MySQL C libraries will try to connect using UNIX socket (or named
pipe on Windows). There are 2 ways around this, but you'll need to grant extra
permissions to make it work for both:
## Use IP address 127.0.0.1
Use IP address 127.0.0.1 instead of the localhost hostname. This will make
MySQL client connect using TCP/IP.
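For example, a sketch using the variables from the question (the OPENSHIFT_MYSQL_DB_* environment variable names are an assumption about the OpenShift gear; they fall back to plain 127.0.0.1 if unset):

import os
import MySQLdb

host = os.environ.get('OPENSHIFT_MYSQL_DB_HOST', '127.0.0.1')
port = int(os.environ.get('OPENSHIFT_MYSQL_DB_PORT', '3306'))
# an IP address (rather than 'localhost') forces a TCP/IP connection
db = MySQLdb.connect(host=host, port=port, user=username,
                     passwd=password, db=dbname)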
## Use option files
The other way is to force the protocol using using option files. For example,
in your `~/.my.cnf` (or any file you want), add the following:
[python]
protocol=tcp
Now use the connection arguments to read the option file and group:
import MySQLdb

cnx = MySQLdb.connect(host='localhost', user='scott', passwd='tiger',
                      read_default_file='~/.my.cnf',
                      read_default_group='python')
The group name does not need to be `python`, but it is good not to use `mysql`
or `client` as it might interfere with other MySQL tools (unless you want that
of course).
For setting up permissions, you'll need to use the IP address of localhost,
something like:
mysql> GRANT SELECT ON yourdb.* TO 'scott'@'127.0.0.1' IDENTIFIED BY ...;
(Site note: MySQL database drivers such as MySQL Connector/Python do not
consider `localhost` to be special and connect through TCP/IP right away and
you have to explicitly use the `unix_socket`.)
|
How to enable textsearch in mongodb using python?
Question: I have written `textSearchEnabled=true` in the following script but a syntax
error occurs. I cannot understand how to enable text search in MongoDB
using Python.
import json
import pymongo # pip install pymongo
from bson import json_util # Comes with pymongo
from pymongo import MongoClient
--setParameter textSearchEnabled=true
client = pymongo.MongoClient('mongodb://user:[email protected]:33499/enron')
db = client.enron
mbox = db.mbox
# Create an index if it doesn't already exist
mbox.ensure_index([("$**", "text")], name="TextIndex")
# Get the collection stats (collstats) on a collection
# named "mbox"
print json.dumps(db.command("collstats", "mbox"), indent=1)
# Use the db.command method to issue a "text" command
# on collection "mbox" with parameters, remembering that
# we need to use json_util to handle serialization of our JSON
print json.dumps(db.command("text", "mbox",
search="raptor",
limit=1),
indent=1, default=json_util.default)
Answer: Normally textSearch is enabled via the mongodb.conf or when starting mongodb.
See: [Enable Text Search](http://docs.mongodb.org/manual/tutorial/enable-text-
search/) documentation.
You can also run it via an admin command if the user has the admin rights:
client.admin.command('setParameter', textSearchEnabled=True)
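For reference, the linked documentation enables the parameter when starting the server (or with the equivalent setParameter line in the config file) rather than from inside the Python script, e.g.:

mongod --setParameter textSearchEnabled=true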
|
python: input is not matching
Question:
import os
import fileinput

filenames1=os.listdir("./chi_square_metal_region_1")
filenames1.sort()

for line in fileinput.input("./test_input.dat"):
    for eachfile in filenames1:
        if eachfile == line:
            print yes
I don't get any errors when I run this code, but it's not printing 'yes',
which is the expected output.
It should print 'yes' twice, as there are two files in the input which match
with `eachfile`. Why is it not printing the expected output?
Answer: The problem is that in:
for line in fileinput.input("./test_input.dat"):
the variable `line` will contain '\n' at the end. Try this:
import os
import fileinput

filenames1=os.listdir("./chi_square_metal_region_1")
filenames1.sort()

for line in fileinput.input("./test_input.dat"):
    for eachfile in filenames1:
        if eachfile == line[:-1]:  # strip the trailing newline before comparing
            print 'yes'
|
Python redmine: wiki page text
Question: I use [python-redmine](https://github.com/maxtepkeev/python-redmine/) and want
to get the wiki page text, but I get an error - the attribute doesn't exist
(though the text exists). Here is my code:
from redmine import Redmine

redmine = Redmine('http://redmine.example.com', username='user', password='1234')
projects = redmine.project.all()
for project in projects:
    print('Project: '+project.name)
    try:
        for page in project.wiki_pages:
            try:
                print('Title: '+page.title)
            except:
                print('Title: none')
            try:
                print('Content: '+page.text)
            except:
                print('Content: none')
            print('\n===========================\n')
    except:
        print('None')
How can I get the text from a wiki page? Please help!
PS: Python 3, python-redmine installed via pip
Answer: Rails wraps attributes in methods, so it is hard to tell where you are using an
attribute and where it is just a method.
[Here](https://github.com/redmine/redmine/blob/2.4-stable/app/models/wiki_page.rb#L25)
you can see that `wiki_page` has one content, and
[here](https://github.com/redmine/redmine/blob/2.4-stable/app/models/wiki_page.rb#L139)
you can see the method definition for `text` (the attribute `text` belongs to
the content!).
I don't have experience working with `python-redmine`, but I suppose that
you need to call `page.content.text`, not `page.text` - or fetch `page.content`
some other way.
|
Error while creating an object of an other class in a class of type Thread in Python
Question: Level: Beginner
I am using python v2.7 and wxPython v3.0 on windows 7 32-bit.
**My app** : I have 3 classes. One class is `gui(wx.Frame)` and other is
`TestThread(Thread)` and the third is `labels()`.
**Problem** : I am trying to create an object of the `gui(wx.Frame)` class in
`TestThread(Thread)` class, but I am getting an error as given below:
Traceback (most recent call last):
File "C:\test\post.py", line 11, in <module>
class TestThread(Thread):
File "C:\test\post.py", line 12, in TestThread
guiObj = gui()
NameError: name 'gui' is not defined
However if I try to call the `createPanels()` of `gui(wx.Frame)` class from
the `TestThread(Thread)` class like this `wx.CallAfter(gui().createPanels())`
then I get following error:
Exception in thread Thread-1:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
self.run()
File "C:\test\post.py", line 24, in run
wx.CallAfter(gui().createPanels())
TypeError: __init__() takes exactly 4 arguments (1 given)
I think the reason is something related to the `__init__()` of the
`gui(wx.Frame)`, but I don't understand it.
**Update:** I tried to create an object of the `labels()` class in the
`TestThread(Thread)` class, and I get the same error as shown in the first
case above. Is there something special about this `TestThread(Thread)` class?
The complete code is provided below and can be [downloaded
here](https://db.tt/ZjqgqJwH) to avoid indentation problems:
#!/usr/bin/env python

from random import randrange
import wx
import wx.lib.scrolledpanel
from threading import Thread
from wx.lib.pubsub import setuparg1
from wx.lib.pubsub import pub as Publisher

##################################################
class TestThread(Thread):
    guiObj = gui()

    def __init__(self):
        Thread.__init__(self)
        self.start()    # start the thread

    def run(self):
        wx.CallAfter(guiObj.createPanels())
        time.sleep(5)

##############################################
class gui(wx.Frame):
    def __init__(self, parent, id, title):
        screenWidth = 800
        screenHeight = 450
        screenSize = (screenWidth, screenHeight)
        wx.Frame.__init__(self, None, id, title, size=screenSize)
        self.locationFont = locationFont = wx.Font(15, wx.MODERN, wx.NORMAL, wx.BOLD)
        mainSizer = wx.BoxSizer(wx.VERTICAL)
        self.sizer = sizer = wx.BoxSizer(wx.VERTICAL)
        self.panel = panel = wx.lib.scrolledpanel.ScrolledPanel(self, -1, style=wx.SIMPLE_BORDER)
        panel.SetupScrolling()
        panel.SetBackgroundColour('#FFFFFF')
        panel.SetSizer(sizer)
        mainSizer.Add(panel, 15, wx.EXPAND|wx.ALL)
        self.SetSizer(mainSizer)

    def createPanels(self):
        k = 0
        labelObj = labels()
        locations = labelObj.getLabel()
        print locations
        for i in locations:
            sPanels = 'sPanel'+str(k)
            sPanels = wx.Panel(self.panel)
            label = str(k+1)
            text = wx.StaticText(sPanels, -1, label0)
            text.SetFont(self.locationFont)
            text.SetForegroundColour('#0101DF')
            self.sizer.Add(sPanels, 0, wx.ALL, 5)
            self.sizer.Add(wx.StaticLine(self.panel), 0, wx.ALL|wx.EXPAND, 0)
            k += 1
        TestThread()

################################################
class labels():
    def getLabel(self):
        mylist =[]
        i = randrange(10)
        for k in range(1,i+1):
            mylist.append(k)
        return mylist

###############################################
if __name__=='__main__':
    app = wx.App()
    frame = gui(parent=None, id=-1, title="Test")
    frame.Show()
    app.MainLoop()
Thank you for your time!
Answer: **For the first issue:**
`guiObj = gui()` is executed when Python first runs the file, which causes two
problems:
* Python does not know about `gui` when executing it
* All instances of `TestThread` will have the same `gui` instance
To fix both of them, you have to put `guiObj = gui()` in the constructor of
`TestThread`.
If you only want to fix the first one, put the declaration of `gui` before the
declaration of `TestThread`.
**For the second issue:**
Using `gui()` is wrong because the constructor of `gui` takes three arguments
(four actually, but the first one is implicit). You have to call it with three
arguments, like you did at the end of the program: `gui(parent=None, id=-1,
title="Test")`.
|
Modules for correlate2d in Python
Question: I want to calculate the correlation between two matrices using
[`correlate2d`](http://docs.scipy.org/doc/scipy-0.7.x/reference/generated/scipy.stsci.convolve.correlate2d.html)
(code: `corr = correlate2d(im, im, fft = True)`).
`correlate2d` was part of scipy and is now under
[stsci_python](http://www.stsci.edu/institute/software_hardware/pyraf/stsci_python).
Currently I am loading modules as follows:
import scipy
from scipy import *
import numpy as np
import stsci
from stsci import *
I get the error `name 'correlate2d' is not defined`. What modules do I need to
load for this?
Answer: SciPy has correlate2d in the signal sub-package:
import scipy.signal
followed by:
corr = scipy.signal.correlate2d(matrix1,matrix2)
SciPy sub-packages need to be imported separately; stsci may be the same.
-Aldo
|
ImportError: No module named pkg_resources after I upgrade python from 2.6.6 to 2.7.3
Question: When I run `pip install xxx` or `easy_install xxx`, I met this problem after
upgrading Python from 2.6.6 to 2.7.3:
Traceback (most recent call last):
File "/usr/bin/pip", line 5, in <module>
from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
Could someone give me some advice ?
Answer: First, try to run python26 or python27 to see what is happening.
Check whether PYTHONPATH is set in your shell. If it is, unset it!
Or set it to `PYTHONPATH=/usr/lib/python2.7:/usr/lib/python2.7/site-packages/`
|
Celery Beat Windows Simple Example (not with Django)
Question: I'm really struggling to set up a periodic task using Celery Beat on Windows 7
(unfortunately that is what I'm dealing with at the moment). The app that will
be using celery is written with CherryPy, so the Django libraries are not
relevant here. All I'm looking for is a simple example of how to start the
Celery Beat Process in the background. The FAQ section says the following, but
I haven't been able to actually do it yet:
Windows
The -B / –beat option to worker doesn’t work?¶
Answer: That’s right. Run celery beat and celery worker as separate services
instead.
My project layout is as follows:
proj/
    __init__.py (empty)
    celery.py
    celery_schedule.py
    celery_settings.py (these work)
    tasks.py
celery.py:
from __future__ import absolute_import

from celery import Celery
from proj import celery_settings
from proj import celery_schedule

app = Celery(
    'proj',
    broker=celery_settings.BROKER_URL,
    backend=celery_settings.CELERY_RESULT_BACKEND,
    include=['proj.tasks']
)

# Optional configuration, see the application user guide.
app.conf.update(
    CELERY_TASK_RESULT_EXPIRES=3600,
    CELERYBEAT_SCHEDULE=celery_schedule.CELERYBEAT_SCHEDULE
)

if __name__ == '__main__':
    app.start()
tasks.py
from __future__ import absolute_import

from proj.celery import app

@app.task
def add(x, y):
    return x + y
celery_schedule.py
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'add-every-30-seconds': {
        'task': 'tasks.add',
        'schedule': timedelta(seconds=3),
        'args': (16, 16)
    },
}
Running "celery worker --app=proj -l info" from the command line (from the
parent directory of "proj") starts the worker thread just fine and I can
execute the add task from the Python terminal. However, I just can't figure
out how to start the beat service. Obviously the syntax is probably incorrect
as well because I haven't gotten past the missing --beat option.
Answer: Just start another process via a new terminal window; make sure you are in the
correct directory and execute the command `celery beat` (no '--' needed
preceding the beat keyword).
If this does not solve your issue, rename your celery_schedule.py file to
celeryconfig.py and include it in your celery.py file as
app.config_from_object('celeryconfig')
right above your `if __name__ == '__main__':` block, then spawn a new celery
beat process: `celery beat`
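With the layout above, the two processes would then be started from the parent directory of "proj" in two separate terminals, roughly like this (the `--app` flag for beat mirrors the worker command from the question):

celery worker --app=proj -l info
celery beat --app=proj -l info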
|
Python2 sax parser, best speed and performance for large files?
Question: So I've been using suds with great benefit to consume a webservice.
I hit an issue with performance: for some data the CPU would spike hard and it
would take more than 60s to complete the request, which is served by gunicorn,
suds to the webservice and so on.
Looking into it with line_profiler, objgraph, memory_profiler etc., I find the
culprit: it takes about 13s to parse a 9.2MB XML file, which is the response
from the webservice.
That cannot be normal, right? Just 9.2MB, and I see 99% of the time is spent
parsing it, and the parsing is done by "from xml.sax import make_parser", which
means standard Python?
Any faster XML parsers out there for big files?
I'll look into exactly what kind of structure is in the XML, but so far I know
it's "UmResponse", which contains around 7000 "Document" elements, each
containing 10-20 lines of elements.
EDIT: Investigating further I see half of that 13s is spent in the suds
Handler in suds/sax/ ... hm, could be a suds problem and not the Python
library, of course.
EDIT2: The suds unmarshaller used most of the time spent processing this, about
50s; parsing with sax was also slow; pysimplesoap, which uses xml.minidom,
takes about 13s and lots of memory. However lxml.etree is below 2s, and
objectify is also very fast, fast enough to use it instead of ElementTree
(which is faster than cElementTree for this specific XML here: 0.5s for one,
0.17s for the other).
Solution: Suds allows the parameter retxml to be true, to give back the XML
without parsing and unmarshalling; from there I can do it faster with lxml.
Answer: Suds parsing with sax took time, and the unmarshalling method in the suds
source (bindings/binding), which uses the class umx/Typed quite a lot, took
even more.
Solution, bypass all of that: pass retxml=True to the client so that suds
doesn't do parsing and unmarshalling - an awesome option in suds! Instead do it
with lxml, which I found to be the fastest, somehow even faster than
cElementTree.
from lxml import objectify
from lxml.etree import XMLParser
Now another problem was that the xml had huge text nodes, more than 10MB, so
lxml would bail; the XMLParser needs the flag huge_tree=True to swallow and
process the large data file. Set it up like this - the set_element_class_lookup
is what's really of great benefit; without it you don't really get an
ObjectifiedElement back.
parser = XMLParser(remove_blank_text=True, huge_tree=True)
parser.set_element_class_lookup(objectify.ObjectifyElementClassLookup())
objectify.set_default_parser(parser)
obj = objectify.fromstring(ret_xml)
# iter here and return Body or Body[0] or whatever you need
#so all code which worked with suds unmarshaller works with objectified aswell
Then the rest of the code, which looked up elements by property when suds had
unmarshalled them, worked fine (just after returning the Body of the soap
envelope) - no need to hassle with xpath or iterparse on xml elements.
objectify does its job in 1-2s, compared to 50-60s for suds unmarshalling.
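A sketch of the retxml side of that (the service URL and method name are placeholders):

from suds.client import Client

client = Client('http://example.com/service?wsdl', retxml=True)
ret_xml = client.service.SomeMethod()   # raw SOAP response bytes, no suds unmarshalling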
|
Uploading video to YouTube and adding it to playlist using YouTube Data API v3 in Python
Question: I wrote a script to upload a video to YouTube using the YouTube Data API v3 in
Python, with the help of the example given in [Example
code](https://developers.google.com/youtube/v3/guides/uploading_a_video).
And I wrote another script to add the uploaded video to a playlist using the
same YouTube Data API v3, which can be seen
[here](https://github.com/alokmahor/add_to_youtube_playlist/blob/master/playlist.py).
After that I wrote a single script to upload a video and add that video to a
playlist. In that I took care of authentication and scopes, but I am still
getting a permission error. Here is my new script:
#!/usr/bin/python

import httplib
import httplib2
import os
import random
import sys
import time

from apiclient.discovery import build
from apiclient.errors import HttpError
from apiclient.http import MediaFileUpload
from oauth2client.file import Storage
from oauth2client.client import flow_from_clientsecrets
from oauth2client.tools import run

# Explicitly tell the underlying HTTP transport library not to retry, since
# we are handling retry logic ourselves.
httplib2.RETRIES = 1

# Maximum number of times to retry before giving up.
MAX_RETRIES = 10

# Always retry when these exceptions are raised.
RETRIABLE_EXCEPTIONS = (httplib2.HttpLib2Error, IOError, httplib.NotConnected,
                        httplib.IncompleteRead, httplib.ImproperConnectionState,
                        httplib.CannotSendRequest, httplib.CannotSendHeader,
                        httplib.ResponseNotReady, httplib.BadStatusLine)

# Always retry when an apiclient.errors.HttpError with one of these status
# codes is raised.
RETRIABLE_STATUS_CODES = [500, 502, 503, 504]

CLIENT_SECRETS_FILE = "client_secrets.json"

# A limited OAuth 2 access scope that allows for uploading files, but not other
# types of account access.
YOUTUBE_UPLOAD_SCOPE = "https://www.googleapis.com/auth/youtube.upload"
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"

# Helpful message to display if the CLIENT_SECRETS_FILE is missing.
MISSING_CLIENT_SECRETS_MESSAGE = """
WARNING: Please configure OAuth 2.0
To make this sample run you will need to populate the client_secrets.json file
found at:
%s
with information from the APIs Console
https://code.google.com/apis/console#access
For more information about the client_secrets.json file format, please visit:
https://developers.google.com/api-client-library/python/guide/aaa_client_secrets
""" % os.path.abspath(os.path.join(os.path.dirname(__file__),
                                   CLIENT_SECRETS_FILE))

def get_authenticated_service():
    flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE, scope=YOUTUBE_UPLOAD_SCOPE,
                                   message=MISSING_CLIENT_SECRETS_MESSAGE)

    storage = Storage("%s-oauth2.json" % sys.argv[0])
    credentials = storage.get()

    if credentials is None or credentials.invalid:
        credentials = run(flow, storage)

    return build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,
                 http=credentials.authorize(httplib2.Http()))

def initialize_upload(title, description, keywords, privacyStatus, file):
    youtube = get_authenticated_service()

    tags = None
    if keywords:
        tags = keywords.split(",")

    insert_request = youtube.videos().insert(
        part="snippet,status",
        body=dict(
            snippet=dict(
                title=title,
                description=description,
                tags=tags,
                categoryId='26'
            ),
            status=dict(
                privacyStatus=privacyStatus
            )
        ),
        # chunksize=-1 means that the entire file will be uploaded in a single
        # HTTP request. (If the upload fails, it will still be retried where it
        # left off.) This is usually a best practice, but if you're using Python
        # older than 2.6 or if you're running on App Engine, you should set the
        # chunksize to something like 1024 * 1024 (1 megabyte).
        media_body=MediaFileUpload(file, chunksize=-1, resumable=True)
    )

    vid = resumable_upload(insert_request)

    # Here I added lines to add video to playlist
    #add_video_to_playlist(youtube,vid,"PL2JW1S4IMwYubm06iDKfDsmWVB-J8funQ")
    #youtube = get_authenticated_service()
    add_video_request = youtube.playlistItems().insert(
        part="snippet",
        body={
            'snippet': {
                'playlistId': "PL2JW1S4IMwYubm06iDKfDsmWVB-J8funQ",
                'resourceId': {
                    'kind': 'youtube#video',
                    'videoId': vid
                }
                #'position': 0
            }
        }
    ).execute()

def resumable_upload(insert_request):
    response = None
    error = None
    retry = 0
    vid = None
    while response is None:
        try:
            print "Uploading file..."
            status, response = insert_request.next_chunk()
            if 'id' in response:
                print "'%s' (video id: %s) was successfully uploaded." % (
                    title, response['id'])
                vid = response['id']
            else:
                exit("The upload failed with an unexpected response: %s" % response)
        except HttpError, e:
            if e.resp.status in RETRIABLE_STATUS_CODES:
                error = "A retriable HTTP error %d occurred:\n%s" % (e.resp.status,
                                                                     e.content)
            else:
                raise
        except RETRIABLE_EXCEPTIONS, e:
            error = "A retriable error occurred: %s" % e

        if error is not None:
            print error
            retry += 1
            if retry > MAX_RETRIES:
                exit("No longer attempting to retry.")

            max_sleep = 2 ** retry
            sleep_seconds = random.random() * max_sleep
            print "Sleeping %f seconds and then retrying..." % sleep_seconds
            time.sleep(sleep_seconds)
    return vid

if __name__ == '__main__':
    title = "sample title"
    description = "sample description"
    keywords = "keyword1,keyword2,keyword3"
    privacyStatus = "public"
    file = "myfile.mp4"
    vid = initialize_upload(title, description, keywords, privacyStatus, file)
    print 'video ID is :', vid
I am not able to figure out what is wrong. I am getting a permission error;
both scripts work fine independently.
Could anyone help me figure out where I am wrong, or how to upload a video and
add it to a playlist?
Answer: I found the answer: the two independent scripts actually use different OAuth scopes.
The scope for uploading is "<https://www.googleapis.com/auth/youtube.upload>"
and the scope for adding to a playlist is "<https://www.googleapis.com/auth/youtube>".
Because the scopes are different, I had to handle authentication for the two calls separately.
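If you would rather keep a single credential, `flow_from_clientsecrets` also accepts an iterable of scopes, so a possible alternative (an untested sketch, reusing CLIENT_SECRETS_FILE and MISSING_CLIENT_SECRETS_MESSAGE from the upload script) is to request both scopes in one flow:
from oauth2client.client import flow_from_clientsecrets
# request both scopes up front so one credential covers upload and playlist calls
YOUTUBE_SCOPES = ["https://www.googleapis.com/auth/youtube.upload",
                  "https://www.googleapis.com/auth/youtube"]
flow = flow_from_clientsecrets(CLIENT_SECRETS_FILE,
                               scope=YOUTUBE_SCOPES,
                               message=MISSING_CLIENT_SECRETS_MESSAGE)
You would have to delete the previously stored *-oauth2.json file so the flow runs again and the new consent covers both scopes.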
|
Embedding Python 3.3 in C++ from zipped standard library on Windows XP
Question: I want to embed Python 3.3.4 in my C++ application so that:
* Python's standard library is always taken from a zip archive alongside my app's executable (shouldn't depend on any environment vars etc);
* my own custom .py modules are imported from _another_ folder _or_ zip archive alongside the executable.
And, in fact, I've almost managed to do it right. The only thing that still
does not work is importing the standard library from a ZIP archive: it works
ok as a simple directory, but whenever I try to zip it, initialization fails
with the following error:
Fatal Python error: Py_Initialize: unable to load the file system codec
Is it even possible with latest Python? I've googled a lot for it and lots of
sources claim that putting correct "python33.zip" near the executable should
work. Still, my experiments prove otherwise. What am I missing?
Here's my test code - a minimal console application made by MS Visual Studio
2010, running on Windows XP SP3, with some comments as to what I tried and
what are the results:
#include "stdafx.h"
#include "python.h"
int _tmain(int argc, _TCHAR* argv[])
{
// calling or not calling Py_SetProgramName doesn't seem to change anything
//Py_SetProgramName(argv[0]);
// python_lib is a directory with contents of python33/Lib
// python_lib.zip is an equivalent ZIP archive with contents of python33/Lib (without any top-level subdirs)
// _scripts.dat is a ZIP archive containing a custom script (hello.py)
//Py_SetPath(L"python_lib;_scripts.dat"); // works fine! (non-zipped standard library, zipped custom script)
Py_SetPath(L"python_lib.zip;_scripts.dat"); // both std library and scripts are zipped - fails with error "unable to load the file system codec" during Py_Initialize()
Py_Initialize();
PyRun_SimpleString("from time import time,ctime\n"
"print('Today is',ctime(time()))\n");
PyRun_SimpleString("import hello"); // runs hello.py from inside _scripts.dat (works fine if Py_Initialize succeeds)
Py_Finalize();
return 0;
}
Answer: This problem was recently discovered and documented in [Python Issue
20621](http://bugs.python.org/issue20621). A fix for it will be released in
Python 3.3.5; 3.3.5 release candidate 2 is now available for testing.
<http://www.python.org/download/releases/3.3.5/>
|
fast read less structure ascii data file in numpy
Question: I would like to read a data grid (3D array of floats) from .xsf file. (format
documentation is here <http://www.xcrysden.org/doc/XSF.html> the
BEGIN_BLOCK_DATAGRID_3D block )
The problem is that the data are in 5 columns, and if the number of elements Nx*Ny*Nz is not divisible by 5 then **the last line can have any length**. For this reason I'm not able to use **numpy.genfromtxt()** or **numpy.loadtxt()** ...
I made a subroutine which does solve the problem, but it is terribly slow (probably because it uses tight loops). The files I want to read are large (>200 MB; 200x200x200 = 8,000,000 numbers in ASCII).
Is there any **really fast way how to read such unfriendly formats** in python
/ numpy into ndarray?
* * *
xsf datagrids looks like this (example for shape=(3,3,3))
BEGIN_BLOCK_DATAGRID_3D
BEGIN_DATAGRID_3D_this_is_3Dgrid
3 3 3 # number of elements Nx Ny Nz
0.0 0.0 0.0 # grid origin in real space
1.0 0.0 0.0 # grid size in real space
0.0 1.0 0.0
0.0 0.0 1.0
0.000 1.000 2.000 5.196 8.000 # data in 5 columns
1.000 1.414 2.236 5.292 8.062
2.000 2.236 2.828 5.568 8.246
3.000 3.162 3.606 6.000 8.544
4.000 4.123 4.472 6.557 8.944
1.000 1.414 # this is the problem
END_DATAGRID_3D
END_BLOCK_DATAGRID_3D
Answer: I got something working with Pandas and Numpy. Pandas will fill in nan values
for the missing data.
import pandas as pd
import numpy as np
df = pd.read_csv("xyz.data", header=None, delimiter=r'\s+', dtype=np.float, skiprows=7, skipfooter=2)
data = df.values.flatten()
data = data[~np.isnan(data)]
result = data.reshape((data.size/3, 3))
Output
>>> result
array([[ 0. , 1. , 2. ],
[ 5.196, 8. , 1. ],
[ 1.414, 2.236, 5.292],
[ 8.062, 2. , 2.236],
[ 2.828, 5.568, 8.246],
[ 3. , 3.162, 3.606],
[ 6. , 8.544, 4. ],
[ 4.123, 4.472, 6.557],
[ 8.944, 1. , 1.414]])
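A pandas-free sketch of the same idea: slurp the data block as one string and let numpy split it on whitespace, so the ragged last line doesn't matter. The 7 skipped header rows and 2 footer rows mirror the read_csv call above, and the reshape uses the Nx Ny Nz from the datagrid header (check which axis XSF treats as fastest-varying):
import numpy as np

with open("xyz.data") as f:
    lines = f.readlines()[7:-2]          # drop the header and END_* footer lines

data = np.fromstring("".join(lines), sep=" ", dtype=np.float32)
grid = data.reshape((3, 3, 3))           # Nx, Ny, Nz from the header line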
|
Trying to Parse JSON date to POST to another System (Python)
Question: I am trying to write a script to GET project data from Insightly and post to
10000ft. Essentially, I want to take any newly created project in one system
and create that same instance in another system. Both have the concept of a
'Project'
I am extremely new at this, but I only need to GET certain Project parameters in Insightly to pass into the other system (PROJECT_NAME, LINKS:ORGANIZATION_ID, DATE_CREATED_UTC), to name a few.
I plan to add logic to only POST projects with a DATE_CREATED_UTC > yesterday, but I am clueless about how to set up the script to grab the JSON strings and create Python variables (JSON datestring to datetime). Here is my current code. I am simply printing out some of the variables I require, to get comfortable with the code.
import urllib, urllib2, json, requests, pprint, dateutil
from dateutil import parser
import base64
#Set the 'Project' URL
insightly_url = 'https://api.insight.ly/v2.1/projects'
insightly_key =
api_auth = base64.b64encode(insightly_key)
headers = {
'GET': insightly_url,
'Authorization': 'Basic ' + api_auth
}
req = urllib2.Request(insightly_url, None, headers)
response = urllib2.urlopen(req).read()
data = json.loads(response)
for project in data:
project_date = project['DATE_CREATED_UTC']
project_name = project['PROJECT_NAME']
print project_name + " " + project_date
Any help would be appreciated
Edits:
I have updated the previous code with the following:
for project in data:
project_date = datetime.datetime.strptime(project['DATE_CREATED_UTC'], '%Y-%m-%d %H:%M:%S').date()
if project_date > (datetime.date.today() - datetime.timedelta(days=1)):
print project_date
else:
print 'No New Project'
This returns every project that was created after yesterday, but now I need to
isolate these projects and post them to the other system
Answer: Here is an example of returning a
[`datetime`](http://docs.python.org/3.3/library/datetime.html#datetime.datetime)
object from a parsed string. We will use the
[`datetime.strptime`](http://docs.python.org/3.3/library/datetime.html#datetime.datetime.strptime)
method to accomplish this. Here is a [list of the format
codes](http://docs.python.org/3.3/library/datetime.html#strftime-strptime-
behavior) you can use to create a format string.
>>> from datetime import datetime
>>> date_string = '2014-03-04 22:30:55'
>>> format = '%Y-%m-%d %H:%M:%S'
>>> datetime.strptime(date_string, format)
datetime.datetime(2014, 3, 4, 22, 30, 55)
As you can see, the `datetime.strptime` method returns a `datetime` object.
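For the second half (pushing the filtered projects to the other system), here is a hedged sketch with `requests`; the URL, payload fields and auth header are placeholders, not the real 10000ft API, so check their docs for the actual endpoint and field names:
import datetime
import json
import requests

# placeholders below -- not the real 10000ft endpoint or auth header
TENK_URL = "https://example.invalid/api/v1/projects"
TENK_HEADERS = {"Content-Type": "application/json", "auth": "YOUR-API-TOKEN"}

yesterday = datetime.date.today() - datetime.timedelta(days=1)
for project in data:  # `data` as loaded from the Insightly response above
    created = datetime.datetime.strptime(project['DATE_CREATED_UTC'],
                                         '%Y-%m-%d %H:%M:%S').date()
    if created > yesterday:
        payload = {"name": project["PROJECT_NAME"]}
        r = requests.post(TENK_URL, data=json.dumps(payload), headers=TENK_HEADERS)
        r.raise_for_status()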
|
Concurrently run two functions that take parameters and return lists?
Question: I understand that two functions can run in parallel using `multiprocessing` or
`threading` modules, e.g. [Make 2 functions run at the same
time](http://stackoverflow.com/questions/2957116/make-2-functions-run-at-the-
same-time) and [Python multiprocessing for parallel
processes](http://stackoverflow.com/questions/8753306/python-multiprocessing-
for-parallel-processes).
But the above examples only use print function. **Is it possible to run
functions that return a list in parallel in python, if so, how?**
I've tried with threading:
from threading import Thread
def func1(x):
return [i*i for i in x]
def func2(x):
return [i*i*i for i in x]
nums = [1,2,3,4,5]
p1 = Thread(target = func1(nums)).start()
p2 = Thread(target = func2(nums)).start()
print p1
print p2
but i got the follow error:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 808, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 761, in run
self.__target(*self.__args, **self.__kwargs)
TypeError: 'list' object is not callable
None
None
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 808, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 761, in run
self.__target(*self.__args, **self.__kwargs)
TypeError: 'list' object is not callable
I've tried inputing `args` parameter as a tuple, instead of a variable:
import threading
from threading import Thread
def func1(x):
return [i*i for i in x]
def func2(x):
return [i*i*i for i in x]
nums = [1,2,3,4,5]
p1 = Thread(target = func1, args=(nums,)).start()
p2 = Thread(target = func2, args=(nums,)).start()
print p1, p2
but it only returns `None None`, the desired output should be:
**[out]:**
[1, 4, 9, 16, 25] [1, 8, 27, 64, 125]
Answer: Thread's target function cannot return a value. Or, I should say, the `return`
value is ignored and as such, not communicated back to spawning thread. But
here's a couple things you can do:
1) Communicate back to spawning thread using `Queue.Queue`. Note the wrapper
around the original functions:
from threading import Thread
from Queue import Queue
def func1(x):
return [i*i for i in x]
def func2(x):
return [i*i*i for i in x]
nums = [1,2,3,4,5]
def wrapper(func, arg, queue):
queue.put(func(arg))
q1, q2 = Queue(), Queue()
Thread(target=wrapper, args=(func1, nums, q1)).start()
Thread(target=wrapper, args=(func2, nums, q2)).start()
print q1.get(), q2.get()
2) Use `global` to access result lists in your threads, as well as the
spawning process:
from threading import Thread
list1=list()
list2=list()
def func1(x):
global list1
list1 = [i*i for i in x]
def func2(x):
global list2
list2 = [i*i*i for i in x]
nums = [1,2,3,4,5]
Thread(target = func1, args=(nums,)).start()
Thread(target = func2, args=(nums,)).start()
print list1, list2
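A third option (a sketch, if you don't mind reaching for `multiprocessing.pool.ThreadPool`) hands the return values back through `apply_async`, with no wrapper function or globals:
from multiprocessing.pool import ThreadPool

pool = ThreadPool(processes=2)
res1 = pool.apply_async(func1, (nums,))   # schedules func1(nums) on a worker thread
res2 = pool.apply_async(func2, (nums,))
print res1.get(), res2.get()              # [1, 4, 9, 16, 25] [1, 8, 27, 64, 125]
ThreadPool shares the Pool API but runs the workers as threads, so the functions and their arguments don't need to be picklable.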
|
Convert a list of sets to a set of sets (to find the unique elements)
Question: I want to find the unique elements in `A =[set([1,2]),set([1,2]), set([1])]`
in Python. I tried set(A); it didn't work. Is there any easy way to do it?
Answer: Convert your sets to [`frozenset()`
objects](http://docs.python.org/2/library/stdtypes.html#frozenset):
set(frozenset(s) for s in A)
A `frozenset()` is an immutable set object, and more importantly, hashable.
Thus it can be stored in a `set()`.
Demo:
>>> A = [set([1,2]),set([1,2]), set([1])]
>>> set(frozenset(s) for s in A)
set([frozenset([1, 2]), frozenset([1])])
|
Querying an Excel spreadsheet and returning results? But which program?
Question: A company I am working for has recently asked the head tech to compile a
spreadsheet that has every single computer name listed, along with the person
that uses that computer. The idea is that Help Desk can have this spreadsheet
open, so when the user calls, they can find their name and therefore their
associated computer number.
However, this seems a bit clunky to me. We have around 1000 users, so that
spreadsheet would just be a pain imho. I thought it might be easier to have a
simple program that asks you for the User's name, you type it in, and it
returns the computer number.
So, I am assuming this program would reference this spreadsheet and then
return your information. Of course, it doesn't have to reference the
spreadsheet, it could reference something else.
I am only experienced with Python, and only because I took an introductory
class on it in college. However, I am thinking a batch file would work
better? Or would something else work better? I am taking this as a chance to
learn and perhaps offer a neat utility to our Help Desk, and was just looking
for advice. :)
Thank you.
Answer: If you're planning on using Python with Excel, there are two common ways to go (there are others):
1. use the windows API, which lets you run native excel commands through python. i.e.:
from win32com.client import Dispatch
excel = Dispatch('Excel.Application')
excel.Visible = True #if you want to see it
sheet = excel.Workbooks.open('Book1').Sheets('Sheet1')
# get value from a cell:
sheet.Cells(1, 1).value
# get range
sheet.Range('A1:A2000')
# etc. google about vlookup and other functionalities. You can run anything you would in excel through python here, it's just an API
This will be the fastest, most efficient method, as it uses native Excel
functionality, which is optimized. On the other hand, it's not very
user-friendly, as the Windows API is kinda messy (that's my opinion at least).
2. Now if you're like me, and prefer to go pythonic, you can use the excellent [xlrd library](http://www.python-excel.org/) which is very easy to use
import xlrd
book = xlrd.open_workbook('Book1')
sheet = book.sheet_by_name('Sheet1')
sheet.col(1)
I'm not gonna delve any deeper. You'll have to figure it out yourself, and as
you can see, it ain't too hard. Also, note that if you'd like to also _write
to the file_ you'll need another library called xlwt.
Of course, there's also a third option - pass the data to a proper database
(SQLite will probably suffice) and use SQL queries to fetch the data. That
would be the most fast and efficient way to go, if that's important to you
(SQL _was_ created for this exact purpose, unlike excel. so uh... y'know).
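A minimal sketch of that third option, assuming you have already extracted `rows` as a list of `(user, computer)` tuples with one of the libraries above:
import sqlite3

conn = sqlite3.connect('helpdesk.db')
conn.execute('CREATE TABLE IF NOT EXISTS machines (user TEXT, computer TEXT)')
conn.executemany('INSERT INTO machines VALUES (?, ?)', rows)
conn.commit()

name = raw_input("User's name: ")
query = 'SELECT user, computer FROM machines WHERE user LIKE ?'
for user, computer in conn.execute(query, ('%' + name + '%',)):
    print user, '->', computer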
Good luck!
|
numpy correlation coefficient: np.dot(A, A.T) on large arrays causing seg fault
Question: NOTE:
Speed is not as important as getting a final result.
However, some speed up over worst case is required as well.
I have a large array A:
A.shape=(20000,265) # or possibly larger like 50,000 x 265
I need to compute the correlation coefficients.
np.corrcoeff # internally casts the results as doubles
I just borrowed their code and wrote my own cov/corr without casting into doubles,
since I really only need 32 bit floats. And I ditched the conj() since my data
are always real.
cov = A.dot(A.T)/n   # where A is an array of 32 bit floats
diag = np.diag(cov)
corr = cov / np.sqrt(np.multiply.outer(diag, diag))
I still run out of memory and I'm using a large memory machine, 264GB
I've been told, that the fast C libraries, are probably using a routine which
breaks the dot product up into pieces, and to optimize this, the number of
elements is padded to a power of 2.
I don't really need to compute the symmetric half of the correlation
coefficient matrix. However, I don't see a way to do this in reasonable amount
of time doing it "manually", with python loops.
Does anybody know of a way to ask numpy for a decent dot product routine, that
balances memory usage with speed...?
Cheers
UPDATE:
Funny how writing these questions tends to help me find the language for a
better google query.
Found this:
http://wiki.scipy.org/PerformanceTips
Not sure that I follow it....so, please comment or provide answers about this
solution, your own ideas, or just general commentary on this type of problem.
TIA
EDIT: I apologize because my array is really much bigger than I thought. The array
size is actually 151,000 x 265. I'm running out of memory on a machine with
264 GB, with at least 230 GB free.
I'm surprised that the numpy call to blas dgemm and being careful with C order
arrays didn't do squat.
Answer: Python compiled with intel's mkl will run this with 12GB of memory in about 30
seconds:
>>> A = np.random.rand(50000,265).astype(np.float32)
>>> A.dot(A.T)
array([[ 86.54410553, 64.25226593, 67.24698639, ..., 68.5118103 ,
64.57299805, 66.69223785],
...,
[ 66.69223785, 62.01016235, 67.35866547, ..., 66.66306305,
65.75863647, 86.3017807 ]], dtype=float32)
If you do not have access to Intel's MKL, download the Python
[anaconda](http://continuum.io/downloads) distribution and install the accelerate package,
which has a 30-day trial version (free for academics) and contains an
MKL build. Various other C++ BLAS libraries should work also - even if it
copies the array from C to F order it should not take more than ~30GB of memory.
The only thing I can think of is that your installation is trying to
hold the entire 50,000 x 50,000 x 265 array in memory, which is quite
frankly terrible. For reference, a float32 50,000 x 50,000 array is only 10GB,
while the aforementioned array is 2.6TB...
If it's a gemm issue you can try a chunked gemm formula:
def chunk_gemm(A, B, csize):
out = np.empty((A.shape[0],B.shape[1]), dtype=A.dtype)
for i in xrange(0, A.shape[0], csize):
iend = i+csize
for j in xrange(0, B.shape[1], csize):
jend = j+csize
out[i:iend, j:jend] = np.dot(A[i:iend], B[:,j:jend])
return out
This will be slower, but will hopefully get over your memory issues.
|
python pandas read_csv how to parse microsecond
Question: I have csv file with microsecond as time.
Time,Bid
2014-03-03 23:30:30:224323224323,0.8925
2014-03-03 23:30:30:224390224390,0.892525
2014-03-03 23:30:30:224408224408,0.892525
2014-03-03 23:30:30:364299364299,0.892525
how do i parse microsecond into Time index with read_csv() or other function
read_json maybe?
Thank you!
Answer: Following on from @Jeff's comment you can do the following:
In [29]:
import pandas as pd
# specifically set the Time column to object dtype
df = pd.read_csv(r'c:\data\temp1.txt', dtype={'Time':object})
df
Out[29]:
Time Bid
0 2014-03-03 23:30:30:224323224323 0.892500
1 2014-03-03 23:30:30:224390224390 0.892525
2 2014-03-03 23:30:30:224408224408 0.892525
3 2014-03-03 23:30:30:364299364299 0.892525
[4 rows x 2 columns]
In [32]:
# trim the erroneous data
df.Time=df.Time.apply(lambda x: x[:-6])
# now apply to_datetime and pass the format string
df.Time = pd.to_datetime(df.Time, format='%Y-%m-%d %H:%M:%S:%f')
df.dtypes
Out[32]:
Time datetime64[ns]
Bid float64
dtype: object
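If you prefer to do the trimming and parsing in one step, `read_csv` also accepts per-column `converters`; a sketch on the same file (same idea of dropping the duplicated microsecond digits):
import pandas as pd

trim = lambda s: pd.to_datetime(s[:-6], format='%Y-%m-%d %H:%M:%S:%f')
df = pd.read_csv(r'c:\data\temp1.txt', converters={'Time': trim})
df = df.set_index('Time')   # optional: use Time as the index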
|
scrapy and relative paths
Question: I am not very fluent with Python; I have been trying for hours now to cut
off a few characters from multiple elements in a list. All scraped links that I want
to follow with my spider are relative; here are just a few lines of my output:
[u'../../../info/Auto/Dutch/']
[u'../../../info/Automazing/All_Star/']
[u'../../../info/AutoW/Passion/']
I am trying to get rid of
"../../.."
This is my code:
def parse(self, response):
hxs = HtmlXPathSelector(response)
links = hxs.select('//div[@class="partInnerDiv"]/table/tbody/tr')
items = []
for link in links:
item = myItem()
item ['url'] = link.select('th/a/@href').extract()
print complete_url(item ['url'])
items.append(item)
return items
def complete_url(string):
url = string
#if string.startswith("../"):
# return string[5:]
return url.replace('../../', '')
From the code above you can see that I tried a few methods, but nothing works
for me :(
exceptions.AttributeError: 'list' object has no attribute 'replace'
Any advice appreciated.
Answer: You are looking for
[`urlparse.urljoin()`](http://docs.python.org/2/library/urlparse.html#urlparse.urljoin):
import urlparse
urlparse.urljoin(u'http://www.mysite.com/a/b/c/', u'../../../info/Auto/Dutch/')
# => u'http://www.mysite.com/info/Auto/Dutch/'
In your callback it would become:
def parse(self, response):
hxs = HtmlXPathSelector(response)
links = hxs.select('//div[@class="partInnerDiv"]/table/tbody/tr')
items = []
for link in links:
item = myItem()
item ['url'] = [urlparse.urljoin(response.url, u)
for u in link.select('th/a/@href').extract()]
items.append(item)
return items
This code assumes Python 2.x; in Python 3, the function has been renamed to
`urllib.parse.urljoin()`, but Scrapy does not currently work with Python 3.
|
Scrapy - Non-ascii-character declared, but no encoding declared
Question: I'm attempting to scrape some basic data off this site as an exercise to learn
more about scrapy, and as proof of concept for a university project:
<http://steamdb.info/sales/>
When I was using the scrapy shell I was able to get the information I wanted
using the following XPath:
sel.xpath(‘//tbody/tr[1]/td[2]/a/text()’).extract()
which should return the title of the game of the first row of the table, in
the structure:
<tbody>
<tr>
<td></td>
<td><a>stuff I want here</a></td>
...
And it does, in the shell.
However, when I attempt to put this into a spider (steam.py):
1 from scrapy.spider import BaseSpider
2 from scrapy.selector import HtmlXPathSelector
3 from steam_crawler.items import SteamItem
4 from scrapy.selector import Selector
5
6 class SteamSpider(BaseSpider):
7 name = "steam"
8 allowed_domains = ["http://steamdb.info/"]
9 start_urls = ['http://steamdb.info/sales/?displayOnly=all&category=0&cc=uk']
10 def parse(self, response):
11 sel = Selector(response)
12 sites = sel.xpath("//tbody")
13 items = []
14 count = 1
15 for site in sites:
16 item = SteamItem()
17 item ['title'] = sel.xpath('//tr['+ str(count) +']/td[2]/a/text()').extract().encode('utf-8')
18 item ['price'] = sel.xpath('//tr['+ str(count) +']/td[@class=“price-final”]/text()').extract().encode('utf-8')
19 items.append(item)
20 count = count + 1
21 return items
I get the following error:
ricks-mbp:steam_crawler someuser$ scrapy crawl steam -o items.csv -t csv
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 5, in <module>
pkg_resources.run_script('Scrapy==0.20.0', 'scrapy')
File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 492, in run_script
File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 1350, in run_script
for name in eagers:
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
execute()
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/cmdline.py", line 143, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/cmdline.py", line 89, in _run_print_help
func(*a, **kw)
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/cmdline.py", line 150, in _run_command
cmd.run(args, opts)
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/commands/crawl.py", line 47, in run
crawler = self.crawler_process.create_crawler()
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/crawler.py", line 87, in create_crawler
self.crawlers[name] = Crawler(self.settings)
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/crawler.py", line 25, in __init__
self.spiders = spman_cls.from_crawler(self)
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/spidermanager.py", line 35, in from_crawler
sm = cls.from_settings(crawler.settings)
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/spidermanager.py", line 31, in from_settings
return cls(settings.getlist('SPIDER_MODULES'))
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/spidermanager.py", line 22, in __init__
for module in walk_modules(name):
File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/utils/misc.py", line 68, in walk_modules
submod = import_module(fullpath)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/xxx/scrape/steam/steam_crawler/spiders/steam.py", line 18
SyntaxError: Non-ASCII character '\xe2' in file /xxx/scrape/steam/steam_crawler/spiders/steam.py on line 18, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details
I have a feeling that all I need to do is somehow tell scrapy that the
characters will follow utf-8, not ascii - as there are £'s etc. But from what I
can gather, it's supposed to gather this information from the head of the page
it's scraping, which in the case of this site is:
<meta charset="utf-8">
Which leaves me baffled! I would also be interested in any insight/reading that
isn't the scrapy docs themselves!
Answer: It seems like you are using typographic ("smart") quotes `“` instead of plain double quotes `"` in your spider source (line 18), which is why Python reports a non-ASCII character with no encoding declared.
btw, a better practice to loop on all table rows would be something like:
for tr in sel.xpath("//tr"):
item = SteamItem()
item ['title'] = tr.xpath('td[2]/a/text()').extract()
item ['price'] = tr.xpath('td[@class="price-final"]/text()').extract()
yield item
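As an aside, if you really do need non-ASCII characters in the source (a `£` in a string literal, say), adding an encoding declaration as the first line of the .py file silences that particular SyntaxError - although the smart quotes above would still have to be replaced:
# -*- coding: utf-8 -*-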
|
How to set up Python packages
Question: I want this structure:
Zimp/controller/game_play
Zimp/model/game_play
How do I import the Zimp/model/game_play module from Zimp/controller/game_play in the easiest way?
I made a folder called controller and a folder called model. Within those
folders I put an empty `__init__.py` file (I don't know why that would do
anything). At first I didn't make a model.py file or a controller.py file, and it didn't
work. Then I made a model.py and a controller.py that are empty except for the
main block that automatically appears when creating a new module. No
difference.
In controller/game_play.py I tried: `from ..model import game_play_model`
It says value error: attempted relative import in non-package
Is the idea not to actually put them in separate directories? What is the
norm?
Thanks
Answer: The problem is you're trying to execute a subpackage module directly, see
answers to the question [_Attempted relative import in non-package even with
__init__.py_](http://stackoverflow.com/questions/11536764/attempted-relative-
import-in-non-package-even-with-init-py).
First I think you need to set up your directory file structure like this:
Zimp/ top-level package
__init__.py package initalization
controller/ subpackage
__init__.py subpackage initalization
game_play.py subpackage module
model/ subpackage
__init__.py subpackage initalization
game_play_model.py subpackage module
The `__init__.py` files can all be empty as they just indicate that the
directory is a [sub]package.
For illustrative purposes let's say the `game_play_model.py` file contained:
print 'hello from game_play_model.py'
and the `game_play.py` file contains the following to detect when it is being
executed directly and to add the name of the parent of its folder -- `Zimp` --
to the Python search path, thus allowing you to directly import other
things from the package when it's run that way.
if __name__ == '__main__' and __package__ is None:
import sys, os.path as path
# append parent of the directory the current file is in
sys.path.append(path.dirname(path.dirname(__file__)))
import model.game_play_model
print 'hello from game_play.py'
And if you executed it directly with something like `python game_play.py`, it would
output:
hello from game_play_model.py
hello from game_play.py
|
Generating all permutations excluding cyclic rotations in Python
Question: I need to create a (practice) program for currency arbitrage that detects
profitable "loops" given a series of exchange rates. So there might be
different values for USD->JPY, JPY->USD, USD->EUR, and so on. In order to
detect profitability, however, I first need to enumerate all possible loops --
USD->JPY->EUR->USD is one example, but USD->EUR->JPY->USD is a distinct
example using the same currencies since it may hit different exchange rates.
If I ignore the last part of the loop, which will always be the same as the
origin, it seems to be the case that every currency can only exist at most
once in the "best" loop, as if a currency exists more than once it would
actually be two different loops (at least one of which would still be
profitable).
Similarly, I can ignore loops that are just translations of already tested
loops: USD->JPY->ASD is the same as JPY->ASD->USD.
So, given input like _[USD,JPY,EUR,ASD]_ I need something that would return:
(USD,JPY,EUR,ASD)
(USD,JPY,ASD,EUR)
(USD,EUR,ASD,JPY)
(USD,EUR,JPY,ASD)
(USD,ASD,EUR,JPY)
(USD,ASD,JPY,EUR)
Answer: This solution uses the yield from syntax introduced in Python 3.3. Like the
built-in itertools.permutations(), this:
* Is a generator and does not require storing anything
* Will yield an empty tuple if passed length 0
* Assumes every item in the permuted object is itself unique
from itertools import permutations
def unique_cyclic_permutations(thing, length):
if length == 0:
yield (); return
for x in permutations(thing[1:], length - 1):
yield (thing[0],) + x
if length < len(thing):
yield from unique_cyclic_permutations(thing[1:], length)
The algorithm works by choosing a pivot, fixing it at the beginning, and then
permuting the rest of the objects. In the case of a non-full length
permutation, there will also be some permutations that don't include the pivot
object at all. In this case, the generator recursively calls itself while
excluding the original pivot.
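For the currency example in the question, a full-length call gives exactly the six loops listed there (the order follows itertools.permutations, so it differs slightly from the question's listing):
>>> from pprint import pprint
>>> pprint(list(unique_cyclic_permutations(['USD', 'JPY', 'EUR', 'ASD'], 4)))
[('USD', 'JPY', 'EUR', 'ASD'),
 ('USD', 'JPY', 'ASD', 'EUR'),
 ('USD', 'EUR', 'JPY', 'ASD'),
 ('USD', 'EUR', 'ASD', 'JPY'),
 ('USD', 'ASD', 'JPY', 'EUR'),
 ('USD', 'ASD', 'EUR', 'JPY')]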
|
How can I find the MAC address of a client on the same network, using Python-Flask?
Question: I am looking for a Python way to grab the client's MAC address. All requests are
over the same network. I am looking for something similar to performing `arp -n
<Client_IP>` on the router.
Answer: I'm not sure, but you can always get the client's IP address from the request object
as `request.remote_addr` (after `from flask import request`),
and then you can pass this IP to this function
import netifaces as nif
def mac_for_ip(ip):
'Returns a list of MACs for interfaces that have given IP, returns None if not found'
for i in nif.interfaces():
addrs = nif.ifaddresses(i)
try:
if_mac = addrs[nif.AF_LINK][0]['addr']
if_ip = addrs[nif.AF_INET][0]['addr']
except (IndexError, KeyError): # ignore ifaces that don't have a MAC or IP
if_mac = if_ip = None
if if_ip == ip:
return if_mac
return None
from [here](http://stackoverflow.com/questions/159137/getting-mac-address).
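Note that the function above returns the MAC of one of the server's own interfaces. If what you actually want is the router-style `arp -n <Client_IP>` lookup (the MAC of a remote client on the same LAN), a Linux-only sketch is to read the kernel's ARP cache; this assumes the server has recently exchanged packets with that client:
def arp_mac_for_ip(ip):
    # Linux only: columns in /proc/net/arp are
    # IP address, HW type, Flags, HW address, Mask, Device
    with open('/proc/net/arp') as arp_table:
        next(arp_table)  # skip the header row
        for line in arp_table:
            fields = line.split()
            if fields and fields[0] == ip and fields[3] != '00:00:00:00:00:00':
                return fields[3]
    return None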
|
add text in bins of histogram in wxpython
Question: How do I write text inside the bins (bars) of a histogram in wxPython?
import csv
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.figure import Figure
data1 = np.random.normal(5.0,3.0,1000)
with open('test.csv', 'wb') as f:
writer = csv.writer(f)
plt.hist(data1)
plt.show()
Answer: You can't write text inside the bins directly. This is because the bins are
simple Patch objects.
However, don't despair! There is a path you can take to achieve this:
* iterate over bins, get their boundaries
* create a text label
* place it in the correct boundaries
To obtain the bin boundaries see my answer here: [How to color bars who make
up 50% of the data?](http://stackoverflow.com/questions/21227883/how-to-color-
bars-who-make-up-50-of-the-data)
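A short sketch of that path with plain matplotlib (the bin counts are written into the middle of each bar; in a wxPython app you would draw on the figure embedded in your canvas instead of calling plt.show()):
import matplotlib.pyplot as plt
import numpy as np

data1 = np.random.normal(5.0, 3.0, 1000)
counts, bin_edges, patches = plt.hist(data1)

for count, patch in zip(counts, patches):
    x = patch.get_x() + patch.get_width() / 2.0   # horizontal centre of the bar
    y = patch.get_height() / 2.0                  # halfway up the bar
    plt.text(x, y, str(int(count)), ha='center', va='center')

plt.show()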
|
SocketServer ThreadingMixIn purpose of server_thread
Question: In the example of an asynchronous (threading) SocketServer
<http://docs.python.org/2/library/socketserver.html> a server thread (called
server_thread) is started, to start new threads for each request. Due to some
problems catching KeyboardInterrupts, I started looking for similar code and
found that there's no apparent difference when NOT using a server thread, but
ctrl-c actually works.
Even though my code works I'd very much like to know
1) Why doesn't a simple 'try' to catch KeyboardInterrupt work when using the
server_thread?
2) What purpose does the server_thread from the example serve, as opposed to my
somewhat simpler example?
From the python SocketServer example, catching keyboardinterrupt in try does
not work:
if __name__ == "__main__":
server = ThreadedTCPServer(serverAddr, SomeCode)
<snip>
# Start a thread with the server -- that thread will then start one
# more thread for each request
server_thread = threading.Thread(target=server.serve_forever)
server_thread.start()
My simpler example, ctrl-c works.
if __name__ == "__main__":
server = ThreadedTCPServer(serverAddr, SomeCode)
try:
server.serve_forever()
print "ctrl-c to exit"
except KeyboardInterrupt:
print "interrupt received, exiting"
server.shutdown()
Answer: 1) That's a general problem. When you do CTRL+C then a signal is sent to the
process. In the process the main thread catches the signal and (if not handled
properly) the main thread gets interrupted. But that signal does not kill
other threads. And Python won't quit as long as there are non-daemon threads
running (because that would not be safe). If you know what you are doing you
can add this:
server_thread = threading.Thread(target=server.serve_forever)
server_thread.daemon = True
server_thread.start()
and it should work now (assuming you do something after
`server_thread.start()` like waiting - otherwise Python will just quit, it
won't wait for daemon threads). However remember that you may kill your server
during some important operation. In order to avoid that you should implement
some sort of graceful kill:
import signal
if __name__ == "__main__":
server = ThreadedTCPServer(serverAddr, SomeCode)
# some code
server_thread = threading.Thread(target=server.serve_forever)
server_thread.start()
# some code
try:
signal.pause() # wait for a signal, perhaps in a loop?
except:
server.shutdown() # graceful quit
2) It just starts a server in a separate thread. Perhaps the idea was that you
can do other operations in the meantime? If you only want to run the server
there is no need to do that.
Also the reason might be the one I've stated above: graceful quit. If you just
interrupt the server it will die, perhaps during some important operation.
|
python mock: @wraps(f) problems
Question: I want to test a simple decorator I wrote:
It looks like this:
#utilities.py
from functools import wraps  # needed for the @wraps decorator below

import other_module
def decor(f):
@wraps(f)
def wrapper(*args, **kwds):
other_module.startdoingsomething()
try:
return f(*args, **kwds)
finally:
other_module.enddoingsomething()
return wrapper
Then I test it using python-mock:
#test_utilities.py
def test_decor(self):
mock_func = Mock()
decorated_func = self.utilities.decor(mock_func)
decorated_func(1,2,3)
self.assertTrue(self.other_module.startdoingsomething.called)
self.assertTrue(self.other_module.enddoingsomething.called)
mock_func.assert_called_with(1,2,3)
But it kicks back with:
Traceback (most recent call last):
File "test_utilities.py", line 25, in test_decor
decorated_func = Mock(wraps=self.utilities.decor(mock_func))
File "utilities.py", line 35, in decor
@wraps(f)
File "/usr/lib/python2.7/functools.py", line 33, in update_wrapper
setattr(wrapper, attr, getattr(wrapped, attr))
File "/usr/local/lib/python2.7/dist-packages/mock.py", line 660, in __getattr__
raise AttributeError(name)
AttributeError: __name__
I know functools.wraps() is just a helper wrapper. So if I take it out the
test works.
Can I get Mock to play nice with functools.wraps()?
Python 2.7.3
Answer: Just give your mock that attribute:
mock_func.__name__ = 'foo'
That's it really.
Demo:
>>> from functools import wraps
>>> from mock import Mock
>>> def decor(f):
... @wraps(f)
... def wrapper(*args, **kwds):
... return f(*args, **kwds)
... return wrapper
...
>>> mock_func = Mock()
>>> decor(mock_func)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in decor
File ".../opt/lib/python2.7/functools.py", line 33, in update_wrapper
setattr(wrapper, attr, getattr(wrapped, attr))
File ".../lib/python2.7/site-packages/mock.py", line 660, in __getattr__
raise AttributeError(name)
AttributeError: __name__
>>> mock_func.__name__ = 'foo'
>>> decor(mock_func)
<function foo at 0x10c4321b8>
Setting `__name__` is perfectly fine; the `@wraps` decorator simply copies
over the `__name__` attribute to the wrapper, and on function objects that
attribute is normally set to a string value. It's a writable attribute on
functions, in any case, and as long as you use strings `function.__name__` can
be set to any value.
|
Python code does not work, while counter does not work
Question: I am currently working on a project for class and we have come up with a
problem. When we run the code, it usually crashes. I am guessing it runs
infinitely. This program is also using Tkinter. Here is the code:
import tkinter as tk
from tkinter import *
# import the random module
import random
# set control variables
guess = 0
counter = 0
number=random.randint(0,100)
themain=number
def myGuess():
guess = float(enter1.get())
counter= 10
while guess != number:
counter = counter - 1
if guess == number:
result = "Congratulations!"
howmany= "You have ", counter, "Tries to spare"
label2.config(text=result)
label3.config(text=howmany)
elif guess > number:
result = "To high"
howmany= "You have ", counter, "Tries left"
label2.config(text=result)
label3.config(text=howmany)
elif guess < number:
result = "To low"
howmany= "You have ", counter, "Tries left"
label2.config(text=result)
label3.config(text=howmany)
root=tk.Tk()
root.title("Guessing Game")
#-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
label1 = tk.Label(root, text=' Enter value:')
enter1 = tk.Entry(root, bg='red')
btn1 = tk.Button(root, text=' Enter Number', command=myGuess)
label2 = tk.Label(root, text='')
label3 = tk.Label(root, text='')
#-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
label1.grid(row=0, column=0)
enter1.grid(row=0, column=1, padx=5, pady=5)
btn1.grid(row=2, column=0, pady=5)
label2.grid(row=2, column=1)
label3.grid(row=3, column=0)
I hope you guys can help!
Answer: It seems that you never change the value of `guess` inside the while loop, so the
loop spins forever and the GUI freezes. For the crash we would need more
information, like the traceback.
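A sketch of one way around it: let the button callback do a single comparison per click and keep the remaining tries in a module-level counter, so no while loop (and no frozen GUI) is needed:
tries_left = 10

def myGuess():
    global tries_left
    guess = float(enter1.get())
    tries_left -= 1
    if guess == number:
        label2.config(text="Congratulations!")
        label3.config(text="You have %d tries to spare" % tries_left)
    elif tries_left == 0:
        label2.config(text="Out of tries! The number was %d" % number)
    elif guess > number:
        label2.config(text="Too high")
        label3.config(text="You have %d tries left" % tries_left)
    else:
        label2.config(text="Too low")
        label3.config(text="You have %d tries left" % tries_left)
Tkinter's mainloop already re-runs the callback on every click, which is what the while loop was (incorrectly) trying to simulate.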
|
Encode MIME multipart with binary data in Python?
Question: How can I construct a MIME multipart message in Python? I've tried the `email`
package of Python but it appears broken -- it doesn't properly do binary
sections (sets their `Content-Transfer-Encoding` to base64 and leaves the data
as binary). Note it is very important the actual data is binary encoded, and
not base64, for my application. I need to minimize the size.
This is the code I tried.
from email import message, mime, generator, encoders
from email.mime import multipart, text, image
from cStringIO import StringIO
import os
m = multipart.MIMEMultipart( "related" )
part = text.MIMEText( "text", "plain" )
part.set_payload( "hello" )
part.add_header( 'Content-Disposition', 'asset', name='abc' )
m.attach( part )
part = image.MIMEImage( "image", "x-other" )
part.set_payload( os.urandom(200) )
m.attach( part )
fp = StringIO()
g = generator.Generator( fp, mangle_from_ = False, maxheaderlen = 1000 )
g.flatten(m)
print( fp.getvalue() )
Answer: You can force whichever `Content-Transfer-Encoding` you want like this:
part = image.MIMEImage( "image", "x-other", encoders.encode_noop )
part.set_payload( os.urandom(200) )
part.add_header( 'Content-Transfer-Encoding', 'binary' )
m.attach( part )
|
Python @properties raising an error
Question: I am trying to write a class to pass the following unittest:
import unittest
from property_address import *
class TestAddresses(unittest.TestCase):
def setUp(self):
self.home = Address( name='Steve Holden', street_address='1972 Flying Circus', city='Arlington', state='VA', zip_code='12345' )
def test_name(self):
self.assertEqual(self.home.name, 'Steve Holden')
self.assertRaises(AttributeError, setattr, self.home, 'name', 'Daniel Greenfeld')
def test_state(self):
self.assertEqual(self.home.state, 'VA')
self.assertRaises(StateError, setattr, self.home, 'state', 'Not a state')
self.home.state = 'CO'
self.assertEqual(self.home.state, 'CO')
The part I am having issues with is the self.assertRaises(StateError, setattr,
self.home, 'state', 'Not a state') line.
I can't seem to figure out how to get a StateError to be raised.
The code I am using is:
class Address(object):
states = ['IA', 'KS', 'UT', 'VA', 'NC', 'NE', 'SD', 'AL', 'ID', 'FM', 'DE', 'AK', 'CT', 'PR', 'NM', 'MS', 'PW', 'CO', 'NJ', 'FL', 'MN',
'VI', 'NV', 'AZ', 'WI', 'ND', 'PA', 'OK', 'KY', 'RI', 'NH', 'MO', 'ME', 'VT', 'GA', 'GU', 'AS', 'NY', 'CA', 'HI', 'IL', 'TN',
'MA', 'OH', 'MD', 'MI', 'WY', 'WA', 'OR', 'MH', 'SC', 'IN', 'LA', 'MP', 'DC', 'MT', 'AR', 'WV', 'TX']
def __init__(self,name, street_address, city, state, zip_code):
self._name = name
self._street_address = street_address
self._city = city
self._state = state
self._zip_code = zip_code
@property
def name(self):
return self._name.title()
@property
def state(self):
return self._state
@state.setter
def state(self,value):
if value in self.states:
self._state = value
else:
raise ### This is where I am stuck
Do I need to create a new @property for StateError, or should I work it into
the state setter somehow?
Answer: You need to raise a `StateError` exception; that is all:
@state.setter
def state(self,value):
if value not in self.states:
raise StateError(value)
self._state = value
This does require you to have defined the exception class first, of course:
class StateError(Exception):
"""Invalid state value used"""
Demo:
>>> class StateError(Exception): pass
...
>>> class Address(object):
... states = ['IA', 'KS', 'UT', 'VA', 'NC', 'NE', 'SD', 'AL', 'ID', 'FM', 'DE', 'AK', 'CT', 'PR', 'NM', 'MS', 'PW', 'CO', 'NJ', 'FL', 'MN',
... 'VI', 'NV', 'AZ', 'WI', 'ND', 'PA', 'OK', 'KY', 'RI', 'NH', 'MO', 'ME', 'VT', 'GA', 'GU', 'AS', 'NY', 'CA', 'HI', 'IL', 'TN',
...              'MA', 'OH', 'MD', 'MI', 'WY', 'WA', 'OR', 'MH', 'SC', 'IN', 'LA', 'MP', 'DC', 'MT', 'AR', 'WV', 'TX']
...     @property
... def state(self):
... return self._state
... @state.setter
... def state(self,value):
... if value not in self.states:
... raise StateError(value)
... self._state = value
...
>>> a = Address()
>>> a.state = 'VA'
>>> a.state = 'Nonesuch'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 11, in state
__main__.StateError: Nonesuch
|
Debugging Heapsort in Python
Question: I know how to program in Java, but I am very new to Python. I am trying to
implement HeapSort in Python, but I am unable to figure out where this code went
wrong. Can anyone help?
This is my implementation:
class HeapSort:
def sort(self,list):
self.p = list
self.N = len(list)
for k in range(N/2,1,-1):
sink(k,N)
while N> 1:
exch(1,N)
N -=1
sink(1,N)
# Helper Functions to restore the heap invariant
def sink(k,N):
while 2*k <= N:
j = 2*k
if (j<N & (j < j+1)):
j += 1
if (j < k):
break
exch(k,j)
k = j
def exch(i,j):
p[i],p[j] = p[j],p[i]
# Helper Functions to debug
def isSorted(list):
for k in range(1,len(list)):
return False
return True
L = [6, 4, 2, 8, 1, 9, 3, 0, 12, 5]
print(L)
h = HeapSort()
h.sort(L)
print(L)
The output I am getting is
[6, 4, 2, 8, 1, 9, 3, 0, 12, 5]
NameError: "name 'HeapSort' is not defined"
module body in heapsort.py at line 26
class HeapSort:
function HeapSort in heapsort.py at line 64
h = HeapSort()
Answer: If your indentation is accurate, it is causing you to try to call HeapSort inside
your definition of HeapSort.
Instead you would want
class HeapSort():
...
def main():
L = [6, 4, 2, 8, 1, 9, 3, 0, 12, 5]
print(L)
h = HeapSort()
h.sort(L)
print(L)
if __name__ == '__main__':
main()
This allows you to run the file in which you define your class, and if other
files import the class, it will not run the code.
**EDIT**
The above will fix the error that you are getting, but as @Matt's
comments point out, if you are running tests _within_ your `heapsort.py` file, you
should move these to an external `heapsort-tests.py` file and import your
class with
from heapsort import HeapSort
**EDIT 2**
If you are treating this as a class you want to use and pass around as an
instantiated object, then you need to pass `self` to all of your methods and
call them with `self.method_name()`, i.e. `self.sink(x,y)`. If not, you would
sort by calling something like `HeapSort.sort(L)` instead of creating `h`.
|
Trouble parsing comments with praw
Question: I'm trying to scan a particular subreddit to see how many times a phrase
appears in the comments of the top submissions.
I haven't been able to get any indication that it is actually reading the
comments, as it won't print the body of any comment at all. _Note:_ `sr` =
subreddit, `phrase` = phrase that's being looked for.
I'm still new to praw and python (only picked it up in the last hour) but I've
had a fair amount of experience in c.
Any help would be appreciated.
submissions = r.get_subreddit(sr).get_top(limit=1)
for submission in submissions:
comments = praw.helpers.flatten_tree(submission.replace_more_comments(limit=None, threshold=0))
for comment in comments:
print(comment.body.lower())
if comment.id not in already_done:
if phrase in comment.body.lower():
phrase_counter = phrase_counter + 1
Answer: The `Submission.replace_more_comments` return a list of the `MoreComment`
objects that were _NOT_ replaced. So if you're calling it with `limit=None`
and `threshold=0` then it will return an empty list. See the
[`replace_more_comments`](https://praw.readthedocs.org/en/latest/pages/code_overview.html#praw.objects.Submission.replace_more_comments)
docstring. Here's a full example of how to use both `replace_more_comments`
and `flatten_tree`. For more information see the [comment
parsing](https://praw.readthedocs.org/en/latest/pages/comment_parsing.html)
page in our documentation.
import praw
r = praw.Reddit(UNIQUE_AND_DESCRIPTIVE_USERAGENT_CONTAINING_YOUR_REDDIT_USERNAME)
subreddit = r.get_subreddit('python')
submissions = subreddit.get_top(limit=1)
for submission in submissions:
submission.replace_more_comments(limit=None, threshold=0)
flat_comments = praw.helpers.flatten_tree(submission.comments)
for comment in flat_comments:
print(comment.body)
|
Split string into different labels
Question: I'm trying to split a string into words and then put each word on a
different label. I found here some code that can split and print each word:
my_phrase="The split method returns a list of the words in the string"
my_split_words = my_phrase.split()
for each_word in my_split_words:
print each_word
But how do I make the for loop generate labels instead of printing them?
I'm using Python 2.7 with Kivy for the GUI. Thanks in advance!
Sorry if my formatting is wrong, first post here :)
**Edit 1:**
My code looks like this right now:
from kivy.app import App
from kivy.uix.scatter import Scatter
from kivy.uix.label import Label
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.boxlayout import BoxLayout
class TestApp(App):
def build(self):
b = BoxLayout(orientation='vertical')
f = FloatLayout()
s = Scatter()
l = [Label(text=word) for word in "The split method returns a list of the words in the string".split()]
f.add_widget(s)
s.add_widget(l)
b.add_widget(f)
return b
if __name__ == "__main__":
TestApp().run()
After @Hugh Bothwell's answer I tried to replace the old single label with the
multiple labels generated by the split, but it didn't work :T
**Edit2:** Now my code is working fine, thanks everyone. It takes the input
from the user, then splits the string into scatter labels. It is a little
messy, but it will do the job!
class TestApp(App):
def build(self):
ti = TextInput(font_size=30,
size_hint_y=None,
height=50,
text='default')
b = BoxLayout(orientation='vertical')
f = FloatLayout()
def SplitIntoLabels(*args):
f.clear_widgets()
for word in new_list:
s = Scatter(size_hint_x=None, size_hint_y=None, do_rotation=False)
l = Label(text=word, font_size=50)
s.add_widget(l)
f.add_widget(s)
s.size=l.size
ti.bind(text=SplitIntoLabels)
b.add_widget(ti)
b.add_widget(f)
return b
if __name__ == "__main__":
TestApp().run()
Answer:
from kivy.uix.label import Label
my_phrase = "The split method returns a list of the words in the string"
labels = [Label(text=word) for word in my_phrase.split()]
**Edit:**
for lab in labels:
s.add_widget(lab)
or, more directly,
for word in my_phrase.split():
s.add_widget(Label(text=word))
|
Regex - how to capture many words
Question: I have a simple regex question:
Given a string like `"test-class"` what regex should I use to get
`['test','class']` (in python context)
Answer: You don't need a regex; just use
[`str.split()`](http://docs.python.org/2/library/stdtypes.html#str.split):
>>> 'test-class'.split('-')
['test', 'class']
A regex solution is still to split:
>>> import re
>>> re.split(r'-', 'test-class')
['test', 'class']
|
join() threads without holding the main thread in python
Question: I have code that spawns threads in a loop, something like this:
def SubmitData(data):
# creating the relevant command to execute
command = CreateCommand(data)
subprocess.call(command)
def Main():
while(True):
# generating some data
data = GetData()
MyThread = threading.Thread(target=SubmitData,args=(data,))
MyThread.start()
Obviously, I don't use `join()` on the **threads**. My question is how to
`join()` those **threads** without making the main thread wait for them? Do I
even need to `join()` them? What will happen if I don't `join()` them?
Some important points:
* the _while_ loop is supposed to run for a very long time (a couple of days)
* the _command_ itself is not very long (a few seconds)
I'm using threading for performance, so if someone has a better idea instead,
I would like to try it out.
Answer: `Popen()` doesn't block. Unless `CreateCommand()` blocks, you could call
`SubmitData()` in the main thread:
from subprocess import Popen
processes = []
while True:
processes = [p for p in processes if p.poll() is None] # leave only running
processes.append(Popen(CreateCommand(GetData()))) # start a new one
* * *
> Do I even need to join() them? what will happend if I won't join() them?
No. You don't need to join them. All non-daemonic threads are joined
automatically when the main thread exits.
|
why are '(single quote) or "(double quote) not allowed in subprocess.check_output() in python?
Question: I am using subprocess.check_output() method to execute commands from within
the python script.
There are some commands that need "(double quotes) to be present in syntax.
Here's one example:
>drozer console connect -c "run app.package.info -a com.package.name"
It throws error if we remove "(double quotes) from above command.
I did the following :
string = '\"run app.package.info -a com.package.name\"'
command = ['/usr/bin/drozer','console','connect','-c',string]
output = subprocess.check_output(command)
This yields me to error:
*** Unknown syntax: "run app.package.info -a com.package.name"
Please Note : commands without quotes are running through
subprocess.check_output, so code works properly.
How can I solve this issue of quotes? Any help would be highly appreciated.
Thanks
Answer: You don’t need the double quotes.
The reason you need them with the shell command is that the shell is parsing
the command line from a string, and it uses them as an indication that the
text `run app.package.info -a com.package.name` should be placed in a _single_
argument:
#!/usr/bin/env python
import subprocess
c_arg = 'run app.package.info -a com.package.name'
command = ['/usr/bin/drozer', 'console', 'connect', '-c', c_arg]
output = subprocess.check_output(command)
print("Got %r" % (output,))
When you’re using code to start a process, you explicitly specify individual
arguments, and no shell parsing is going on, so there’s no need for the extra
quotes.
|
Use python mechanize to log into pages with NTLM authentication
Question: I want to use mechanize to log into a page and retrieve some information. But
however I try to authenticate, it just fails with error code **HTTP 401**, as
you can see below:
r = br.open('http://intra')
File "bui...e\_mechanize.py", line 203, in open
File "bui...g\mechanize\_mechanize.py", line 255,
in _mech_openmechanize._response.httperror_seek_wrapper: HTTP Error 401: Unauthorized
This is my code so far:
import mechanize
import cookielib
# Browser
br = mechanize.Browser()
# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
# br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
# Follows refresh 0 but not hangs on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
# If the protected site didn't receive the authentication data you would
# end up with a 410 error in your face
br.add_password('http://intra', 'myusername', 'mypassword')
# User-Agent (this is cheating, ok?)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
# Open some site, let's pick a random one, the first that pops in mind:
# r = br.open('http://google.com')
r = br.open('http://intra')
html = r.read()
# Show the source
print html
What am I doing wrong? When visiting `http://intra` (an internal page) with e.g.
Chrome, it pops open a window, asks for username/password once, and then
all is good.
The dialogue which pops open is the standard browser username/password prompt (screenshot omitted).
Answer: After tons of research I managed to find out the reason behind this.
First of all, the site uses so-called [NTLM
authentication](http://hc.apache.org/httpclient-
legacy/authentication.html#Authentication_Schemes), which is not supported by
mechanize out of the box. This command can help you find out the authentication mechanism of a site:
wget -O /dev/null -S http://www.the-site.com/
So the code was modified a little bit:
import sys
import urllib2
import mechanize
from ntlm import HTTPNtlmAuthHandler
print("LOGIN...")
user = sys.argv[1]
password = sys.argv[2]
url = sys.argv[3]
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
passman.add_password(None, url, user, password)
# create the NTLM authentication handler
auth_NTLM = HTTPNtlmAuthHandler.HTTPNtlmAuthHandler(passman)
browser = mechanize.Browser()
handlersToKeep = []
for handler in browser.handlers:
if not isinstance(handler,
(mechanize._http.HTTPRobotRulesProcessor)):
handlersToKeep.append(handler)
browser.handlers = handlersToKeep
browser.add_handler(auth_NTLM)
response = browser.open(url)
response = browser.open("http://www.the-site.com")
print(response.read())
and finally mechanize needs to be patched, as mentioned
[here](http://stackoverflow.com/questions/13649964/python-mechanize-with-ntlm-
getting-attributeerror-httpresponse-instance-has-no):
--- _response.py.old 2013-02-06 11:14:33.208385467 +0100
+++ _response.py 2013-02-06 11:21:41.884081708 +0100
@@ -350,8 +350,13 @@
self.fileno = self.fp.fileno
else:
self.fileno = lambda: None
- self.__iter__ = self.fp.__iter__
- self.next = self.fp.next
+
+ if hasattr(self.fp, "__iter__"):
+ self.__iter__ = self.fp.__iter__
+ self.next = self.fp.next
+ else:
+ self.__iter__ = lambda self: self
+ self.next = lambda self: self.fp.readline()
def __repr__(self):
return '<%s at %s whose fp = %r>' % (
|
Open shell in Python
Question: How can I open a Unix shell from Python, type a command and some other inputs,
and then close the shell?
Example commands and inputs:
telnet 127.0.0.1:6000
user
pass
save-all
restart
Greets
miny
Answer: You can have a look at the `pexpect` module and more precisely the interact
function.
See documentation [here](http://pexpect.sourceforge.net/pexpect.html#spawn-
interact).
Basically, you just spawn your shell, program or whatever you want, and
interact with it like you would normally do.
import pexpect
p = pexpect.spawn('/bin/bash')
p.interact()
Then you escape with an escape character as explained in the doc.
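For the telnet session in the question, a sketch along those lines; the 'login:' and 'Password:' prompts are assumptions, so adjust them to whatever your server actually prints (note also that telnet takes the port separated by a space, not a colon):
import pexpect

child = pexpect.spawn('telnet 127.0.0.1 6000')
child.expect('login:')       # assumed prompt
child.sendline('user')
child.expect('Password:')    # assumed prompt
child.sendline('pass')
child.sendline('save-all')
child.sendline('restart')
child.close()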
|
Rewriting Java BigInteger function in python
Question: I'm trying to rewrite some Java code in a Python script. One part of that is to
derive a simple number from a SHA-256 hash.
In Java the function looks like this:
public static Long getId(byte[] publicKey) {
byte[] publicKeyHash = Crypto.sha256().digest(publicKey);
BigInteger bigInteger = new BigInteger(1, new byte[] {publicKeyHash[7], publicKeyHash[6], publicKeyHash[5],
publicKeyHash[4], publicKeyHash[3], publicKeyHash[2], publicKeyHash[1], publicKeyHash[0]});
return bigInteger.longValue();
}
The publicKey is binary so I can't print it here, but the publicKeyHash I use
for testing is:
d9d5c57971eefb085e3abaf7a5a4a6cdb8185f30105583cdb09ad8f61886ec65
To my understanding, the third line of this Java code converts d9d5c579... into a
number. The number that belongs to the hash above is 4273301882745002507.
Now I'm looking for a piece / line of Python code to generate that same number
from that hash.
def getId(publicKey):
publicKeyHash = binascii.hexlify(publicKey)
p = publicKeyHash
return(struct.unpack("Q",struct.pack("cccccccc",p[7],p[6],p[5],p[4],p[3],p[2],p[1],p[0]))[0])
That was a first attempt; however, this clearly doesn't work: it does return a number,
but not the correct one. Is there anyone here familiar with both languages and
able to help me translate this function?
Answer: How about (untested):
import hashlib
publicKeyHash = hashlib.sha256(publicKey).digest()
bigInt = 0
# take the first 8 bytes in reverse order, as the Java code does
for byte in reversed(bytearray(publicKeyHash[:8])):
    bigInt <<= 8
    bigInt |= byte
This worked:
from hashlib import sha256
import json
import struct
import binascii
def getId(publicKey):
publicKeyHash = sha256(publicKey)
p = publicKeyHash.digest()
b = bytearray(p[:8])
b.reverse()
bigInt = 0
for byte in b:
bigInt <<= 8
bigInt |= byte
#print bigInt
return(bigInt)
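Since the Java code just takes the first eight bytes of the digest in reverse order, the loop can also be collapsed into a single `struct` call that reads those bytes as a little-endian unsigned 64-bit integer (equivalent to the version above; Java's `longValue()` would merely show values at or above 2**63 as negative):
import struct
from hashlib import sha256

def getId(publicKey):
    digest = sha256(publicKey).digest()
    return struct.unpack('<Q', digest[:8])[0]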
|
How do I optimize tens of thousands of substring searches in instance attributes in Python?
Question: I'm trying to write a program which autocompletes user input which may be one
of the following: an airport's three letter IATA code, a city's name, a city's
name in one of given languages, an airport's name, a country's name, a state's
name.
The airport data is all located in instances of an `Airport` class, which has
the `.match()` method, determining if any of the relevant attributes starts
with the user's input.
Here's all relevant code:
class Location(object):
def __init__(self, code, location_type):
self.code = code # Country/city/state/airport codes. Format varies.
self.type = location_type
self.name = self.get_name() # String containing name of location.
if self.type == 'city':
self.localizations = self.get_localizations()
# The above is a dictionary, keys are locales (ex. 'fr-FR'), values
# are the translated city names in the specified locale.
def match(self, pattern, match_code=False, locales=[]):
if match_code: # Fires if we only match for the 3 letter IATA code
return pattern.match(self.code)
if not self.name: # Some instances don't have names
return None
if locales and self.localizations: # Fires if there's languages given
for locale in locales:
match = pattern.match(self.localizations.get(locale, ''))
if match:
return locale
return None
return pattern.match(self.name)
class Airport(Location):
def __init__(self, airport, city=None, state=None, country=None):
self.code = airport
self.type = 'airport'
self.name = self.get_name()
self.city = Location(city, 'city')
self.state = Location(state, 'state')
self.country = Location(country, 'country')
matches = []
pattern = re.compile('^' + keyword, re.I) # Keyword is the user's input
for airport in airports: # airports is a list of Airport instances
if airport.match(pattern, match_code=True):
matches.append(airport.create_match('airport', 100))
elif (airport.city.match(pattern)
or airport.city.match(pattern, locales=locales)):
if airport.city.match(pattern):
matches.append(airport.create_match('locality', 70))
else:
locale = airport.city.match(pattern, locales=locales)
matches.append(airport.create_match('localised_locality', 70,
locale=locale))
elif airport.match(pattern):
matches.append(airport.create_match('airport', 50))
elif airport.country.match(pattern):
matches.append(airport.create_match('country', 30))
elif airport.state.match(pattern):
matches.append(airport.create_match('state', 30))
According to my testing, the `Airport.match()` method is what takes up
practically all the time. There are currently 9,451 `Airport` instances, and a
search takes around 50ms on my PC.
My program is what creates all these instances at startup, loading them from
XML files, so I can make modifications to the source data, if necessary.
Answer: I think you’re going about this backwards. What do I mean by that? Well, it
seems to me that your list of things to match against is static (relatively),
while your user is going to be entering data one character at a time. What you
should probably do is put all of the things you might autocomplete to into a
sorted array, then every time the user types another character, find the first
item in the array that matches the prefix entered by the user.
You can optimise by remembering the last place you got to, so that e.g. if a
user types 'S', when you get the next character you start searching at the
first 'S' in your array; if they type 'SF', then you search starting at 'SF'
and so on.
### Update
Here’s an example, based on what you wrote above:
import bisect
# Construct the search array
search_array = ([(l.code.lower(), l) for l in locations]
                + [(l.name.lower(), l) for l in locations]
                + [(a.city.lower(), a) for a in airports]
                + [(a.state.lower(), a) for a in airports]
                + [(a.country.lower(), a) for a in airports])
search_array.sort()
# Now, assume the user enters 'S'; we do
new_entry = bisect.bisect_left(search_array, ('S'.lower(), None))
if new_entry < len(search_array):
found = search_array[new_entry]
if found[0].startswith('S'.lower()):
entry = new_entry
autocompletion = found[0]
# Let's say they now enter 'F'; we do
new_entry = bisect.bisect_left(search_array, ('SF'.lower(), None), entry)
if new_entry < len(search_array):
found = search_array[new_entry]
if found[0].startswith('SF'.lower()):
entry = new_entry
autocompletion = found[0]
and so on. Obviously this is just knocked together in Stack Overflow as an
example.
|
Python tkinter inserting string in textbox out of a function
Question: Trying to add text from a function into a textfield, but I can't figure out how.
Every time the Start button is clicked it should add text to the
textfield.
import Tkinter
class GuiCreate:
def __init__(self,parent):
#Textbox
window = Frame(width=620, height=50)
window.place(x=25,y=320)
vscroll = Scrollbar(window)
hscroll = Scrollbar(window, orient='horizontal')
# create instance variable with "self"
self.listbox = Text(window, height=10)
self.listbox.pack(side=LEFT, fill=X, padx=5, pady=5, expand=1)
vscroll.config(command=self.listbox.yview, relief=SUNKEN)
hscroll.config(command=self.listbox.xview, relief=SUNKEN)
self.listbox.config(yscrollcommand=vscroll.set, relief=SUNKEN)
self.listbox.config(xscrollcommand=hscroll.set)
f7 = Frame(width=30, height=20)
f7.place(x=20,y=260)
srcButton = Button(f7, text="START", command=self.startProcess)
srcButton.pack(side='left')
def startProcess(self):
textinsert = 'abcdefg'
self.listbox.insert('end', textinsert)
root = Tk()
root.title("Clipfinder")
root.geometry('650x550+200+100')
root.configure(background = 'gray')
gui=GuiCreate(root)
root.mainloop()
Getting the Error: AttributeError: GuiCreate instance has no attribute
'listbox'
How can I send the string out of a function into the textbox? THX
Answer:
def __init__(self, parent):
#Textbox
window = Frame(width=620, height=50)
window.place(x=25,y=320)
vscroll = Scrollbar(window)
hscroll = Scrollbar(window, orient='horizontal')
self.listbox = Text(window, height=10)
self.listbox.pack(side=LEFT, fill=X, padx=5, pady=5, expand=1)
vscroll.config(command=self.listbox.yview, relief=SUNKEN)
hscroll.config(command=self.listbox.xview, relief=SUNKEN)
self.listbox.config(yscrollcommand=vscroll.set, relief=SUNKEN)
self.listbox.config(xscrollcommand=hscroll.set)
f7 = Frame(width=30, height=20)
f7.place(x=20,y=260)
srcButton = Button(f7, text="START", command=self.startProcess)
srcButton.pack(side='left')
You forgot to add `listbox` as an attribute (`self.listbox`); otherwise it is just local to the
`__init__` method.
|
Limit which classes in a .py file are importable from elsewhere
Question: I have a python source file with a class defined in it, and a class from
another module imported into it. Essentially, this structure:
from parent import SuperClass
from other import ClassA
class ClassB(SuperClass):
def __init__(self): pass
What I want to do is look in this module for all the classes defined in there,
and only to find ClassB (and to overlook ClassA). Both ClassA and ClassB
extend SuperClass.
The reason for this is that I have a directory of plugins which are loaded at
runtime, and I get a full list of the plugin classes by introspecting on each
.py file and loading the classes which extend SuperClass. In this particular
case, ClassB uses the plugin ClassA to do some work for it, so is dependent
upon it (ClassA, meanwhile, is not dependent on ClassB). The problem is that
when I load the plugins from the directory, I get 2 instances of ClassA, as it
gets one from ClassA's file, and one from ClassB's file.
For packages there is the approach:
__all__ = ['module_a', 'module-b']
to explicitly list the modules that you can import, but this lives in the
`__init__.py` file, and each of the plugins is a .py file not a directory in
its own right.
The question, then, is: can I limit access to the classes in a .py file, or do
I have to make each one of them a directory with its own init file? Or, is
there some other clever way that I could distinguish between these two
classes?
Answer: You meant "for packages there is the approach...". Actually, that works for
every module (`__init__.py` **is** a module, just with special semantics). Use
`__all__` inside the plugin modules and that's it.
But remember: `__all__` only limits what you import using `from xxxx import
*`; you can still access the rest of the module, and there's no way to avoid
that using the standard Python import mechanism.
If you're using some kind of active introspection technique (eg. exploring the
namespace in the module and then importing classes from it), you could check
if the class comes from the same file as the module itself.
You could also implement your own import mechanism (using `importlib`, for
example), but that may be overkill...
Edit: for the "check if the class comes from the same module" part:
Say that I have two modules, `mod1.py`:
class A(object):
pass
and `mod2.py`:
from mod1 import A
class B(object):
pass
Now, if I do:
from mod2 import *
I've imported both `A` and `B`. But...
>>> A
<class 'mod1.A'>
>>> B
<class 'mod2.B'>
as you see, the classes carry information about where they originated. And you
can actually check it right away:
>>> A.__module__
'mod1'
>>> B.__module__
'mod2'
Using that information you can discriminate them easily.
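A minimal sketch of that discrimination step, assuming the plugin modules are already imported as module objects and `SuperClass` is your plugin base class (the names here are illustrative):
import inspect
def plugin_classes(module, base_class):
    # yield only classes defined in `module` itself, skipping imported ones
    for name, obj in inspect.getmembers(module, inspect.isclass):
        if (issubclass(obj, base_class)
                and obj is not base_class
                and obj.__module__ == module.__name__):
            yield obj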
|
strip() and strip(string.whitespace) give different results despite documentation suggesting they should be the same
Question: I have a Unicode string with some non-breaking spaces at the beginning and
end. I get different results when using `strip()` vs.
`strip(string.whitespace)`.
>>> import string
>>> s5 = u'\xa0\xa0hello\xa0\xa0'
>>> print s5.strip()
hello
>>> print s5.strip(string.whitespace)
hello
The documentation for `strip()` says, "If omitted or `None`, the `chars`
argument defaults to removing whitespace." The documentation for
`string.whitespace` says, "A string containing all characters that are
considered whitespace."
So if `string.whitespace` contains all characters that are considered
whitespace, then why are the results different? Does it have something to do
with Unicode?
I am using Python 2.7.6
Answer: From the documentation of the
[`string.whitespace`](http://docs.python.org/3.1/library/string.html#string.whitespace):
> A string containing all **ASCII characters** that are considered whitespace.
> This includes the characters space, tab, linefeed, return, formfeed, and
> vertical tab.
It's the same under Python 3, where all non-ASCII constants were removed. (In
Python 2 some constants could be influenced by `locale` settings.)
Hence the difference in behaviour is quite obvious since `strip()` _does_
remove any _unicode_ whitespace, while `strip(string.whitespace)` removes only
ASCII spaces. Your string clearly contains non-ASCII spaces.
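A quick way to see the difference for yourself (Python 2 sketch):
import string
import unicodedata
print u'\xa0' in string.whitespace     # False: NBSP is not an ASCII whitespace character
print unicodedata.category(u'\xa0')    # 'Zs', i.e. a Unicode space separator
print u'\xa0\xa0hello\xa0\xa0'.strip() # 'hello': strip() with no argument handles Unicode spaces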
|
Removing newline characters not working
Question: I'm trying to remove the `\n` from a list created in a function. My code for
removing it doesn't seem to be working. I'm not getting an error either.
**CODE**
#!/usr/bin/python
"""
Description:
Basic Domain bruteforcer
Usage:
your_script.py (-f <file>) (-d <domain>) [-t 10] [-v]
your_script.py -h | --help
Arguments:
-f --file File to read potential Sub-domains from. (Required)
-d --domain Domain to bruteforce. (Required)
Options:
-h --help Show this screen.
-p --proxy Proxy address and port. [default: http://127.0.0.1:8080] (Optional)
-t --thread Thread count. (Optional)
-v --verbose Turn debug on. (Optional)
"""
from docopt import docopt
def fread(dwords, *args):
flist = open(dwords).readlines()
#print current list
print flist
nlist = flist
for i in nlist:
i.rstrip('\n')
return nlist
if __name__ == "__main__":
arguments = docopt(__doc__, version='0.1a')
# print new list with removed \n
print fread(arguments['--file'])
Answer: Strings are not mutable, `i.rstrip('\n')` _returns_ a new string.
Use a list comprehension:
def fread(dwords):
flist = open(dwords).readlines()
return [s.rstrip('\n') for s in flist]
or, since you are reading the whole file into memory anyway,
`str.splitlines()`:
def fread(dwords):
return open(dwords).read().splitlines()
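If you also want the file closed deterministically, a small variation using a context manager (same result, just tidier resource handling):
def fread(dwords):
    with open(dwords) as f:
        return f.read().splitlines()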
|
Python function invoked before definition
Question: I am confused about the below given code in Python where a function has been
called before its definition. Is it possible? Is it because the function does
not return a value?
from Circle import Circle
def main():
myCircle = Circle()
n = 5
printAreas(myCircle, n) #The function is called here
def printAreas(c, times):
xxxx
xxxx
main()
Answer: What happens in your program:
1. `main` is defined, with a reference to `printAreas` in its body—note, this is just a reference, not a call
2. `printAreas` is defined
3. `main` is invoked
4. `main` calls `printAreas`.
So all is good—you are allowed to reference any names you want at any time you
want, as long as you ensure these names will have been defined (bound to a
value) by the time the code containing the reference is executed:
def foo():
print bar # reference to as-of-yet non-existent bar
# calling foo here would be an error
bar = 3
foo() # prints 3
|
Iterating through individual files in os.walk in Python in an idiomatic fashion
Question: I started with some code I got from [another stackoverflow
question](http://stackoverflow.com/questions/2865278/in-python-how-to-find-
all-the-files-under-a-directory-including-the-files-in-s/2865328#2865328) to
generate full paths for all the files in a directory tree:
import os
def recursive_file_gen(mydir):
for root, dirs, files in os.walk(mydir):
for file in files:
yield os.path.join(root, file)
I wanted to add memoization, and it seemed to me the easiest way to achieve
was just to make recursive_file_gen return a list instead:
def recursive_file_list(mydir):
result = []
for root, dirs, files in os.walk(mydir):
for file in files:
result.append(os.path.join(root, file))
return result
This code is fine, really. It's certainly not hard to figure out what's going
on. But putting together a list using successive `append` operations isn't
fully Pythonic. My guess is there's a better way using Python syntax, probably
via a list comprehension, and learning it would be edifying.
Of course I could write:
def recursive_file_list(mydir):
return list(recursive_file_gen(mydir))
and memoize that. But I'm wondering if there's anything cleaner.
Answer: How about?
result = [os.path.join(root, file) for root, dirs, files in os.walk(mydir) for file in files]
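Since the question mentions memoization, a minimal sketch is to cache the result per directory in a module-level dict (the name `_cache` is just illustrative):
import os
_cache = {}
def recursive_file_list(mydir):
    if mydir not in _cache:
        _cache[mydir] = [os.path.join(root, name)
                         for root, dirs, files in os.walk(mydir)
                         for name in files]
    return _cache[mydir]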
|
python creating sqlite word dictionary
Question: I am trying to create a database of English words, but no values are being put
into the database.
I am reading each word from a file and using its SHA-256 hash to represent the
definition of the word, but when I execute the script the database stays the
same and is not filled with the words; the size of the database also
stays the same.
Why is the database not being filled with the values?
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sqlite3 as lite
import sys
import hashlib
word_file = open("words.txt", "r")
con = lite.connect('words.db')
cur = con.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS words(word TEXT PRIMARY KEY, definition TEXT);")
for word in word_file.read().split():
cur.execute("INSERT OR REPLACE INTO words VALUES(\"%s\", \"%s\")" % (word, hashlib.sha256(word).hexdigest()))
word_file.close()
# cur.execute("SELECT * FROM words WHERE word = 'hello'")
# print cur.fetchall()
con.close()
Answer: I believe you have to `commit` your changes on your connection (`con`); see
<http://docs.python.org/2/library/sqlite3.html>
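A sketch of the insert loop with an explicit commit, and, as a side note, with a parameterized query instead of string formatting (which avoids quoting problems):
for word in word_file.read().split():
    cur.execute("INSERT OR REPLACE INTO words VALUES (?, ?)",
                (word, hashlib.sha256(word).hexdigest()))
con.commit()  # without this the inserts are never written to words.db
con.close()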
|
Navigating a website in python, scraping, and posting
Question: There are many good resources already on stackoverflow but I'm still having an
issue. I've visited these sources:
* [how to submit query to .aspx page in python](http://stackoverflow.com/questions/1480356/how-to-submit-query-to-aspx-page-in-python)
* [Submitting a post request to an aspx page](http://stackoverflow.com/questions/6269064/submitting-a-post-request-to-an-aspx-page)
* [Scrapping aspx webpage with Python using BeautifulSoup](http://stackoverflow.com/questions/20729569/scrapping-aspx-webpage-with-python-using-beautifulsoup)
* <http://www.pythonforbeginners.com/cheatsheet/python-mechanize-cheat-sheet>
I'm attempting to visit
<http://www.latax.state.la.us/Menu_ParishTaxRolls/TaxRolls.aspx> and select a
Parish. I believe this forces a post and allows me to select a year, which
posts again, and allows for yet more selection. I've written my script a few
different ways following the above sources, but haven't been able to successfully
submit the form so that I can enter a year.
My current code
import urllib
from bs4 import BeautifulSoup
import mechanize
headers = [
('Accept','text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
('Origin', 'http://www.indiapost.gov.in'),
('User-Agent', 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17'),
('Content-Type', 'application/x-www-form-urlencoded'),
('Referer', 'http://www.latax.state.la.us/Menu_ParishTaxRolls/TaxRolls.aspx'),
('Accept-Encoding', 'gzip,deflate,sdch'),
('Accept-Language', 'en-US,en;q=0.8'),
]
br = mechanize.Browser()
br.addheaders = headers
url = 'http://www.latax.state.la.us/Menu_ParishTaxRolls/TaxRolls.aspx'
response = br.open(url)
# first HTTP request without form data
soup = BeautifulSoup(response)
# parse and retrieve two vital form values
viewstate = soup.findAll("input", {"type": "hidden", "name": "__VIEWSTATE"})
eventvalidation = soup.findAll("input", {"type": "hidden", "name": "__EVENTVALIDATION"})
formData = (
('__EVENTVALIDATION', eventvalidation[0]['value']),
('__VIEWSTATE', viewstate[0]['value']),
('__VIEWSTATEENCRYPTED',''),
)
try:
fout = open('C:\\GIS\\tmp.htm', 'w')
except:
print('Could not open output file\n')
fout.writelines(response.readlines())
fout.close()
I've also attempted this in the shell; what I entered plus what I received
(modified to cut down on the bulk) can be found at <http://pastebin.com/KAW5VtXp>.
Any way I try to change the value in the Parish dropdown list and post, I get
taken to a webmaster login page.
Am I approaching this the correct way? Any thoughts would be extremely
helpful.
Thanks!
Answer: I ended up using selenium.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Firefox()
driver.get("http://www.latax.state.la.us/Menu_ParishTaxRolls/TaxRolls.aspx")
elem = driver.find_element_by_name("ctl00$ContentPlaceHolderMain$ddParish")
elem.send_keys("TERREBONNE PARISH")
elem.send_keys(Keys.RETURN)
elem = driver.find_element_by_name("ctl00$ContentPlaceHolderMain$ddYear")
elem.send_keys("2013")
elem.send_keys(Keys.RETURN)
elem = driver.find_element_by_id("ctl00_ContentPlaceHolderMain_rbSearchField_1")
elem.click()
APN = 'APN # here'
elem = driver.find_element_by_name("ctl00$ContentPlaceHolderMain$txtSearch")
elem.send_keys(APN)
elem.send_keys(Keys.RETURN)
# Access the PDF
elem = driver.find_element_by_link_text('Generate Report')
elem.click()
elements = driver.find_elements_by_tag_name('a')
elements[1].click()
|
Disappearing Axes, LogLog Plot Python
Question: I am running a loop to plot a bunch of different lines but it makes the most
sense to plot them on a loglog plot (dealing with about 9 orders of
magnitude). They plot as they should on a loglog plot, but the axes/axis
labels disappear only when I try to log-plot them.
#THIS IS FOR A NORMAL PLOT
import matplotlib.pyplot as plt
plt.figure()
for ii in list1:
plt.plot(xarray[:], yarray[ii,:])
plt.show()
I have tried adding the following:
plt.xscale('log')
and
plt.yscale('log')
Alternatively I tried
plt.loglog(xarray[:], yarray[ii,:])
plt.semilogy(xarray[:], yarray[ii,:])
Any help would be great, I don't have much experience with plotting, but
making axes appear should be pretty simple I would think. Thanks.
[My plot without the
axes](http://i43.photobucket.com/albums/e366/sheogorath09/figure1_zps2589024c.jpg)
EDIT: I am also getting the following traceback. I just
did a clean reinstall of matplotlib and am still having the same problem (it
plots, but no axes):
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 55, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/figure.py", line 1034, in draw
func(*args)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 55, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/axes.py", line 2086, in draw
a.draw(renderer)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 55, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/axis.py", line 1093, in draw
renderer)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/axis.py", line 1042, in _get_tick_bboxes
extent = tick.label1.get_window_extent(renderer)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/text.py", line 754, in get_window_extent
bbox, info, descent = self._get_layout(self._renderer)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/text.py", line 329, in _get_layout
ismath=ismath)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_macosx.py", line 151, in get_text_width_height_descent
self.mathtext_parser.parse(s, self.dpi, prop)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/mathtext.py", line 3009, in parse
self.__class__._parser = Parser()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/mathtext.py", line 2193, in __init__
- ((lbrace + float_literal + rbrace)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'NoneType'
Answer: Turns out that this problem was the result of the same problem as seen: [In
this Problem](http://stackoverflow.com/questions/5419439/matplotlib-pyplot-on-
os-x-with-64-bit-python-from-python-org?lq=1)
I did a complete uninstall of python, reinstalled python (the 32 bit version
for Mac OSX 10.3 and higher), matplotlib, scipy, and numpy. And now the axes
are plotting correctly.
|
Validating URLs in Python
Question: I've been trying to figure out what the best way to validate a URL is
(specifically in Python) but haven't really been able to find an answer. It
seems like there isn't one known way to validate a URL, and it depends on what
URLs you think you may need to validate. As well, I found it difficult to find
an easy to read standard for URL structure. I did find the RFCs 3986 and 3987,
but they contain much more than just how it is structured.
Am I missing something, or is there no one standard way to validate a URL?
Answer: This looks like it might be a duplicate of [How do you validate a URL with a
regular expression in Python?](http://stackoverflow.com/questions/827557/how-
do-you-validate-a-url-with-a-regular-expression-in-python)
(I would make a comment, but I don't have enough reputation).
You should be able to use the urlparse library described there.
>>> from urlparse import urlparse
>>> urlparse('actually not a url')
ParseResult(scheme='', netloc='', path='actually not a url', params='', query='', fragment='')
>>> urlparse('http://google.com')
ParseResult(scheme='http', netloc='google.com', path='', params='', query='', fragment='')
Call `urlparse` on the string you want to check and then make sure that the
`ParseResult` has non-empty values for `scheme` and `netloc`.
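A minimal sketch of that check (Python 2, to match the examples above); whether this is "valid enough" depends on which URLs you need to accept:
from urlparse import urlparse
def looks_like_url(candidate):
    parts = urlparse(candidate)
    return bool(parts.scheme and parts.netloc)
print looks_like_url('http://google.com')   # True
print looks_like_url('actually not a url')  # False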
|
Trouble using mkvirtualenv after installing OS X Mavericks
Question: I recently installed OS X Mavericks. I can access my previously created
virtual environments, but I have trouble creating a new one:
Christophers-MacBook-Pro-2:~ christopherspears$ mkvirtualenv bottle_todo
-bash: /usr/local/bin/virtualenv: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
I looked into /usr/local/bin/virtualenv:
#!/usr/local/opt/python/bin/python2.7
# EASY-INSTALL-ENTRY-SCRIPT: 'virtualenv==1.10.1','console_scripts','virtualenv'
__requires__ = 'virtualenv==1.10.1'
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.exit(
load_entry_point('virtualenv==1.10.1', 'console_scripts', 'virtualenv')()
)
Sure enough, the path /usr/local/opt/python/bin/python2.7 does not exist.
Earlier on, I had this issue:
[Terminal issue with virtualenvwrapper after Mavericks
Upgrade](http://stackoverflow.com/questions/19549824/terminal-issue-with-
virtualenvwrapper-after-mavericks-upgrade/19550535#19550535)
I tried updating virtualenv to no avail:
christohersmbp2:~ christopherspears$ pip install virtualenv
Requirement already satisfied (use --upgrade to upgrade): virtualenv in /Library/Python/2.7/site-packages
Cleaning up...
christohersmbp2:~ christopherspears$ pip install --upgrade virtualenv
Requirement already up-to-date: virtualenv in /Library/Python/2.7/site-packages
Cleaning up...
christohersmbp2:~ christopherspears$ mkvirtualenv test
-bash: /usr/local/bin/virtualenv: /usr/local/opt/python/bin/python2.7: bad interpreter: No such file or directory
Answer: I fixed it. I had to uninstall and reinstall virtualenv:
christohersmbp2:bin christopherspears$ sudo pip uninstall virtualenv
Password:
Uninstalling virtualenv:
/Library/Python/2.7/site-packages/virtualenv-1.11.4.dist-info/DESCRIPTION.rst
/Library/Python/2.7/site-packages/virtualenv-1.11.4.dist-info/METADATA
/Library/Python/2.7/site-packages/virtualenv-1.11.4.dist-info/RECORD
/Library/Python/2.7/site-packages/virtualenv-1.11.4.dist-info/WHEEL
/Library/Python/2.7/site-packages/virtualenv-1.11.4.dist-info/entry_points.txt
/Library/Python/2.7/site-packages/virtualenv-1.11.4.dist-info/pydist.json
/Library/Python/2.7/site-packages/virtualenv-1.11.4.dist-info/top_level.txt
/Library/Python/2.7/site-packages/virtualenv.py
/Library/Python/2.7/site-packages/virtualenv.pyc
/Library/Python/2.7/site-packages/virtualenv_support/__init__.py
/Library/Python/2.7/site-packages/virtualenv_support/__init__.pyc
/Library/Python/2.7/site-packages/virtualenv_support/pip-1.5.4-py2.py3-none-any.whl
/Library/Python/2.7/site-packages/virtualenv_support/setuptools-2.2-py2.py3-none-any.whl
/usr/local/bin/virtualenv
/usr/local/bin/virtualenv-2.7
Proceed (y/n)? y
Successfully uninstalled virtualenv
christohersmbp2:bin christopherspears$ sudo pip install virtualenv
Downloading/unpacking virtualenv
Downloading virtualenv-1.11.4-py2.py3-none-any.whl (1.7MB): 1.7MB downloaded
Installing collected packages: virtualenv
Successfully installed virtualenv
Cleaning up...
Now everything seems to work:
christohersmbp2:bin christopherspears$ cat virtualenv
#!/usr/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from virtualenv import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(main())
christohersmbp2:bin christopherspears$ mkvirtualenv test
New python executable in test/bin/python
Installing setuptools, pip...done.
(test)christohersmbp2:bin christopherspears$ deactivate
christohersmbp2:bin christopherspears$ workon
bottle_tutorial
rango_tutorial
scrapestat
test
christohersmbp2:bin christopherspears$ rmvirtualenv test
Removing test...
|
Remote Sensors Protocol with C++
Question: I am working on a project with the Raspberry Pi and Scratch. I need to use the
Remote Sensors Protocol with C++. I have tried porting the Python code across
but I cannot get C++ to return the null values.
The original Python code looks like this:
import socket
from array import array
HOST = '192.168.1.101'
PORT = 42001
scratchSock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
scratchSock.connect((HOST, PORT))
def sendCMD(cmd):
n = len(cmd)
a = array('c')
a.append(chr((n >> 24) & 0xFF))
a.append(chr((n >> 16) & 0xFF))
a.append(chr((n >> 8) & 0xFF))
a.append(chr(n & 0xFF))
scratchSock.send(a.tostring() + cmd)
sendCMD('sensor-update "dave" 201')
My Attempt in C++ looks like this:
char* scratchencode(string cmd)
{
int cmdlength;
cmdlength = cmd.length();
char* combind = new char[20];
const char * sCmd = cmd.c_str();
char append[]={(cmdlength >> 24) & 0xFF, (cmdlength >> 16) & 0xFF, (cmdlength >> 8) & 0xFF, (cmdlength & 0xFF)};
strcpy(combind,append);
strcpy(combind,sCmd);
return combind;
}
Needless to say it doesn't work. Can anyone help with porting the code? I
have tried to mimic the Python code and the original document at
<http://wiki.scratch.mit.edu/wiki/Remote_Sensors_Protocol> but have had no
success.
Chris
Answer: I have solved the problem. Thank you Paweł Stawarz, your advice was just what
I needed; I converted the entire function to work with strings and it worked
first time.
The code is as follows:
string scratchencode(string cmd)
{
int cmdlength; // holds the length of cmd
string combind; // used to store the concatenated Packet
string mgsSize; // used to store Message size
cmdlength = cmd.length(); // length of CMD
    // convert the integer length to a 4-byte 32-bit big-endian number, using bit shifting
mgsSize = (cmdlength >> 24);
mgsSize += (cmdlength >> 16);
mgsSize += (cmdlength >> 8);
mgsSize += cmdlength;
combind = mgsSize + cmd; // concatenate mgsSize and cmd producing a structure of [size][size][size][size][string CMD (size bytes long)]
return combind; // return the string
}
|
Python: datetime64 issues with range
Question: I am trying to have a vector of seconds between two time intervals:
import numpy as np
import pandas as pd
date="2011-01-10"
start=np.datetime64(date+'T09:30:00')
end=np.datetime64(date+'T16:00:00')
range = pd.date_range(start, end, freq='S')
For some reason when I `print range` I get:
[2011-01-10 17:30:00, ..., 2011-01-11 00:00:00]
So the length is 23401 which is what I want but definitely not the correct
time interval. Why is that?
Also, if I have a DataFrame `df` with a column of datetime64 format that looks
like:
Time
15:59:57.887529007
15:59:57.805383290
Once I solve the problem above, will I be able to do the following:
`data = df.reindex(df.Time + range)` followed by `data = data.ffill()`?
I need to do the exact steps proposed under EDIT: SOLUTION of the OP question
asked [here](http://stackoverflow.com/questions/15930885/converting-
irregularly-time-stamped-measurements-into-equally-spaced-time-weigh) except
with `datetime64` format. Possible?
Answer: It seems that pandas `date_range` is dropping the timezone (looks like a bug,
I think it's already filed...), you can use Timestamp rather than datetime64
to workaround this:
In [11]: start = pd.Timestamp(date+'T09:30:00')
In [12]: end = pd.Timestamp(date+'T16:00:00')
In [13]: pd.date_range(start, end, freq='S')
Out[13]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2011-01-10 09:30:00, ..., 2011-01-10 16:00:00]
Length: 23401, Freq: S, Timezone: None
Note: to see that it's a timezone issue: you're in UTC-8, and 16:00 + 8:00 == 00:00 (the
next day), which matches the shifted output above.
|
Least-Squares Fit to a Straight Line python code
Question: I have a scatter plot composed of X and Y coordinates. I want to use the
Least-Squares Fit to a Straight Line to obtain the line of best fit.
The Least-Squares Fit to a Straight Line refers to: if (x_1, y_1), ..., (x_n, y_n)
are measured pairs of data, then the best straight line is y = A + Bx.
Here is my code in python:
# number of points is 50
A = (sum(x**2)*sum(y) - sum(x)*sum(x*y)) / (50*sum(x**2) - (sum(x))**2)
B = (50*sum(x*y) - sum(x)*sum(y)) / (50*sum(x**2) - (sum(x))**2)
print (A,B)
Does this look correct? I'm having issues printing A and B. Thank you!
Answer: The simplest option, if you just want a line, is `scipy.stats.linregress`:
>>> from scipy import stats
>>> slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
[Link to
docs](http://docs.scipy.org/doc/scipy-0.13.0/reference/generated/scipy.stats.linregress.html)
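If you want to keep your own closed-form version, a sketch using numpy arrays (so that `sum(x**2)` and the other sums behave element-wise as intended) might look like this, with the hard-coded 50 replaced by `len(x)`:
import numpy as np
def fit_line(x, y):
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    denom = n * np.sum(x**2) - np.sum(x)**2
    A = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x*y)) / denom  # intercept
    B = (n * np.sum(x*y) - np.sum(x) * np.sum(y)) / denom             # slope
    return A, B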
|
How to normalize a histogram in python? (updated)
Question: I'm trying to plot a normed histogram, but instead of getting 1 as the maximum value
on the y-axis, I'm getting different numbers.
For array k=(1,4,3,1)
import numpy as np
def plotGraph():
import matplotlib.pyplot as plt
k=(1,4,3,1)
plt.hist(k, normed=1)
from numpy import *
plt.xticks( arange(10) ) # 10 ticks on x axis
plt.show()
plotGraph()
I get this histogram, which doesn't look normed.

For a different array k=(3,3,3,3)
import numpy as np
def plotGraph():
import matplotlib.pyplot as plt
k=(3,3,3,3)
plt.hist(k, normed=1)
from numpy import *
plt.xticks( arange(10) ) # 10 ticks on x axis
plt.show()
plotGraph()
I get this histogram, whose max y-value is 10.

For different k I get a different max value of y even though normed=1 or
normed=True.
Why does the normalization (if it works) change based on the data, and how can I
make the maximum value of y equal to 1?
**UPDATE:**
I am trying to implement [Carsten
König](http://stackoverflow.com/users/1542814/carsten-konig) answer from
[plotting histograms whose bar heights sum to 1 in
matplotlib](http://stackoverflow.com/questions/3866520/plotting-histograms-
whose-bar-heights-sum-to-1-in-matplotlib) and getting very weird result:
import numpy as np
def plotGraph():
import matplotlib.pyplot as plt
k=(1,4,3,1)
weights = np.ones_like(k)/len(k)
plt.hist(k, weights=weights)
from numpy import *
plt.xticks( arange(10) ) # 10 ticks on x axis
plt.show()
plotGraph()
Result:

What am I doing wrong?
Thanks
Answer: When you plot a normalized histogram, it is not the height that should sum up
to one, but the area underneath the curve that should sum up to one:
In [44]:
import matplotlib.pyplot as plt
k=(3,3,3,3)
x,bins,p=plt.hist(k, normed=1)
from numpy import *
plt.xticks( arange(10) ) # 10 ticks on x axis
plt.show()
In [45]:
print bins
[ 2.5 2.6 2.7 2.8 2.9 3. 3.1 3.2 3.3 3.4 3.5]
Here, in this example, the bin width is 0.1, so the area underneath the curve sums
up to one (0.1*10).
To have the sum of height to be 1, add the following before `plt.show()`:
for item in p:
item.set_height(item.get_height()/sum(x))

|
Execution of Python code with -m option or not
Question: The Python interpreter has the `-m` _module_ option that "Runs library module
_module_ as a script".
With this python code a.py:
if __name__ == "__main__":
print __package__
print __name__
I tested `python -m a` to get
"" <-- Empty String
__main__
whereas `python a.py` returns
None <-- None
__main__
To me, those two invocations seem to be the same, except that __package__ is not
None when invoked with the -m option.
Interestingly, with `python -m runpy a`, I get the same as `python -m a`, with the
Python module compiled to a.pyc.
What's the (practical) difference between these invocations? Any pros and cons
between them?
Also, David Beazley's Python Essential Reference explains it as "The -m option
runs a library module as a script which executes inside the __main__ module
prior to the execution of the main script". What does it mean?
Answer: When you use the [`-m` command-line
flag](http://docs.python.org/2/using/cmdline.html#cmdoption-m), Python will
import a module _or package_ for you, then run it as a script. When you don't
use the `-m` flag, the file you named is run as _just a script_.
The distinction is important when you try to run a package. There is a big
difference between:
python foo/bar/baz.py
and
python -m foo.bar.baz
as in the latter case, `foo.bar` is imported and relative imports will work
correctly with `foo.bar` as the starting point.
Demo:
$ mkdir -p test/foo/bar
$ touch test/foo/__init__.py
$ touch test/foo/bar/__init__.py
$ cat << EOF > test/foo/bar/baz.py
> if __name__ == "__main__":
> print __package__
> print __name__
>
> EOF
$ PYTHONPATH=test python test/foo/bar/baz.py
None
__main__
$ PYTHONPATH=test bin/python -m foo.bar.baz
foo.bar
__main__
As a result, Python has to actually care about packages when using the `-m`
switch. A normal script can never _be_ a package, so `__package__` is set to
`None`.
But run a package or module _inside_ a package with `-m` and now there is at
least the _possibility_ of a package, so the `__package__` variable is set to
a string value; in the above demonstration it is set to `foo.bar`, for plain
modules not inside a package, it is set to an empty string.
As for the `__main__` _module_ ; Python imports scripts being run as it would
a regular module. A new module object is created to hold the global namespace,
stored in `sys.modules['__main__']`. This is what the `__name__` variable
refers to, it is a key in that structure.
For packages, you can create a `__main__.py` module and have that run when
running `python -m package_name`; in fact that's the only way you _can_ run a
package as a script:
$ PYTHONPATH=test python -m foo.bar
python: No module named foo.bar.__main__; 'foo.bar' is a package and cannot be directly executed
$ cp test/foo/bar/baz.py test/foo/bar/__main__.py
$ PYTHONPATH=test python -m foo.bar
foo.bar
__main__
So, when naming a package for running with `-m`, Python looks for a `__main__`
module contained in that package and executes that as a script. Its name is
then still set to `__main__`, and the module object is still stored in
`sys.modules['__main__']`.
|
Use Python's bisect in C/Objective-C
Question: I'm looking to port this class written in Python
<http://stackoverflow.com/a/4113400/129202> into Objective-C, or C.
It uses something called `bisect.bisect_right`. I'm not terribly experienced
with Python, so how would one implement that in C/obj-c?
Answer: This is the class I came up with. I just tested it one million times and it
gives the expected results. It's not exactly a binary search, I guess, but it does the job.
No outside libraries needed.
The header file:
//
// Mjweightedtuple2.h
// orixnknk
//
// Created by Jonny Bergström on 3/7/14.
// Copyright (c) 2014 Jonny Bergstrom. All rights reserved.
//
#import <Foundation/Foundation.h>
@interface Mjweightedtuple2 : NSObject
-(id)initWithItems:(NSDictionary*)items;
-(id)randomValue;
@end
The implementation file:
//
// Mjweightedtuple2.m
// orixnknk
//
// Created by Jonny Bergström on 3/7/14.
// Copyright (c) 2014 Jonny Bergstrom. All rights reserved.
//
#import "Mjweightedtuple2.h"
@interface Valueandlength : NSObject
@property (nonatomic, retain) id value;
@property NSInteger high;
@end
@implementation Valueandlength
@synthesize value; // retain
-(void)dealloc {
self.value = nil;
[super dealloc];
}
@end
@interface Mjweightedtuple2 ()
@property (nonatomic, retain) NSArray* thearray;
@property NSInteger length;
@end
@implementation Mjweightedtuple2
@synthesize thearray;
@synthesize length; // assign
-(void)dealloc {
self.thearray = nil;
[super dealloc];
}
-(id)initWithItems:(NSDictionary*)items {
self = [super init];
if (self) {
// NSDictionary items = @{
// @"pear": [NSNumber numberWithInteger:1],
// @"banana": [NSNumber numberWithInteger:100],
// @"apple": [NSNumber numberWithInteger:15],
// };
NSMutableArray* temparray = [NSMutableArray array];
//NSMutableSet* tempset = [NSMutableSet set];
NSInteger maxval = 0;
Valueandlength* val;
NSNumber* numberValue;
for (NSString* key in items.allKeys) {
numberValue = items[key];
const NSInteger VALUE = [numberValue integerValue];
maxval += VALUE;
val = [[Valueandlength alloc] init];
val.value = key;
val.high = maxval;
[temparray addObject:val];
[val release];
}
self.thearray = [NSArray arrayWithArray:temparray];
self.length = maxval;
}
return self;
}
-(id)randomValue {
const NSInteger INDEXTOLOOKFOR = arc4random_uniform(self.length);
for (Valueandlength* val in self.thearray) {
if (INDEXTOLOOKFOR < val.high)
return val.value;
}
return nil;
}
@end
This is how I tested:
NSDictionary* items = @{
@"pear": [NSNumber numberWithInteger:1],
@"banana": [NSNumber numberWithInteger:1],
@"apple": [NSNumber numberWithInteger:1],
};
Mjweightedtuple2* r = [[Mjweightedtuple2 alloc] initWithItems:items];
DLog(@"Mjweightedtuple2 test");
NSMutableDictionary* dicresult = [NSMutableDictionary dictionary];
for (NSString* key in items.allKeys) {
[dicresult setObject:[NSNumber numberWithInteger:0] forKey:key];
}
const NSInteger TIMES = 1000000;
for (NSInteger i = 0; i < TIMES; i++) {
//DLog(@"%d: %@", i + 1, [r randomValue]);
NSString* selectedkey = [r randomValue];
NSNumber* number = dicresult[selectedkey];
[dicresult setObject:[NSNumber numberWithInteger:1 + number.integerValue] forKey:selectedkey];
}
const double DTIMES = TIMES;
for (NSString* key in dicresult.allKeys) {
const NSInteger FINALCOUNT = [dicresult[key] integerValue];
DLog(@"%@: %d = %.1f%%", key, FINALCOUNT, ((double)FINALCOUNT / DTIMES) * 100.0);
}
Results:
> banana: 333560 = 33.4% apple: 333540 = 33.4% pear: 332900 = 33.3%
Then I made bananas preferred 90% of the time...
NSDictionary* items = @{
@"pear": [NSNumber numberWithInteger:5000],
@"banana": [NSNumber numberWithInteger:90000],
@"apple": [NSNumber numberWithInteger:5000],
};
> banana: 899258 = 89.9% apple: 50362 = 5.0% pear: 50380 = 5.0%
|
Import Java library in RIDE
Question: I'm trying to use a Java library in RIDE. I found a good tutorial (
<https://blog.codecentric.de/en/2012/06/robot-framework-tutorial-writing-
keyword-libraries-in-java/>). I followed it, but when the time comes to import
and use the Java library (Database Library) in RIDE, it fails. When I look at the
page with my different imports, the Java library is shown in red and not in
black like the others.
And when I try to run with Jybot, I get the well-known message: [ ERROR ]
Error in file
'C:\Users\XXXXXX\Documents\Robot_Test\implementation\DB_Test\Example.html':
Importing test library 'org.robot.database.keywords.DatabaseLibrary' failed:
ImportError: No module named robot
I followed every line of the tutorial, even the part about setting CLASSPATH.
Any idea? (I know that this library exists in Python, but I want to write my
own Java libraries. ^^) Thanks
Answer: This worked for me using:
* Jython 2.7b4
* Robotframework 2.8.7
* Ride 1.3
Create Lib and compile it (you do not need to jar it)
Directory structure is
run_ride.sh
libs/DemoLib.class
tests/DemoLibTest.txt
Excerpt from tests/DemoLibTest.txt:
* Settings
Library ../libs/DemoLib.class
* Test Cases
DemoLibTest
Print Demo
Start Ride, switch to tab "Run", choose `Execution Profile: jybot`, press
Start, output is:
Starting test: tests.DemoLibTest.DemoLibTest
20150304 19:13:20.321 : INFO : ---------- Demo ---------------
To avoid confusion put this line
echo $CLASSPATH
in your Ride startup script in order to ensure that your library is really
imported. By the way, in my Ride the import is also marked red. Sometimes
restarting Ride might help. But the colour does not mean anything, if your
settings are correct.
|
Error handling with verbose output
Question: I'm trying to implement the `--verbose` option in my script. The idea is to
turn on extra printing of errors etc. for debugging, but for some reason it
doesn't seem to work. I've tried a few variations of the `if verbose` statement
but no joy. I'm hoping someone could point me in the right direction.
**CLI EXAMPLE**
`./attack2.py -f wordfile.txt -d google.com --verbose 1`
**CLI OUTPUT**
unknown@ubuntu:~$ ./attack2.py -f wordfile.txt -d google.com --verbose 1
173.194.34.149
173.194.34.130
unknown@ubuntu:~$
**ARG PRINT**
{'--domain': 'google.com',
'--file': 'wordfile.txt',
'--help': False,
'--thread': False,
'--verbose': True,
'10': False,
'<1>': '1'}
**CODE**
#!/usr/bin/python
"""
Description:
Basic Domain bruteforcer
Usage:
attack2.py (-f <file>) (-d <domain>) [-t 10] [-v <1>]
attack2.py -h | --help
Arguments:
-f --file File to read potential Sub-domains from. (Required)
-d --domain Domain to bruteforce. (Required)
Options:
-h --help Show this screen.
-p --proxy Proxy address and port. [default: http://127.0.0.1:8080] (Optional)
-t --thread Thread count. (Optional)
-v --verbose Turn debug on. (Optional)
"""
import socket
from docopt import docopt
def fread(dwords):
flist = open(dwords).readlines()
return [s.replace('\n', '.') for s in flist]
def subcheck(subdomain, domain, verbose):
vdomain = {}
for sub in subdomain:
try:
check = socket.gethostbyname(sub + domain)
vdomain[sub + domain] = check
except socket.gaierror, e:
if verbose == True:
print arguments
print e, sub + domain
else:
pass
return vdomain
if __name__ == "__main__":
arguments = docopt(__doc__, version='0.1a')
fread(arguments['--file'])
returned_list = fread(arguments['--file'])
returned_domains = subcheck(returned_list, arguments['--domain'], ['--verbose'])
Answer: The line below, where you call `subcheck`,
returned_domains = subcheck(returned_list, arguments['--domain'], ['--verbose'])
should be
returned_domains = subcheck(returned_list, arguments['--domain'], arguments['--verbose'])
You forgot to pass the `verbose` value from `arguments`; instead you passed the
literal list `['--verbose']`.
|
What is islambda function in python
Question: Can anyone help me understand this:
def isalambda(v):
return isinstance(v, type(lambda: None)) and v.__name__ == '<lambda>'
Answer: The function tests if a function object was created using a `lambda`
statement:
>>> l = lambda: None
>>> l
<function <lambda> at 0x1020ae0c8>
>>> l.__name__
'<lambda>'
`lambda` expressions create a regular function object with the name set to
`<lambda>` as there is no `def` statement naming the function.
It could just as well have used:
from types import FunctionType
def isalambda(v):
return isinstance(v, FunctionType) and v.__name__ == '<lambda>'
or using
[`inspect.isfunction()`](http://docs.python.org/2/library/inspect.html#inspect.isfunction):
from inspect import isfunction
def isalambda(v):
return isfunction(v) and v.__name__ == '<lambda>'
The test isn't foolproof; you can assign any string to the `__name__`
attribute:
>>> def foo(): pass
...
>>> foo.__name__
'foo'
>>> foo.__name__ = '<lambda>'
>>> foo.__name__
'<lambda>'
>>> foo
<function <lambda> at 0x1020ae050>
|
Unable to import MySQLdb in ansible module
Question: I am trying to write a custom module in Ansible. While using `import MySQLdb` it
gives me this error:
failed: [127.0.0.1] => {"failed": true, "parsed": false}
invalid output was: Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-1394199347.29-33439012674717/inventory", line 11, in <module>
import MySQLdb
ImportError: No module named MySQLdb
Using
**Python Version :** 2.6.6
**MySQL-python Version :** 1.2.3
Python Code:-
#!/usr/bin/python
import datetime
import sys
import json
import os
import shlex
import MySQLdb
db = MySQLdb.connect("localhost","user","pwd","db_name" )
cursor = db.cursor()
cursor.execute("SELECT * FROM hosts")
data = cursor.fetchone()
print data
db.close()
I have written a playbook to run the Ansible module:
inventory.yaml:
---
- hosts: webservers
user: root
sudo: True
vars:
act: list
tasks:
- name: Run module inventory
action: inventory act="{{act}}" prod="roop"
I'm running this playbook using the command below:
ansible-playbook -v playboook/path/inventory.yaml
The same code works in the Python command line (>>>) but not in the Ansible
module.
In my Ansible module other code is working. Is there any configuration
setting I need to do for Ansible?
Answer: Your MySQLdb module is not in the Python environment. In your script you are
running python using: `#!/usr/bin/python`. It's quite possible that the Python
that you are running from the command line (`>>>`) is not the same Python. You
can try running:
which python
to find out which Python you are running off of. So, if it's another Python to
install it, you can run on Ubuntu:
sudo apt-get install python-mysqldb
or on CentOS, RedHat:
sudo yum install MySQL-python
Hope it helps.
|
using GPIO to control raspberry picamera with push button
Question:
import time
import os
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(23 , GPIO.IN)
while True:
    if GPIO.input(23) == 1:
        os.system('raspistill -o image.jpg')
        os.system('gpicview image.jpg &')
        time.sleep(20)
        os.system('killall gpicview')
    else:
        print "Ready to take picture"
GPIO.cleanup()
Dear friends, I'm trying to control the Raspberry Pi camera with a button: when
I press the button it should take a picture. I configured the GPIO pins and the
button, but when I press the button nothing happens; however, the camera keeps
taking photos by itself. How can I make it work with the button? Can any Python
experts guide me? Thanks!
Answer: Hardware doesn't give us a clean 1 or 0, it bounces back and forth a bit.
Check this code for "debouncing" the button:
<https://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/robot/buttons_and_switches/>
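A hedged sketch of that approach with RPi.GPIO: enable the internal pull-down so the pin reads 0 until the button is pressed, and let `add_event_detect` with a `bouncetime` turn one physical press into one callback. The pin number and wiring (button between GPIO 23 and 3.3V) are assumptions taken from the question.
import os
import time
import RPi.GPIO as GPIO
def take_picture(channel):
    os.system('raspistill -o image.jpg')
    os.system('gpicview image.jpg &')
    time.sleep(20)
    os.system('killall gpicview')
GPIO.setmode(GPIO.BCM)
# pull-down keeps the input at 0 until the button pulls it up to 3.3V
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
# bouncetime (in ms) ignores the rapid on/off chatter of a mechanical switch
GPIO.add_event_detect(23, GPIO.RISING, callback=take_picture, bouncetime=300)
try:
    while True:
        time.sleep(1)  # the main loop just waits; the callback does the work
except KeyboardInterrupt:
    GPIO.cleanup()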
|
Python 2 tuple / list unpacking using star throws SyntaxError
Question: Why does the following code throw a `SyntaxError` for `*phones` in Python
2.7.3?
contact = ('name', 'email', 'phone1', 'phone2')
name, email, *phones = contact
Was this introduced in Python 3 and not backported? How can I get this to work
in Python 2? That is, if there isn't some trivial way to fix things here.
Answer: Yup, the extended unpacking syntax (using `*` to take the rest) is Python 3.x
only. The closest you can get in Python 2.x is explicitly slicing the parts
you want from the remainder:
contact = ('name', 'email', 'phone1', 'phone2')
(name, email), phones = contact[:2], contact[2:]
If you needed it to work on arbitrary iterables, then you can use something
like:
from itertools import islice
i = iter(contact)
(name, email), phone = tuple(islice(i, 2)), list(i)
|
Python (if, elif, else) working with timestamp
Question: I am taking a beginning Python programming class and I am having trouble
getting the code below to work correctly. The assignment asks: write a Python
code that uses the “strftime()” function to get the today’s weekday value and
then use an “if..elif..else” statement to display the associated message. So,
with today being Friday (w == 5) for me, it should print "Prevention is better
than cure." Instead, it keeps printing the else statement "Stupid is as stupid
does." Advice?
import datetime
t = datetime.date.today()
w = t.strftime("%w"); # day of week
if (w == 0): print("The devil looks after his own.");
elif (w == 1): print("Everything comes to him who waits.");
elif (w == 2): print("Give credit where credit is due.");
elif (w == 3): print("If you pay peanuts, you get monkeys.");
elif (w == 4): print("Money makes the world go round.");
elif (w == 5): print("Prevention is better than cure.");
else: print("Stupid is as stupid does.");
Answer: Instead of using `strftime()`, which returns a `str`, use `weekday()`, which
returns an `int`:
t = datetime.date.today()
w = (t.weekday() + 1) % 7  # weekday(): Monday == 0; %w: Sunday == 0, so shift and wrap
# or simply: w = int(t.strftime("%w"))
In Python, a string and a number are never equal.
|
Django caching issue
Question: When I change something in .py files, the changes are not shown
immediately, but only after a few minutes (even half an hour). Restarting and
reloading Apache doesn't help.
I'm using Apache2 with mod_wsgi on an Ubuntu server.
It doesn't even create .pyc files.
Here's my [settings.py](http://pastebin.com/1DVhHUhA) file. What can I do to
disable caching?
# Django settings for schedule project.
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ADMINS = (
# ('Your Name', '[email protected]'),
)
MANAGERS = ADMINS
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'.
'NAME': 'schedule', # Or path to database file if using sqlite3.
# The following settings are not used with sqlite3:
'USER': 'u',
'PASSWORD': 'p',
'HOST': '', # Empty for localhost through domain sockets or '127.0.0.1' for localhost through TCP.
'PORT': '', # Set to empty string for default.
}
}
# Hosts/domain names that are valid for this site; required if DEBUG is False
# See https://docs.djangoproject.com/en/1.5/ref/settings/#allowed-hosts
ALLOWED_HOSTS = []
# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# In a Windows environment this must be set to your system time zone.
TIME_ZONE = 'Europe/Warsaw'
# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'pl'
SITE_ID = 1
# If you set this to False, Django will make some optimizations so as not
# to load the internationalization machinery.
USE_I18N = True
# If you set this to False, Django will not format dates, numbers and
# calendars according to the current locale.
USE_L10N = True
# If you set this to False, Django will not use timezone-aware datetimes.
USE_TZ = True
# Absolute filesystem path to the directory that will hold user-uploaded files.
# Example: "/var/www/example.com/media/"
MEDIA_ROOT = ''
# URL that handles the media served from MEDIA_ROOT. Make sure to use a
# trailing slash.
# Examples: "http://example.com/media/", "http://media.example.com/"
MEDIA_URL = ''
# Absolute path to the directory static files should be collected to.
# Don't put anything in this directory yourself; store your static files
# in apps' "static/" subdirectories and in STATICFILES_DIRS.
# Example: "/var/www/example.com/static/"
STATIC_ROOT = ''
# URL prefix for static files.
# Example: "http://example.com/static/", "http://static.example.com/"
STATIC_URL = '/static/'
# Additional locations of static files
STATICFILES_DIRS = (
# Put strings here, like "/home/html/static" or "C:/www/django/static".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
# List of finder classes that know how to find static files in
# various locations.
STATICFILES_FINDERS = (
'django.contrib.staticfiles.finders.FileSystemFinder',
'django.contrib.staticfiles.finders.AppDirectoriesFinder',
# 'django.contrib.staticfiles.finders.DefaultStorageFinder',
)
# Make this unique, and don't share it with anybody.
SECRET_KEY = '8d-#p35@xz=10t%@^&lo9o@)+#u1e%fi^8$m2gi=t5&4(557_l'
# List of callables that know how to import templates from various sources.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'schedule.urls'
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'schedule.wsgi.application'
TEMPLATE_DIRS = (
"/home/aklajnert/apps/schedule/templates",
# Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
# Always use forward slashes, even on Windows.
# Don't forget to use absolute paths, not relative paths.
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'django.contrib.messages',
'django.contrib.staticfiles',
'stops',
# Uncomment the next line to enable the admin:
# 'django.contrib.admin',
# Uncomment the next line to enable admin documentation:
# 'django.contrib.admindocs',
)
# A sample logging configuration. The only tangible logging
# performed by this configuration is to send an email to
# the site admins on every HTTP 500 error when DEBUG=False.
# See http://docs.djangoproject.com/en/dev/topics/logging for
# more details on how to customize your logging configuration.
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse'
}
},
'handlers': {
'mail_admins': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'django.utils.log.AdminEmailHandler'
}
},
'loggers': {
'django.request': {
'handlers': ['mail_admins'],
'level': 'ERROR',
'propagate': True,
},
}
}
Answer: Make sure you read:
* <http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode>
It details what you need to do to ensure a proper restart and have code be
reloaded. What is required depends on what mode of mod_wsgi you are using.
|
Make a script in python that lists adjacent words through Unix?
Question: How can I write a script in Python, using nested dictionaries, that takes a
`txt` file written as
white,black,green,purple,lavendar:1
red,black,white,silver:3
black,white,magenta,scarlet:4
and makes it print, for each entry before the **:** character, all neighbors it
showed up next to:
white: black silver magenta
black: white green red
green: black purple
and so on
Edit: Well, I didn't post what I have because it is rather
insubstantial... I'll update it if I figure out anything else. I have just
been stuck for a while; all I have figured out how to do is print each
word/letter on a separate line with:
from sys import argv
script,filename=argv
txt=open(filename)
for line in txt:
    line=line[0:line.index(':')]
for word in line.split(","):
print word
I guess what I want is some kind of for loop that runs through each
word; if the word is not already in a dictionary, I'll add it, and then
I'll search for the words that appear next to it in the file.
Answer: **Input**
a,c,f,g,hi,lw:1
f,g,j,ew,f,h,a,w:3
fd,s,f,g,s:4
**Code**
neighbours = {}
for line in file('4-input.txt'):
line = line.strip()
if not line:
continue # skip empty input lines
line = line[:line.index(':')] # take everything left of ':'
previous_token = ''
for token in line.split(','):
if previous_token:
neighbours.setdefault(previous_token, []).append(token)
neighbours.setdefault(token, []).append(previous_token)
previous_token = token
import pprint
pprint.pprint(neighbours)
**Output**
{'a': ['c', 'h', 'w'],
'c': ['a', 'f'],
'ew': ['j', 'f'],
'f': ['c', 'g', 'g', 'ew', 'h', 's', 'g'],
'fd': ['s'],
'g': ['f', 'hi', 'f', 'j', 'f', 's'],
'h': ['f', 'a'],
'hi': ['g', 'lw'],
'j': ['g', 'ew'],
'lw': ['hi'],
's': ['fd', 'f', 'g'],
'w': ['a']}
Tidying up the prettyprinted dictionary is left as an exercise for the reader.
(Because dictionaries are inherently not sorted into any order, and removing
the duplicates without changing the ordering of the lists is also annoying).
Easy solution:
for word, neighbour_list in neighbours.items():
print word, ':', ', '.join(set(neighbour_list))
But that does change the ordering.
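If the ordering matters, here is a small sketch (my addition, not part of the answer above) that drops duplicates while keeping the first-seen order, using only the `neighbours` dict already built:
    for word, neighbour_list in neighbours.items():
        seen = set()
        # seen.add() returns None, so the expression keeps only the first occurrence
        deduped = [n for n in neighbour_list if not (n in seen or seen.add(n))]
        print word, ':', ', '.join(deduped)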
|
Hammerwatch Puzzle; Solving with Python
Question: So, my wife was playing Hammerwatch on Steam. She came across a puzzle I
decided I'd try to program a solution for.
Here's how the puzzle works:
Activating a switch either turns ON or OFF that switch, and toggles its
adjacent switches as well.
Here's a YouTube video of the puzzle within the game:
<http://www.youtube.com/watch?v=OM1XD7IZ0cg>
I figured out how to get the mechanics of the puzzle working correctly. I
eventually realized I have two options to get the computer to solve this:
**A)** Allow the computer to solve by randomly selecting switches
...or...
**B)** Create an algorithm that will allow the computer to solve the puzzle
more efficiently.
Being a new programmer (halfway through CodeAcademy tutorials, halfway through
LPTHW, and currently working through the MIT edX computer science Python
course), I feel I'm a little limited in my abilities to figure this out. I've
come to learn! Please help!
## Please help with:
I need help either figuring out a better way to solve this randomly, or even
better, to have an algorithm that will allow the computer to _systematically_
solve the puzzle.
The only thing I could think of was to have the computer _store_ the puzzle's
states in a list or dictionary, so that the program could skip over those stored
states and be pointed toward new possible solutions.
## How the current program works:
I intended to allow the user to input the current state of the puzzle-board
with the first 9 raw_inputs. It then enters a loop, randomly toggling the
puzzle-board's switches until they're all ON.
_P.S.: While I was signing up for a StackOverflow account and typing this
message, my computer has been running this program in the background to find a
solution. It's been about an hour, still hasn't found a solution, it is
currently on its ~92,000,000th iteration. I don't think it's working..._
import random
def switcheroo(x):
"""
switches 'x' to 1 if it's a 0 and vice-versa
"""
if x == 0:
x = 1
else:
x = 0
return x
# original input variables
a1 = 0
a2 = 0
a3 = 0
b1 = 0
b2 = 0
b3 = 0
c1 = 0
c2 = 0
c3 = 0
# puzzleboard
print "\n\n"
print " 1 2 3 "
print " -------------"
print "a |",a1,"|",a2,"|",a3,"|"
print " -------------"
print "b |",b1,"|",b2,"|",b3,"|"
print " -------------"
print "c |",c1,"|",c2,"|",c3,"|"
print " -------------"
print "\n\n"
print "What's ON/OFF? (type 0 for OFF, 1 for ON)"
a1 = int(raw_input("a1: "))
a2 = int(raw_input("a2: "))
a3 = int(raw_input("a3: "))
b1 = int(raw_input("b1: "))
b2 = int(raw_input("b2: "))
b3 = int(raw_input("b3: "))
c1 = int(raw_input("c1: "))
c2 = int(raw_input("c2: "))
c3 = int(raw_input("c3: "))
# for counting the iterations within the loop
iteration = 0
# to stop loop if all switches are ON
ans = a1 and a2 and a3 and b1 and b2 and b3 and c1 and c2 and c3
while ans == False:
# randomly generates number, flipping random switches
counter = random.randint(1,9)
if counter == 1:
switch = "a1"
elif counter == 2:
switch = "a2"
elif counter == 3:
switch = "a3"
elif counter == 4:
switch = "b1"
elif counter == 5:
switch = "b2"
elif counter == 6:
switch = "b3"
elif counter == 7:
switch = "c1"
elif counter == 8:
switch = "c2"
elif counter == 9:
switch = "c9"
# PUZZLE MECHANICES #
if switch == "a1":
a1 = switcheroo(a1)
a2 = switcheroo(a2)
b1 = switcheroo(b1)
if switch == "a2":
a2 = switcheroo(a2)
a1 = switcheroo(a1)
a3 = switcheroo(a3)
b2 = switcheroo(b2)
if switch == "a3":
a3 = switcheroo(a3)
a2 = switcheroo(a2)
b3 = switcheroo(b3)
if switch == "b1":
b1 = switcheroo(b1)
b2 = switcheroo(b2)
a1 = switcheroo(a1)
c1 = switcheroo(c1)
if switch == "b2":
b2 = switcheroo(b2)
a2 = switcheroo(a2)
b1 = switcheroo(b1)
b3 = switcheroo(b3)
c2 = switcheroo(c2)
if switch == "b3":
b3 = switcheroo(b3)
b1 = switcheroo(b1)
b2 = switcheroo(b2)
c3 = switcheroo(c3)
# Edit 1
if switch == "c1":
c1 = switcheroo(c1)
c2 = switcheroo(c2)
b1 = switcheroo(b1)
if switch == "c2":
c2 = switcheroo(c2)
c1 = switcheroo(c1)
c3 = switcheroo(c3)
b2 = switcheroo(b2)
if switch == "c3":
c3 = switcheroo(c3)
c2 = switcheroo(c2)
b3 = switcheroo(b3)
if switch == "stop":
break
# prints puzzle-board state at end of loop iteration
print "\n\n"
print " 1 2 3 "
print " -------------"
print "a |",a1,"|",a2,"|",a3,"|"
print " -------------"
print "b |",b1,"|",b2,"|",b3,"|"
print " -------------"
print "c |",c1,"|",c2,"|",c3,"|"
print " -------------"
print "\n\n"
# prints which # was randomly generated
print "random #: ", counter
# tracks loop iteration
iteration += 1
print "iteration", iteration
if ans == True:
print "I figured it out!"
Answer: There's a well-known method for solving this problem. Let x_1, ..., x_n be
variables corresponding to whether you press the n'th button as part of the
solution, and let a_1, ..., a_n be the initial state.
Let's say you're solving a 3x3 problem, and the variables are set up like
this:
x_1 x_2 x_3
x_4 x_5 x_6
x_7 x_8 x_9
and this initial state is:
a_1 a_2 a_3
a_4 a_5 a_6
a_7 a_8 a_9
Now, you can write down some equations (in arithmetic modulo 2) that the
solution must satisfy. It's basically encoding the rule about which switches
cause a particular light to toggle.
a_1 = x_1 + x_2 + x_4
a_2 = x_1 + x_2 + x_3 + x_5
...
a_5 = x_2 + x_4 + x_5 + x_6 + x_8
...
a_9 = x_6 + x_8 + x_9
Now you can use gaussian elimination to solve this set of simultaneous
equations. Because you're working in arithmetic modulo 2, it's actually a bit
easier than simultaneous equations over real numbers. For example, to get rid
of x_1 in the 2nd equation, simply add the first equation to it.
a_1 + a_2 = (x_1 + x_2 + x_4) + (x_1 + x_2 + x_3 + x_5) = x_3 + x_4 + x_5
Specifically, here's the Gaussian elimination algorithm in arithmetic modulo
2:
* Pick an equation with an x_1 in it. Name it E_1.
* Add E_1 to every other unnamed equation with an x_1 in it.
* Repeat for x_2, x_3, ...., x_n.
Now, E_n is an equation which only contains x_n. You can substitute the value
for x_n you get from this into the earlier equations. Repeat for E_{n-1}, ...,
E_1.
Overall, this solves the problem in O(n^3) operations.
Here's some code.
class Unsolvable(Exception):
pass
def switches(n, m, vs):
eqs = []
for i in xrange(n):
for j in xrange(m):
eq = set()
for d in xrange(-1, 2):
if 0 <= i+d < n: eq.add((i+d)*m+j)
if d != 0 and 0 <= j+d < m: eq.add(i*m+j+d)
eqs.append([vs[i][j], eq])
N = len(eqs)
for i in xrange(N):
for j in xrange(i, N):
if i in eqs[j][1]:
eqs[i], eqs[j] = eqs[j], eqs[i]
break
else:
raise Unsolvable()
for j in xrange(i+1, N):
if i in eqs[j][1]:
eqs[j][0] ^= eqs[i][0]
eqs[j][1] ^= eqs[i][1]
for i in xrange(N-1, -1, -1):
for j in xrange(i):
if i in eqs[j][1]:
eqs[j][0] ^= eqs[i][0]
eqs[j][1] ^= eqs[i][1]
return [(i//m,i%m) for i, eq in enumerate(eqs) if eq[0]]
print switches(4, 3, ([1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]))
You give it the height and width of the switch array, and the initial state a
row at a time. It returns the switches that you need to press to turn all the
lights off.
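As a sanity check (my addition, not part of the original answer), you can replay the returned presses with the same toggle rule — the pressed switch plus its orthogonal neighbours — and confirm that every light ends up off:
    def verify(n, m, board, presses):
        state = [row[:] for row in board]
        for (i, j) in presses:
            for di, dj in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
                if 0 <= i + di < n and 0 <= j + dj < m:
                    state[i + di][j + dj] ^= 1
        return all(cell == 0 for row in state for cell in row)
    board = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]]
    print verify(4, 3, board, switches(4, 3, board))  # expect True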
|
Defining Views and URLs for Many to Many field in Django
Question: I'm new to Django and have been stuck on this for a few days now. Hoping to
find some help here. I've searched stackoverflow and read through the django
docs but haven't been able to grasp this. I'm using Django 1.6.2 and Python
2.7.
I'm setting up a simple news app in which **article** has a **ManyToMany**
relationship with **category**. I'm running into trouble trying to display
articles from a specific category. I have the index working displaying all
articles and also the single page view is working e.g. clicking on article
title from index brings you to the article itself. Once in the article I am
displaying the article category. Up to here all is well. When I try to link
the category and display an index for all posts in that category I get a
**NoReverseMatch** for the url 'category-archive'.
Should I do this in a view like I'm trying or would the Manager work better?
Open to all suggestions and answers. Like I said I'm new so would like to know
best practice. Here is my code and thank you in advance for dealing with a
noobie.
**models.py**
from django.db import models
from tinymce import models as tinymce_models
class ArticleManager(models.Manager):
def all(self):
return super(ArticleManager, self).filter(active=True)
class Category(models.Model):
title = models.CharField(max_length=65)
slug = models.SlugField()
def __unicode__(self, ):
return self.title
class Article(models.Model):
title = models.CharField(max_length=65)
slug = models.SlugField()
description = models.CharField(max_length=165)
content = tinymce_models.HTMLField()
categories = models.ManyToManyField(Category)
image = models.ImageField(upload_to='article/images')
active = models.BooleanField(default=False)
timestamp = models.DateTimeField(auto_now_add=True, auto_now=False)
updated = models.DateTimeField(auto_now=True, auto_now_add=False)
objects = ArticleManager()
def __unicode__(self, ):
return self.title
class Meta:
ordering = ['-timestamp',]
**views.py**
from django.http import HttpResponse, HttpResponseRedirect
from django.shortcuts import render_to_response, RequestContext, get_object_or_404
from .models import Article, Category
def all_articles(request):
articles = Article.objects.all()
return render_to_response('news/all.html', locals(), context_instance=RequestContext(request))
def single_article(request, slug):
article = get_object_or_404(Article, slug=slug)
return render_to_response('news/single.html', locals(), context_instance=RequestContext(request))
def category_archive(request, slug):
articles = Article.objects.filter(category=category)
categories = Category.objects.all()
category = get_object_or_404(Category, slug=slug)
return render_to_response('news/category.html', locals(), context_instance=RequestContext(request))
**single.html** \- for single article view
{% extends 'base.html' %}
{% block content %}
<h1>{{ article.title }}</h1>
<img src='{{ MEDIA_URL }}{{ article.image }}' class="article-image img-responsive"/>
<p>{{ article.content|safe }}</p>
<p class='small'>
**this next line gets an error for the url 'category-archive'**
{% for category in article.categories.all %}Category: <a href='{% url "category-archive" %}{{ category.slug }}'>{{ category }}</a>{% endfor %}</p>
{% endblock %}
**category.html** \- display all articles in specific category
{% extends 'base.html' %}
{% block content %}
{% for article in articles %}
<h1><a href='{% url "articles" %}{{ article.slug }}'>{{ article }}</a></h1>
<a href='{% url "articles" %}{{ article.slug }}'><img src='{{ MEDIA_URL }}{{ article.image }}' class="img-responsive"/></a>
{{ article.description }}
{% if forloop.counter|divisibleby:4 %}
<hr/>
<div class='row'>
{% endif %}
{% endfor %}
</div>
{% endblock %}
**urls.py** \- project urls
from django.conf.urls import patterns, include, url
from django.conf import settings
from filebrowser.sites import site
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
(r'^tinymce/', include('tinymce.urls')),
(r'^admin/filebrowser/', include(site.urls)),
(r'^grappelli/', include('grappelli.urls')),
(r'^static/(?P<path>.*)$', 'django.views.static.serve',{
'document_root': settings.STATIC_ROOT
}),
(r'^media/(?P<path>.*)$', 'django.views.static.serve',{
'document_root': settings.MEDIA_ROOT
}),
url(r'^admin/', include(admin.site.urls)),
url(r'^$', 'dl.views.home', name='home'),
(r'^news/', include('news.urls')),
(r'^guides/', include('guides.urls')),
)
**urls.py** \- news urls
from django.conf import settings
from django.conf.urls import patterns, include, url
urlpatterns = patterns('news.views',
url(r'^$', 'all_articles', name='articles'),
url(r'^(?P<slug>[-\w]+)/$', 'single_article'),
**This next one is giving me the problem I suspect - should be url to category with articles**
url(r'^chive/(?P<slug>[-\w]+)/?', 'category_archive', name='category-archive'),
)
Answer: I would have posted this as a comment but I don't have the reputation. I think
the issue is that the URL dispatcher expects the category-archive pattern to also
receive the slug, so you should change the URL in the template to:
{% url "category-archive" category.slug %}
hope this helps!
|
is it possible to save generated wsdl stub classes to disk using SUDs in python
Question: I have a wsdl that takes a lot of time to get processed using SUDS.
client = Client(url)
Now, is there a way I can save the generated client classes from Python to disk?
(I tried using cPickle, but it gives an error because this protocol is meant to save
instances and the **type of** `client` is a **class**.) The reason I want to save
them is to ship the generated stub classes with a .py module (a plugin I am writing
for the Sublime editor); in my case the WSDL is quite static and takes a lot of
time to load.
Answer: Suds has a caching option that you can use on client creation:
from suds.cache import ObjectCache
oc = ObjectCache(days=0)
client = Client(url, cache=oc, cachingpolicy=1)
Caching policy description from suds documentation:
> **cachingpolicy**
>
> The caching policy, determines how data is cached. The default is 0. version
> 0.4+
>
> * 0 = XML documents such as WSDL & XSD.
>
> * 1 = WSDL object graph.
>
>
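Since the point is to ship the parsed WSDL with a plugin, it may also help to give the cache an explicit location rather than the default temporary directory. A sketch under that assumption (the directory name is hypothetical, and `location` is the constructor argument suds' file-based caches take for this):
    import os
    from suds.cache import ObjectCache
    from suds.client import Client
    url = 'http://example.com/service?wsdl'  # the WSDL url from your code
    # Keep the pickled object graph in a directory that ships with the plugin.
    cache_dir = os.path.join(os.path.dirname(__file__), 'suds_cache')
    client = Client(url, cache=ObjectCache(location=cache_dir, days=0), cachingpolicy=1)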
|
Error trying to call the backend module in pyusb. "AttributeError: 'module' object has no attribute 'backend'"
Question: I'm fairly new to this so please bear with me!
I recently installed pyusb for this project, which is an attempt at
writing to a [USB LED Message Board](https://www.thinkgeek.com/product/1690/)
and received this error:
`AttributeError: 'module' object has no attribute 'backend'`
I don't know why this is; I checked the pyusb module files, and there is clearly
a folder named "backend" containing the correct files.
Here's all of my code:
import usb.core
import usb.util
import sys
backend = usb.backend.libusb01.get_backend(find_library=lambda C: "Users\nabakin\Desktop\libusb-win32-bin-1.2.6.0\lib\msvc_x64")
#LED Display Message device identify
MessageDevice = usb.core.find(idVendor=0x1D34, idProduct=0x0013, backend=backend)
if MessageDevice is None:
raise ValueError('LED Message Display Device could not be found.')
MessageDevice.set_configuration()
# get an endpoint instance
cfg = MessageDevice.get_active_configuration()
interface_number = cfg[(0,0)].bInterfaceNumber
print interface_number
alternate_settting = usb.control.get_interface(interface_number)
intf = usb.util.find_descriptor(
cfg, bInterfaceNumber = interface_number,
bAlternateSetting = alternate_setting
)
ep = usb.util.find_descriptor(
intf,
# match the first OUT endpoint
custom_match = \
lambda e: \
usb.util.endpoint_direction(e.bEndpointAddress) == \
usb.util.ENDPOINT_OUT
)
assert ep is not None
# write the data
ep.write('\x00\x06\xFE\xBA\xAF\xFF\xFF\xFF')
Code to focus on:
backend = usb.backend.libusb01.get_backend(find_library=lambda C: "Users\nabakin\Desktop\libusb-win32-bin-1.2.6.0\lib\msvc_x64")
Also I've noticed in other code people don't have the backend at all. But when
I try to remove the backend part of my code it displays:
MessageDevice = usb.core.find(idVendor=0x1D34, idProduct=0x0013)
File "C:\Python27\lib\site-packages\usb\core.py", line 846, in find
raise ValueError('No backend available')
ValueError: No backend available
Some extra info:
* Windows 8 64bit
* Python 2.7
* pyusb-1.0.0a2
Thanks guys, I really appreciate what you do here
Answer: I know this question is 4 months old, but in case it helps I think you're
missing an import statement:
import usb.backend.libusb1
See <https://github.com/walac/pyusb/blob/master/docs/tutorial.rst#specifying-
libraries-by-hand> for more details.
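Putting the two together, a sketch that mirrors the call from the question with the missing import added — note that the backend module names changed between pyusb releases (`libusb01`/`libusb10` in the 1.0.0a2 alpha you have, `libusb0`/`libusb1` later), and the DLL path is a placeholder for wherever your libusb binary actually lives. Using a raw string also keeps the Windows backslashes from being read as escape sequences (the `\n` in your original path is a newline):
    import usb.core
    import usb.util
    import usb.backend.libusb01  # on newer pyusb: usb.backend.libusb0 / libusb1
    backend = usb.backend.libusb01.get_backend(
        find_library=lambda name: r"C:\path\to\libusb-win32\bin\amd64\libusb0.dll")
    MessageDevice = usb.core.find(idVendor=0x1D34, idProduct=0x0013, backend=backend)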
|
Writing multiple Python dictionaries to csv file
Question: Thanks to this other thread, I've successfully written my dictionary to a csv
as a beginner using Python: [Python: Writing a dictionary to a csv file with
one line for every 'key:
value'](http://stackoverflow.com/questions/8685809/python-writing-a-
dictionary-to-a-csv-file-with-one-line-for-every-key-value)
dict1 = {0 : 24.7548, 1: 34.2422, 2: 19.3290}
csv looks like this:
0 24.7548
1 34.2422
2 19.3290
Now, I'm wondering what would be the best approach to organize several
dictionaries with the same keys. I'm looking to have the keys as a first
column, then the dict values in columns after that, all with a first row to
distinguish the columns by dictionary names.
Sure, there are a lot of threads trying to do similar things, such as:
[Trouble writing a dictionary to csv with keys as headers and values as
columns](http://stackoverflow.com/questions/15440970/trouble-writing-a-
dictionary-to-csv-with-keys-as-headers-and-values-as-columns?rq=1) , but don't
have my data structured in the same way (yet…). Maybe the dictionaries must be
merged first.
dict2 = {0 : 13.422, 1 : 9.2308, 2 : 20.132}
dict3 = {0 : 32.2422, 1 : 23.342, 2 : 32.424}
My ideal output:
ID dict1 dict2 dict3
0 24.7548 13.422 32.2422
1 34.2422 9.2308 23.342
2 19.3290 20.132 32.424
I'm not sure, yet, how the column name 'ID' for the key names will work its way
in there. Any help would be appreciated. Mahalo in advance!
Answer: Use `defaultdict(list)`
from collections import defaultdict
merged_dict = defaultdict(list)
dict_list = [dict1, dict2, dict3]
for dict in dict_list:
for k, v in dict.items():
merged_dict[k].append(v)
This is what you get:
{0: [24.7548, 13.422, 32.2422], 1: [34.2422, 9.2308, 23.342], 2: [19.329, 20.132, 32.424]})
Then write the `merged_dict` to a csv file as you did previously for a single
dict. This time the `writerow` method of the `csv` module will be helpful.
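A minimal sketch of that last step (the column names `ID`, `dict1`, `dict2`, `dict3` follow your desired output; the keys are sorted only to get a stable row order):
    import csv
    with open('merged.csv', 'wb') as f:  # 'wb' for the csv module on Python 2
        writer = csv.writer(f)
        writer.writerow(['ID', 'dict1', 'dict2', 'dict3'])
        for key in sorted(merged_dict):
            writer.writerow([key] + merged_dict[key])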
|
Solving 'Cookies must be enabled to use GitHub' using GAE/Webapp2/Urllib2/Python
Question: Despite looking through the API documentation, I couldn't find anything
explaining why GitHub needs cookies enabled, or how to go about it. I may have
missed it, though.
I'd like to use the native Webapp2 framework on GAE in Python with Urllib2,
and stay away from high-level libraries so that I can learn this from the
inside out.
Snippet from my code:
# Get user name
fields = {
"user" : username,
"access_token" : access_token
}
url = 'https://github.com/users/'
data = urllib.urlencode(fields)
result = urlfetch.fetch(url=url,
payload=data,
method=urlfetch.POST
)
username = result.content
`result.content` returns:
Cookies must be enabled to use GitHub.
I tried putting the following
([ref](http://stackoverflow.com/questions/525773/accept-cookies-in-python)) at
the top of my file but it didn't work:
import cookielib
jar = cookielib.FileCookieJar("cookies")
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))
Answer: It seems to be related to the api endpoint. From the official doc: `All API
access is over HTTPS, and accessed from the api.github.com domain (or through
yourdomain.com/api/v3/ for enterprise). All data is sent and received as
JSON.`
You get an error about cookies because you're calling the GitHub website which
requires a bunch of stuff to work like cookies and javascript. That's why you
need a specific endpoint for the api. The following code sent me back a HTTP
200, note that I'm using the `requests` library to do HTTP call but you can
use whichever you like.
>>> import urllib
>>> import requests
>>> url = "https://api.github.com"
>>> fields = {"user": "Ketouem"}
>>> string_query = urllib.urlencode(fields)
>>> response = requests.get(url + '?' + string_query)
>>> print response.status_code
200
>>> print response.content
'{"current_user_url":"https://api.github.com/user","authorizations_url":"https://api.github.com/authorizations","code_search_url":"https://api.github.com/search/code?q={query}{&page,per_page,sort,order}","emails_url":"https://api.github.com/user/emails","emojis_url":"https://api.github.com/emojis","events_url":"https://api.github.com/events","feeds_url":"https://api.github.com/feeds","following_url":"https://api.github.com/user/following{/target}","gists_url":"https://api.github.com/gists{/gist_id}","hub_url":"https://api.github.com/hub","issue_search_url":"https://api.github.com/search/issues?q={query}{&page,per_page,sort,order}","issues_url":"https://api.github.com/issues","keys_url":"https://api.github.com/user/keys","notifications_url":"https://api.github.com/notifications","organization_repositories_url":"https://api.github.com/orgs/{org}/repos/{?type,page,per_page,sort}","organization_url":"https://api.github.com/orgs/{org}","public_gists_url":"https://api.github.com/gists/public","rate_limit_url":"https://api.github.com/rate_limit","repository_url":"https://api.github.com/repos/{owner}/{repo}","repository_search_url":"https://api.github.com/search/repositories?q={query}{&page,per_page,sort,order}","current_user_repositories_url":"https://api.github.com/user/repos{?type,page,per_page,sort}","starred_url":"https://api.github.com/user/starred{/owner}{/repo}","starred_gists_url":"https://api.github.com/gists/starred","team_url":"https://api.github.com/teams","user_url":"https://api.github.com/users/{user}","user_organizations_url":"https://api.github.com/user/orgs","user_repositories_url":"https://api.github.com/users/{user}/repos{?type,page,per_page,sort}","user_search_url":"https://api.github.com/search/users?q={query}{&page,per_page,sort,order}"}'
|
./xx.py: line 1: import: command not found
Question: I am trying to use this [Python urllib2 Basic Auth
Problem](http://stackoverflow.com/questions/2407126/python-urllib2-basic-auth-
problem) bit of code to download a webpage content from an URL which requires
authentication. The code I am trying is:
import urllib2, base64
request = urllib2.Request("http://api.foursquare.com/v1/user")
base64string = base64.encodestring('%s:%s' % (username, password)).replace('\n', '')
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib2.urlopen(request)
It's showing me:
./xx.py: line 1: import: command not found
./xx.py: line 3: syntax error near unexpected token `('
./xx.py: line 3: `request = urllib2.Request("http://api.foursquare.com/v1/user")'
I am wondering what I am doing wrong? I am using `Python 2.7.5`. How can I
download file contents from a URL which requires authentication?
Answer: The first problem is not authentication at all. Your `import` line is being
executed by the shell rather than by Python, which is why it fails. So try writing this on the first line:
#!/usr/bin/python
and for the time being run using
python xx.py
One more thing to watch for: `username` and `password` are never defined in your
snippet, and using an undefined name raises a `NameError`. For example:
>>> abc = "Hei Buddy"
>>> print "%s" %abc
Hei Buddy
>>>
>>> print "%s" %xyz
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
print "%s" %xyz
NameError: name 'xyz' is not defined
At first, I initialized the abc variable and it works fine. On the other hand, xyz
doesn't work because it is not initialized!
|
Find the nth lucky number generated by a sieve in Python
Question: I'm trying to make a program in Python which will generate the nth lucky
number according to the [lucky number
sieve](http://en.wikipedia.org/wiki/Lucky_number). I'm fairly new to Python so
I don't know how to do all that much yet. So far I've figured out how to make
a function which determines all lucky numbers below a specified number:
def lucky(number):
l = range(1, number + 1, 2)
i = 1
while i < len(l):
del l[l[i] - 1::l[i]]
i += 1
return l
Is there a way to modify this so that I can instead find the nth lucky number?
I thought about increasing the specified number gradually until a list of the
appropriate length to find the required lucky number was created, but that
seems like a really inefficient way of doing it.
**Edit:** I came up with this, but is there a better way?
def lucky(number):
f = 2
n = number * f
while True:
l = range(1, n + 1, 2)
i = 1
while i < len(l):
del l[l[i] - 1::l[i]]
i += 1
if len(l) >= number:
return l[number - 1]
f += 1
n = number * f
Answer: > I came up with this, but is there a better way?
Truth is, there will _always_ be a better way, the remaining question being:
is it _good enough_ for your need?
One possible improvement would be to turn all this into a generator function.
That way, you would only compute new values as they are consumed. I came up
with this version, which I only validated up to about 60 terms:
import itertools
def _idx_after_removal(removed_indices, value):
for removed in removed_indices:
value -= value / removed
return value
def _should_be_excluded(removed_indices, value):
for j in range(len(removed_indices) - 1):
value_idx = _idx_after_removal(removed_indices[:j + 1], value)
if value_idx % removed_indices[j + 1] == 0:
return True
return False
def lucky():
yield 1
removed_indices = [2]
for i in itertools.count(3, 2):
if not _should_be_excluded(removed_indices, i):
yield i
removed_indices.append(i)
removed_indices = list(set(removed_indices))
removed_indices.sort()
If you want to extract for example the 100th term from this generator, you can
use [itertools nth
recipe](http://docs.python.org/2/library/itertools.html#recipes):
def nth(iterable, n, default=None):
"Returns the nth item or a default value"
return next(itertools.islice(iterable, n, None), default)
print nth(lucky(), 100)
I hope this works, and there's without any doubt more room for code
improvement (but as stated previously, there's _always_ room for
improvement!).
|
Why does my WSGI app always get URL decoded path in environ['PATH_INFO']?
Question: I have a simple bare WSGI application:
def application(environ, start_response):
start_response('200 OK', [('Content-Type','text/html')])
print('PATH_INFO:', environ['PATH_INFO'])
return [b'<p>Hello World</p>']
if __name__ == '__main__':
from wsgiref import simple_server
server = simple_server.make_server('0.0.0.0', 8080, application)
server.serve_forever()
I make two requests:
C:\>curl "http://localhost:8080/<foo>"
<p>Hello World</p>
C:\>curl "http://localhost:8080/%3Cfoo%3E"
<p>Hello World</p>
I get this output:
C:\code>python foo.py
PATH_INFO: /<foo>
127.0.0.1 - - [09/Mar/2014 13:48:39] "GET /<foo> HTTP/1.1" 200 18
PATH_INFO: /<foo>
127.0.0.1 - - [09/Mar/2014 13:48:47] "GET /%3Cfoo%3E HTTP/1.1" 200 18
See how my application gets the URL-decoded path `/<foo>` even when the client
requests `/%3Cfoo%3E`.
It shows that wsgiref.simple_server ensures that my application always gets
the URL-decoded path in `environ['PATH_INFO']`.
But I can't find this behavior documented anywhere in PEP-3333. Can you please
point me to an official documentation that documents this behavior?
Answer: The value of REQUEST_URI from the actual HTTP request line, if the server
makes it available, would be:
REQUEST_URI: '/%3Cfoo%3E'
This is probably the case even if you used:
curl "http://localhost:8080/<foo>"
because curl would encode the URL before sending to use the % escapes.
REQUEST_URI is not, I believe, covered by any RFC, but it is a variable provided by
many servers. You cannot rely on its presence though, so don't write your WSGI
application to depend on it existing.
The web server will decode the % escapes in REQUEST_URI before processing it.
The result which will end up in PATH_INFO will thus always be:
PATH_INFO: '/<foo>'
The decoding is covered by the CGI and related RFCs that WSGI builds on.
See for example:
* <http://www.ietf.org/rfc/rfc3875>
* <http://www.ietf.org/rfc/rfc2396>
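If you do want the raw, still-encoded path, a defensive sketch is to look for the non-standard keys first and fall back to re-encoding `PATH_INFO`; which extra key exists (if any) is server-specific, as noted above:
    from urllib.parse import quote
    def raw_path(environ):
        # Server-specific extras first (e.g. mod_wsgi/uwsgi expose REQUEST_URI,
        # gunicorn exposes RAW_URI); otherwise rebuild an encoded form of PATH_INFO.
        return (environ.get('REQUEST_URI')
                or environ.get('RAW_URI')
                or quote(environ.get('PATH_INFO', '')))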
|
Set callback along with its parameters and invoke the callback later
Question: I have a simple class `Foo` that allows the user of this class to set a
callback and later run it three times.
This is how I am solving the problem.
# API code
class Foo:
def set_handler(self, callback, *args, **kwargs):
self.callback = callback
self.args = args
self.kwargs = kwargs
def run_three_times(self):
for i in range(3):
                self.callback(*self.args, **self.kwargs)
# User of the API code
if __name__ == '__main__':
def myfunc(a, b, c='foo', d='bar'):
print(a, b, c, d)
foo = Foo()
foo.set_handler(myfunc, 'arg1', 'arg2', c='arg3', d='arg4')
foo.run_three_times()
So you see that in the `foo.set_handler` call I am passing the callback along
with its parameters. When `foo.run_three_times` is called, this method
carefully invokes the callback with the parameters I had passed earlier.
Is this the right way to do this thing? Isn't it possible to pass
`myfunc('arg1', 'arg2', c='arg3', d='arg4')` in a more direct manner?
Something like passing only one object that encapsulates the callback along
with its arguments as a single function?
This is what I am looking for expressed in Python-pseudocode.
class Foo:
def set_handler(self, callback):
self.callback = callback
def run_three_times(self):
for i in range(3):
                self.callback()
if __name__ == '__main__':
def myfunc(a, b, c='foo', d='bar'):
print(a, b, c, d)
foo = Foo()
mycallback = reference to myfunc('arg1', 'arg2', c='arg3', d='arg4')
foo.set_handler(mycallback)
foo.run_three_times()
Answer: To flesh out @doukremt's comment, here is an example using
[`functools.partial`](http://docs.python.org/2/library/functools.html#functools.partial).
>>> from functools import partial
>>> my_list = partial(list, 'Hello world!')
>>> my_list
<functools.partial object at 0x10ce98838>
>>> my_list()
['H', 'e', 'l', 'l', 'o', ' ', 'w', 'o', 'r', 'l', 'd', '!']
See the documentation link above for more information.
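Applied to the question, the pseudocode line becomes the following (a sketch using the simplified `Foo` whose `set_handler` takes only the callback):
    from functools import partial
    foo = Foo()
    mycallback = partial(myfunc, 'arg1', 'arg2', c='arg3', d='arg4')
    foo.set_handler(mycallback)
    foo.run_three_times()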
|
FileNotFoundError: [Errno 2] No such file or directory
Question: I am trying to open a CSV file but for some reason python cannot locate it.
Here is my code (it's just a simple code but I cannot solve the problem):
import csv
with open('address.csv','r') as f:
reader = csv.reader(f)
for row in reader:
print row
Answer: You are using a relative path, which means that the program looks for the file
in the working directory. The error is telling you that there is no file of
that name in the working directory.
Try using the exact, or absolute, path.
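For example, a sketch that builds the path relative to the script itself instead of the working directory (assuming `address.csv` sits next to the script):
    import csv
    import os
    path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'address.csv')
    with open(path, 'r') as f:
        for row in csv.reader(f):
            print(row)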
|
NO video file was saved by using Python and OpenCV on my Raspberry PI
Question: I have two pieces of code. Here is the first one. It was mainly copied from
[save a video section](http://opencv-python-
tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#display-
video) on the OpenCV-Python tutorial website, but I modified it a little bit.
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
cap.set(7,200)
out = cv2.VideoWriter('output.avi',cv2.cv.CV_FOURCC('X','V','I','D'), 20.0, (640,480))
while(cap.isOpened()):
ret, frame = cap.read()
if ret==True:
out.write(frame)
cv2.imshow('frame',frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
else:
break
cap.release()
cv2.destroyAllWindows()
Second one is here:
import cv
cv.NamedWindow('camera',1)
cap = cv.CaptureFromCAM(0)
fps = 20
fourcc = cv.CV_FOURCC('X','V','I','D')
cv.SetCaptureProperty(cap,cv.CV_CAP_PROP_FRAME_COUNT,200)
out = cv.CreateVideoWriter('output.avi',fourcc,fps,(640,480))
    while True:
img = cv.QueryFrame(out,img)
cv.WriteFrame(out,img)
cv.ShowImage('camera',img)
if cv.WaitKey(1) & 0xFF == ord('q'):
break
cv.DestroyAllWindows()
Neither of them manages to save a video file or destroy the window at the end.
No errors occurred in the shell after running the code. I am using Python 2.7.6 and
OpenCV 2.3.1. Can somebody help me? Thanks a lot.
PS: I am not sure whether my method of setting the frame count is correct or not.
Answer: There may be several reasons. Check the following (a minimal sketch of both changes follows the list):
* Check that you can encode with XVID, maybe try with MJPEG first.
* Set width and height of your input video by `cap.set(3,640)` and `cap.set(4,480)`
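A minimal sketch of both changes applied to the first (cv2) version — whether `'MJPG'` actually encodes still depends on the codecs installed on your Pi:
    import cv2
    cap = cv2.VideoCapture(0)
    cap.set(3, 640)   # CV_CAP_PROP_FRAME_WIDTH
    cap.set(4, 480)   # CV_CAP_PROP_FRAME_HEIGHT
    fourcc = cv2.cv.CV_FOURCC('M', 'J', 'P', 'G')
    out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))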
|
PyEval_GetLocals returns globals?
Question: I am trying to access the python locals from the constructor of a C++ class
exported with boost.python, but PyEval_GetLocals() seems to return the global
instead of local dict. An example: in C++ I do
class X {
public:
X() {
boost::python::object locals(boost::python::borrowed(PyEval_GetLocals()));
locals["xyz"]=42
}
};
BOOST_PYTHON_MODULE(test) {
class_<X>("X", init<>());
}
If I now do in Python
x = X()
print(xyz)
I get '42' as output (as expected). However, the same happens with
def fun():
x = X()
print(xyz)
which also prints '42', despite the fact that 'fun()' has created a new scope.
I would have expected the 'xyz' name to have gone out of scope again after
fun() exits, and thus be left with an undefined 'xyz' by the time I reach the
print statement.
What am I doing wrong? Is there any way to get access to the local names from
within a C++ object or function?
Answer: I think the testcase may be resulting in a false positive. Is it possible that
you forgot to `del` the `xyz` variable prior to calling `fun()`?
Defining a function creates a variable local to the current scope that refers
to the function object. For example:
def fun():
x = X()
Creates a `function` object that is referenced by the `fun` variable within
the current scope. If the function is invoked, then (by default) a new local
scope is created, where in the object returned from `X()` will be referenced
by `x` within the local scope of the function and not within the caller's
frame's `locals()`.
* * *
Here is an example based on the original code:
#include <boost/python.hpp>
/// @brief Mockup types.
struct X
{
X()
{
// Borrow a reference from the locals dictionary to create a handle.
// If PyEval_GetLocals() returns NULL, then Boost.Python will throw.
namespace python = boost::python;
python::object locals(python::borrowed(PyEval_GetLocals()));
// Inject a reference to the int(42) object as 'xyz' into the
// frame's local variables.
locals["xyz"] = 42;
}
};
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
python::class_<X>("X", python::init<>());
}
Interactive usage that asserts visibility:
>>> import example
>>> def fun():
... assert('xyz' not in locals())
... x = example.X()
... assert('xyz' in locals())
... assert('xyz' not in globals())
...
>>> assert('xyz' not in globals())
>>> fun()
>>> assert('xyz' not in globals())
>>> x = example.X()
>>> assert('xyz' in globals())
>>> del xyz
>>> fun()
>>> assert('xyz' not in globals())
* * *
For completeness, a
[`FuncionType`](http://docs.python.org/2/library/types.html#types.FunctionType)
can be constructed with a
[`CodeType`](http://docs.python.org/2/library/types.html#types.CodeType) whose
[`co_flags`](http://docs.python.org/2/library/inspect.html#types-and-members)
does not have the `newlocals` flag set, causing the frame used for a
function's invocation to have its `locals()` return the same as `globals()`.
Here is an interactive usage example demonstrating this:
>>> def fun():
... x = 42
... print "local id in fun:", id(locals())
...
>>> import types
>>> def no_locals(fn):
... func_code = fn.func_code
... return types.FunctionType(
... types.CodeType(
... func_code.co_argcount,
... func_code.co_nlocals,
... func_code.co_stacksize,
... func_code.co_flags & ~2, # disable newlocals
... func_code.co_code,
... func_code.co_consts,
... func_code.co_names,
... func_code.co_varnames,
... func_code.co_filename,
... func_code.co_name,
... func_code.co_firstlineno,
... func_code.co_lnotab),
... globals())
...
>>> id(globals())
3075430164L
>>> assert('x' not in locals())
>>> fun()
local id in fun: 3074819588
>>> assert('x' not in locals())
>>> fun = no_locals(fun) # disable newlocals flag for fun
>>> assert('x' not in locals())
>>> fun()
local id in fun: 3075430164
>>> assert('x' in locals())
>>> x
42
Even after disabling the `newlocals` flag, I had to invoke `locals()` within
`fun()` to observe `x` being inserted into the global symbol table.
|
Is it possible to use cut on a collection of datetimes?
Question: Is it possible to use `pandas.cut` to make bins out of `datetime` stamps?
The following code:
import pandas as pd
import StringIO
contenttext = """Time,Bid
2014-03-05 21:56:05:924300,1.37275
2014-03-05 21:56:05:924351,1.37272
2014-03-05 21:56:06:421906,1.37275
2014-03-05 21:56:06:421950,1.37272
2014-03-05 21:56:06:920539,1.37275
2014-03-05 21:56:06:920580,1.37272
2014-03-05 21:56:09:071981,1.37275
2014-03-05 21:56:09:072019,1.37272"""
content = StringIO.StringIO(contenttext)
df = pd.read_csv(content, header=0)
df['Time'] = pd.to_datetime(df['Time'], format='%Y-%m-%d %H:%M:%S:%f')
pd.cut(df['Time'], 5)
Throws the following error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-f5387a84c335> in <module>()
16 df['Time'] = pd.to_datetime(df['Time'], format='%Y-%m-%d %H:%M:%S:%f')
17
---> 18 pd.cut(df['Time'], 5)
/home/???????/sites/varsite/venv/local/lib/python2.7/site-packages/pandas/tools/tile.pyc in cut(x, bins, right, labels, retbins, precision, include_lowest)
80 else:
81 rng = (nanops.nanmin(x), nanops.nanmax(x))
---> 82 mn, mx = [mi + 0.0 for mi in rng]
83
84 if mn == mx: # adjust end points before binning
TypeError: unsupported operand type(s) for +: 'Timestamp' and 'float'
Answer: Here is my work-around. You might need to change the code slightly to suit
your precision needs. I use date as an example below:
# map dates to timedelta
today=dt.date.today()
# x below is a timedelta,
# use x.value below if you need more precision
df['days']=map(lambda x : x.days, df.Time - today)
pd.cut(df.days, bins=5)
Effectively you turn `datetime` or `date` into a numerical distance measure,
then cut/qcut it.
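Another variant of the same idea, offered only as a sketch: cast the datetime64 column to its integer (nanosecond) representation and cut that. It avoids picking a reference date, at the cost of less readable bin edges:
    import numpy as np
    # datetime64[ns] -> int64 nanoseconds since the epoch, which pd.cut can bin
    bins = pd.cut(df['Time'].astype(np.int64), 5)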
|
Trying to run GAE python - not sure if imports and app.yaml is configured correctly
Question: I'm trying to deploy the following python code named image-getter.py in GAE:
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext import os
from google.appengine.ext.webapp.util import run_wsgi_app
#the addimage endpoint
class AddImage(webapp.RequestHandler):
def post(self):
image = self.request.get('image')
i = Image()
i.picture = db.Blob(image)
i.put()
self.response.out.write('done');
#the Image object:
class Image(db.Model):
picture = db.BlobProperty();
#to get the image : /getimage?key=sdfsadfsf...
class GetImage(webapp.RequestHandler):
def get(self):
images_query = Image.get(self.request.get('key'))
if (images_query and images_query.picture):
self.response.headers['Content-Type'] = "image/jpeg"
self.response.out.write(images_query.picture)
#to draw the images out to the main page:
class MainPage(webapp.RequestHandler):
def get(self):
images = db.Query(Image)
keys = [];
for image in images:
keys.append(str(image.key()))
template_values = {'images' : keys}
path = os.path.join(os.path.dirname(__file__), 'index.html')
self.response.out.write(template.render(path, template_values))
def main():
app = webapp.WSGIApplication(
[('/', MainPage),
], debug=True)
The above code uses the os library, but I thought you weren't allowed to use it
in GAE.
My app.yaml file looks like:
application: myapp
version: 1
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /
script: image-getter.app
libraries:
The html, index.html file looks like:
<div>
{% for i in images %}
<img src="/getimage?key={{i}}" />
{% endfor %}
</div>
I can't seem to get the app to run; I get "Error: Server Error," which isn't
awfully helpful.
Thanks!
Answer: There is no image-getter.app in your image-getter.py. Also, there is no routing
in your image-getter.py; check the example here:
<https://developers.google.com/appengine/docs/python/gettingstartedpython27/helloworld>
You need to add something like
app = webapp2.WSGIApplication([
('/', MainPage),
], debug=True)
When you post code, please include all of the import statements; your code also looks
incomplete because it uses the `template` module without importing it.
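A sketch of what that looks like with the handlers from your file (an assumption on my part: the `script:` value has to name an importable Python module, so the hyphen in image-getter.py is likely also a problem — the sketch presumes the file is renamed to something like imagegetter.py and app.yaml points at `imagegetter.app`):
    import webapp2
    app = webapp2.WSGIApplication([
        ('/', MainPage),
        ('/getimage', GetImage),
        ('/addimage', AddImage),
    ], debug=True)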
|
Passing numpy string-format arrays to fortran using f2py
Question: My aim is to print the 2nd string from a python numpy array in fortran, but I
only ever get the first character printed, and it's not necessarily the right
string either.
**Can anyone tell me what the correct way to pass full string arrays to
fortran?**
The code is as follows:
**testpy.py**
import numpy as np
import testa4
strvar = np.asarray(['aa','bb','cc'], dtype = np.dtype('a2'))
testa4.testa4(strvar)
**testa4.f90**
subroutine testa4(strvar)
implicit none
character(len=2), intent(in) :: strvar(3)
!character*2 does not work here - why?
print *, strvar(2)
end subroutine testa4
**Compiled with**
f2py -c -m testa4 testa4.f90
**Output of above code**
c
**Desired output**
bb
Answer: Per the [documentation](http://docs.scipy.org/doc/numpy-dev/f2py/python-
usage.html#string-arguments), f2py likes string arrays to be passed with
dtype='c' (i.e., '|S1'). This gets you part of the way there, although there
are some oddities with array shape going on behind the scenes (e.g., in a lot
of my tests I found that fortran would keep the 2 character length, but
interpret the 6 characters as being indicative of a 2x6 array, so I'd get
random memory back in the output). This (as far as I could tell), requires
that you treat the Fortran array as a 2D character array (as opposed to a 1D
"string" array). Unfortunately, I couldn't get it to take assumed shape and
ended up passing the number of strings in as an argument.
I'm pretty sure I'm missing something fairly obvious, but this should work for
the time being. As to why CHARACTER*2 doesn't work ... I honestly have no
idea.
MODULE char_test
CONTAINS
SUBROUTINE print_strings(strings, n_strs)
IMPLICIT NONE
! Inputs
INTEGER, INTENT(IN) :: n_strs
CHARACTER, INTENT(IN), DIMENSION(2,n_strs) :: strings
!f2py INTEGER, INTENT(IN) :: n_strs
!f2py CHARACTER, INTENT(IN), DIMENSION(2,n_strs) :: strings
! Misc.
INTEGER*4 :: j
DO j=1, n_strs
WRITE(*,*) strings(:,j)
END DO
END SUBROUTINE print_strings
END MODULE char_test
----------------
import numpy as np
import char_test as ct
strings = np.array(['aa', 'bb', 'cc'], dtype='c').T
ct.char_test.print_strings(strings, strings.shape[1])
strings = np.array(['ab', 'cd', 'ef'], dtype='c').T
ct.char_test.print_strings(strings, strings.shape[1])
-->python run_char_test.py
aa
bb
cc
ab
cd
ef
|
What magic does staticmethod() do, so that the static method is always called without the instance parameter?
Question: I am trying to understand how static methods work internally. I know how to
use `@staticmethod` decorator but I will be avoiding its use in this post in
order to dive deeper into how static methods work and ask my questions.
From what I know about Python, if there is a class `A`, then calling `A.foo()`
calls `foo()` with no arguments whereas calling `A().foo()` calls `foo()` with
one argument where that one argument is the instance `A()` itself.
However, in case of static methods, it seems always `foo()` is called with no
arguments whether we call it as `A.foo()` or `A().foo()`.
Proof below:
>>> class A:
... x = 'hi'
... def foo():
... print('hello, world')
... bar = staticmethod(foo)
...
>>> A.bar()
hello, world
>>> A().bar()
hello, world
>>> A.bar
<function A.foo at 0x00000000005927B8>
>>> A().bar
<function A.foo at 0x00000000005927B8>
>>> A.bar(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: foo() takes 0 positional arguments but 1 was given
>>> A().bar(1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: foo() takes 0 positional arguments but 1 was given
So am I right in concluding that the `staticmethod()` function does some magic
such that `foo()` is always called with 0 arguments?
If I were to define my own `staticmethod()` in my own Python code, how would I
do it? Is it even possible to define such a method from our own Python code,
or can such a function be only defined as a builtin?
Answer: It's implemented as a
[descriptor](http://docs.python.org/2/howto/descriptor.html#definition-and-
introduction). For example:
In [1]: class MyStaticMethod(object):
...: def __init__(self, func):
...: self._func = func
...: def __get__(self, inst, cls):
...: return self._func
...:
In [2]: class A(object):
...: @MyStaticMethod
...: def foo():
...: print('Hello, World!')
...:
In [3]: A.foo()
Hello, World!
In [4]: A().foo()
Hello, World!
In the same way you can define `classmethod`, just passing the `cls` to the
original function:
In [5]: from functools import partial
...:
...: class MyClassMethod(object):
...: def __init__(self, func):
...: self._func = func
...: def __get__(self, inst, cls):
...: return partial(self._func, cls)
In [6]: class A(object):
...: @MyClassMethod
...: def foo(cls):
...: print('In class: {}'.format(cls))
...:
In [7]: A.foo()
In class: <class '__main__.A'>
In [8]: A().foo()
In class: <class '__main__.A'>
|
arch -i386 ipython notebook Error
Question: I have installed all the required packages to run the ipython notebook using
macports with the +universal build option. I can run ipython with arch -i386
ipython without a problem. I have successfully opened the notebook using the
64bit build. However, when I try to open the notebook in 32bit mode I get the
following error:
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/zmq/__init__.py", line 52, in <module>
from zmq.utils import initthreads # initialize threads
ImportError: dlopen(/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/zmq/utils/initthreads.so, 2): no suitable image found. Did find:
/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/zmq/utils/initthreads.so: mach-o, but wrong architecture
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/utils/zmqrelated.py", line 35, in check_for_zmq
import zmq
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/zmq/__init__.py", line 54, in <module>
raise ImportError("%s\nAre you trying to `import zmq` from the pyzmq source dir?" % e)
ImportError: dlopen(/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/zmq/utils/initthreads.so, 2): no suitable image found. Did find:
/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/zmq/utils/initthreads.so: mach-o, but wrong architecture
Are you trying to `import zmq` from the pyzmq source dir?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/local/bin/ipython", line 9, in <module>
load_entry_point('ipython==1.2.1', 'console_scripts', 'ipython3')()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/__init__.py", line 118, in start_ipython
return launch_new_instance(argv=argv, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/config/application.py", line 544, in launch_instance
app.initialize(argv)
File "<string>", line 2, in initialize
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/config/application.py", line 89, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/terminal/ipapp.py", line 312, in initialize
super(TerminalIPythonApp, self).initialize(argv)
File "<string>", line 2, in initialize
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/config/application.py", line 89, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/core/application.py", line 373, in initialize
self.parse_command_line(argv)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/terminal/ipapp.py", line 307, in parse_command_line
return super(TerminalIPythonApp, self).parse_command_line(argv)
File "<string>", line 2, in parse_command_line
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/config/application.py", line 89, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/config/application.py", line 474, in parse_command_line
return self.initialize_subcommand(subc, subargv)
File "<string>", line 2, in initialize_subcommand
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/config/application.py", line 89, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/config/application.py", line 405, in initialize_subcommand
subapp = import_item(subapp)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/utils/importstring.py", line 42, in import_item
module = __import__(package, fromlist=[obj])
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/html/notebookapp.py", line 36, in <module>
check_for_zmq('2.1.11', 'IPython.html')
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/IPython/utils/zmqrelated.py", line 37, in check_for_zmq
raise ImportError("%s requires pyzmq >= %s"%(required_by, minimum_version))
Any suggestions would be greatly appreciated!
Answer: If the port that installed
`/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-
packages/zmq/utils/initthreads.so` is installed with the universal variant
(use `port provides $file` to find out which port that is), that's a bug in
said port. Please file a ticket about this.
|
merge values of same key in a list of dictionaries , and compare to another list of dictionaries in python3
Question: **Update:** I noticed that in my main code, when I extract the
values from the list of dictionaries that I get from readExpense.py, I store
them as a set, not as a list of dictionaries.
Now, I know that I store each dictionary in the 'exp' list with these lines of
code:
for e in expenses:
exp.append(e)
However, I only want the keys Amount and Type from those dictionaries, and
not the other entries.
For reference, here is the list of keys in an expense dictionary:
"Date","Description","Type","Check Number","Amount","Balance"
As mentioned before, I only need Type and Amount.
I am trying to make a budget program, So I have this list of dictionaries:
[{'Bills': 30.0}, {'Bills': 101.53}, {'Bills': 60.0}, {'Bills': 52.45}, {'Gas': 51.17}, {500.0: 'Mortgage'}, {'Food': 5.1}]
And I'm trying to compare it to this list of dictionaries:
[{400.0: 'Bills'}, {'Gas': 100.0}, {500.0: 'Mortgage'}, {'Food': 45.0}]
The first list is how much money I spent on different services in a given
month, and what category it was in, and the second dictionary is the max
amount that the budget allows me to spend on said category.
The goal is, in the first dictionary, to combine all the values of the same
key into one key:value pair, then compare it to the second dictionary.
So I should get this list of dictionaries out of the first one:
    [{'Bills': 295.15}, {'Gas': 51.17}, {500.0: 'Mortgage'}, {'Food': 5.1}]
I tried looking at [this
example](http://stackoverflow.com/questions/3421906/how-to-merge-lists-of-
dictionaries "this example") and [this
one](http://stackoverflow.com/questions/38987/how-can-i-merge-union-two-
python-dictionaries-in-a-single-expression), but they are just about merging
the lists of dictionaries together, and not about summing the values of the same key. I
did try the code in the latter, but it only joined the dictionaries together.
I noticed that sum only seems to work with "raw" dictionaries, and not with
lists of dictionaries.
I did try this as a thought experiment:
print(sum(item['amount'] for item in exp))
I know that would sum up all the numbers under amount, rather than return a
number for each category, but I wanted to try it out for the heck of it, to see
if it would lead to a solution. Instead, I got this error in return:
TypeError: 'set' object is not subscriptable
The Counter function also seemed to show promise as a solution when I was
messing around; however, it seems to only work with dictionaries on their
own, and not with lists of dictionaries.
#where exp is the first dictionary that I mentioned
a = Counter(exp)
b = Counter(exp)
c = a + b #I'm aware the math would have be faulty on this, but this was a test run
print (c)
This attempt returned this error:
TypeError: unhashable type: 'set'
Also, is there a way to do it without importing the collections module, using
only what comes with Python?
My code:
from readExpense import *
from budget import *
from collections import *
#Returns the expenses by expenses type
def expensesByType(expenses, budget):
exp = []
expByType = []
bud = []
for e in expenses:
entry = {e['exptype'], e['amount']}
exp.append(entry)
for b in budget:
entry = {b['exptype'], b['maxamnt']}
bud.append(entry)
return expByType;
def Main():
budget = readBudget("budget.txt")
#printBudget(budget)
expenses = readExpenses("expenses.txt")
#printExpenses(expenses)
expByType = expensesByType(expenses, budget)
if __name__ == '__main__':
Main()
And for reference, the code from budget and readexpense respectively.
budget.py
def readBudget(budgetFile):
# Read the file into list lines
f = open(budgetFile)
lines = f.readlines()
f.close()
budget = []
# Parse the lines
for i in range(len(lines)):
list = lines[i].split(",")
exptype = list[0].strip('" \n')
if exptype == "Type":
continue
maxamount = list[1].strip('$" \n\r')
entry = {'exptype':exptype, 'maxamnt':float(maxamount)}
budget.append(entry)
return budget
def printBudget(budget):
print()
print("================= BUDGET ==================")
print("Type".ljust(12), "Max Amount".ljust(12))
total = 0
for b in budget:
print(b['exptype'].ljust(12), str("$%0.2f" %b['maxamnt']).ljust(50))
total = total + b['maxamnt']
print("Total: ", "$%0.2f" % total)
def Main():
budget = readBudget("budget.txt")
printBudget(budget)
if __name__ == '__main__':
Main()
readExpense.py
def readExpenses(file):
#read file into list of lines
#split lines into fields
# for each list create a dictionary
# add dictionary to expense list
#return expenses in a list of dictionary with fields
# date desc, exptype checknm, amnt
f = open(file)
lines=f.readlines()
f.close()
expenses = []
for i in range(len(lines)):
list = lines[i].split(",")
date = list[0].strip('" \n')
if date == "Date":
continue
description = list[1].strip('" \n\r')
exptype= list[2].strip('" \n\r')
checkNum = list[3].strip('" \n\r')
amount = list[4].strip('($)" \n\r')
balance = list[5].strip('" \n\r')
entry ={'date':date, 'description': description, 'exptype':exptype, 'checkNum':checkNum, 'amount':float(amount), 'balance': balance}
expenses.append(entry)
return expenses
def printExpenses(expenses):
#print expenses
print()
print("================= Expenses ==================")
print("Date".ljust(12), "Description".ljust(12), "Type".ljust(12),"Check Number".ljust(12), "Amount".ljust(12), "Balance".ljust(12))
total = 0
for e in expenses:
print(str(e['date']).ljust(12), str(e['description']).ljust(12), str(e['exptype']).ljust(12), str(e['checkNum']).ljust(12), str(e['amount']).ljust(12))
total = total + e['amount']
print()
print("Total: ", "$%0.2f" % total)
def Main():
expenses = readExpenses("expenses.txt")
printExpenses(expenses)
if __name__ == '__main__':
Main()
Answer: Is there a reason you're avoiding creating some objects to manage this? If it
were me, I'd go with objects and do something like the following (this is
completely untested, there may be typos):
#!/usr/bin/env python3
from datetime import datetime # why python guys, do you make me write code like this??
from operator import itemgetter
class BudgetCategory(object):
def __init__(self, name, allowance):
super().__init__()
self.name = name # string naming this category, e.g. 'Food'
self.allowance = allowance # e.g. 400.00 this month for Food
self.expenditures = [] # initially empty list of expenditures you've made
def spend(self, amount, when=None, description=None):
''' Use this to add expenditures to your budget category'''
timeOfExpenditure = datetime.utcnow() if when is None else when #optional argument for time of expenditure
record = (amount, timeOfExpenditure, '' if description is None else description) # a named tuple would be better here...
self.expenditures.append(record) # add to list of expenditures
self.expenditures.sort(key=itemgetter(1)) # keep them sorted by date for the fun of it
# Very tempting to the turn both of the following into @property decorated functions, but let's swallow only so much today, huh?
def totalSpent(self):
return sum(t[0] for t in self.expenditures)
def balance(self):
return self.allowance - self.totalSpent()
Now I can right code that looks like:
budget = BudgetCategory(name='Food', allowance=200)
budget.spend(5)
budget.spend(8)
print('total spent:', budget.totalSpent())
print('left to go:', budget.balance())
This is just a starting point. Now you can add methods that group (and
sum) the expenditures list by description (e.g. "I spent HOW MUCH on Twinkies
last month???"). You can add a method that parses entries from a file, or
emits them to a csv list. You can do some charting based on time.
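If you would rather stay close to your original function and skip `collections` entirely, here is a plain-dict sketch (the key names `exptype`, `amount` and `maxamnt` are taken from your readExpense.py and budget.py):
    def expensesByType(expenses, budget):
        totals = {}
        for e in expenses:
            totals[e['exptype']] = totals.get(e['exptype'], 0.0) + e['amount']
        for b in budget:
            spent = totals.get(b['exptype'], 0.0)
            print("%-12s spent $%.2f of $%.2f" % (b['exptype'], spent, b['maxamnt']))
        return totals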
|