How to read csv file with string but convert to different type and send to an array in python?
Question: I have a csv file containing datetime, number1, number2, number3, number4.
I use the following code to read it, but how do I change the types?
My code:
import csv
import datetime

myarray=([])
filename='Contract.csv'
f=csv.reader(open(filename,'rb'), delimiter=',')
for row in f:
    myarray=array([row for row in f])
print myarray
I get the array looks like: [['2010-05-01 15:20:12 0000' '345' '234' '163'
'120'], ['2010-05-02 15:22:12 0000' '335' '214' '164' '120'], ... ]
I have no idea how to change the first column into datetime and the others
into float. Please help. Thanks
Answer: First, [read this question](http://stackoverflow.com/questions/354038/how-do-
i-check-if-a-string-is-a-number-in-python).
Having said that, I use a function like this one (requires [python-
dateutil](http://labix.org/python-dateutil)) to manage dates too:
from dateutil.parser import parse as date_parser

def _cast_value(self, value):
    tests = (
        int,
        float,
        lambda value: date_parser(value)
    )
    for test in tests:
        try:
            return test(value)
        except ValueError:
            continue
    return value
`dateutil` will handle different kind of date formats for you.
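For completeness, here is a hedged sketch of applying that kind of cast to every cell of the CSV (the `self` parameter is dropped, since the function works fine standalone):

import csv
from dateutil.parser import parse as date_parser

def cast_value(value):
    # Try the narrowest type first; fall back to the raw string.
    for test in (int, float, date_parser):
        try:
            return test(value)
        except ValueError:
            continue
    return value

with open('Contract.csv', 'rb') as f:
    rows = [[cast_value(cell) for cell in row] for row in csv.reader(f)]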
|
Missing Table When Running Django Unittest with Sqlite3
Question: I'm trying to run a unittest with Django 1.3. Normally, I use MySQL as my
database backend, but since this is painfully slow to spin up for a single
unittest, I'm using Sqlite3.
So to switch to Sqlite3 just for my unittests, in my settings.py I have:
import sys

if 'test' in sys.argv:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': '/tmp/database.db',
            'USER': '',
            'PASSWORD': '',
            'HOST': '',
        }
    }
When I run my unittest with `python manage.py test myapp.Test.test_myfunc`, I
get the error:
DatabaseError: no such table: django_content_type
Googling shows there are a
[few](https://code.djangoproject.com/wiki/NewbieMistakes#DjangosaysUnabletoOpenDatabaseFilewhenusingSQLite3)
of
[possible](http://www.pantz.org/software/sqlite/unabletoopendbsqliteerror.html)
[reasons](http://stackoverflow.com/questions/6157152/no-such-table-error-when-
running-a-django-server-from-eclipse) for this
[error](http://stackoverflow.com/questions/4636970/sqlite3-operationalerror-
unable-to-open-database-file), none of which seem applicable to me. I'm not
running Apache, so I don't see how permissions would be an issue. The file
/tmp/database.db is being created, so /tmp is writable. The app
django.contrib.contenttypes is included in my INSTALLED_APPS.
What am I missing?
Edit: I ran into this problem again in Django 1.5, but none of the proposed
solutions work.
Answer: In Django 1.4, 1.5, 1.6, 1.7, or 1.8 it _should_ be sufficient to use:
if 'test' in sys.argv:
    DATABASES['default']['ENGINE'] = 'django.db.backends.sqlite3'
It should not be necessary to override `TEST_NAME`[1], nor to call `syncdb` in
order to run tests. As @osa points out, the default with the SQLite engine is
to create the test database in memory (`TEST_NAME=':memory:'`). Calling
`syncdb` should not be necessary because Django's test framework will do this
automatically via a call to `syncdb` or `migrate`, depending on the Django
version.[2] You can observe this with `manage.py test -v [2|3]`.
Very loosely speaking Django sets up the test environment by:
1. Loading the regular database `NAME` from your `settings.py`
2. Discovering and **constructing** your test classes (`__init__()` is called)
3. Setting the database `NAME` to the value of `TEST_NAME`
4. Running the tests against the database `NAME`
Here's the rub: At step 2, `NAME` is still pointing at your **regular** (non-
test) database. If your tests contain class-level queries or queries in
`__init__()`, they will be run against the regular database which is likely
not what you are expecting. This is identified in [bug
#21143](https://code.djangoproject.com/ticket/21143).
Don't do:
class BadFooTests(TestCase):
    Foo.objects.all().delete()   # <-- class level queries, and
    def __init__(self):
        f = Foo.objects.create() # <-- queries in constructor
        f.save()                 # will run against the production DB
    def test_foo(self):
        # assert stuff
since these will be run against the database specified in `NAME`. If `NAME` at
this stage points to a valid database (e.g. your production database), the
query will run, but may have unintended consequences. If you have overridden
`ENGINE` and/or `NAME` such that it does not point to a pre-existing database,
an exception will be thrown because the test database has yet to be created:
django.db.utils.DatabaseError: no such table: yourapp_foo # Django 1.4
DatabaseError: no such table: yourapp_foo # Django 1.5
OperationalError: no such table: yourapp_foo # Django 1.6+
Instead do:
class GoodFooTests(TestCase):
    def setUp(self):
        f = Foo.objects.create() # <-- will run against the test DB
        f.save()                 #
    def test_foo(self):
        # assert stuff
So, if you are seeing errors, check to see that your tests do not include any
queries that might hit the database outside of your test class method
definitions.
* * *
[1] In Django >= 1.7, `DATABASES[alias]['TEST_NAME']` is
[deprecated](https://docs.djangoproject.com/en/1.7/ref/settings/#std:setting-
OLD_TEST_NAME) in favour of
[`DATABASES[alias]['TEST']['NAME']`](https://docs.djangoproject.com/en/1.7/ref/settings/#std:setting-
DATABASE-TEST)
[2] See the `create_test_db()` method in `db/backends/creation.py`
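As a hedged aside on footnote [1]: in the newer (Django >= 1.7) settings shape, an explicit test database name would look like the line below. With SQLite the in-memory database is already the default, so this is only needed as an override.

DATABASES['default']['TEST'] = {'NAME': '/tmp/test_database.db'}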
|
Python comparing strings to their equivalent integers effeciently
Question: What's the most efficient way to compare two python values, both of which are
probably strings but might be integers? So far I'm using str(x)==str(y), but
that feels inefficient and (more importantly) ugly:
>>> a = 1.0
>>> b = 1
>>> c = '1'
>>> a == b
True
>>> b == c
False # here I wanted this to be true
>>> str(b)==str(c)
True # true, as desired
My actual objects are dictionary values retrieved with get(), and most of them
are strings.
Answer: Test it out. I like using `%timeit` in `ipython`:
In [1]: %timeit str("1") == str(1)
1000000 loops, best of 3: 702 ns per loop
In [2]: %timeit "1" == str(1)
1000000 loops, best of 3: 412 ns per loop
In [3]: %timeit int("1") == 1
1000000 loops, best of 3: 906 ns per loop
Apart from that, though, if you truly don't know what the input type is, there
isn't much you can do about it, unless you want to make assumptions about the
input data. For example, if you assume that _most_ of the inputs are equal
(same type, same value), you could do something like:
if a == b or str(a) == str(b):
    ... they are equal ...
Which _would_ be faster if they are normally the same type and normally
equal... But it will be slower if they aren't normally the same type, or
aren't normally equal.
However, are you sure you can't cast everything to a str/int when they enter
your code?
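If casting at the boundary is an option, a minimal sketch (assuming the values come from a dict via `get()`, as described):

def normalized(value):
    # Coerce once, when the value enters your code, so later
    # comparisons are plain string comparisons.
    return str(value)

data = {'a': 1, 'b': '1'}
if normalized(data.get('a')) == normalized(data.get('b')):
    print 'equal'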
|
Using an MFC .dll file with Python 3.2
Question: I am currently planning to access my MFC Dialog based application's .dll file
using Python. I am new to Python and have the latest version installed,
i.e. 3.2. I have installed PythonWin as well, but am not really sure if
it would be useful or not. I have understood the basics of using Python
with the help of [ctypes](http://docs.python.org/py3k/library/ctypes.html). In
my dll file, I have two functions:
double BoxArea(double L, double H, double W);
double BoxVolume(double L, double H, double W);
and I have used an extern "C" dllexport declaration to expose these from my MFC
dialog application:

extern "C" __declspec(dllexport) void BoxProperties(double Length, double Height,
                                                    double Width, double& Area, double& Volume);
All that works fine when trying to access with another mfc program. Now, I am
trying to access those two functions using Python. Could anyone suggest me how
should I go about and what commands would directly let me access it?
Many thanks in advance.
Answer: I think [this](http://oreilly.com/catalog/pythonwin32/chapter/ch20.html)
should give you some idea on what you are trying to do. Hope this helps.
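Since the answer is only a link, here is a hedged ctypes sketch; the DLL file name `BoxProps.dll` is a placeholder for whatever your MFC project actually builds:

import ctypes

# extern "C" exports default to the cdecl convention, hence CDLL.
box = ctypes.CDLL('BoxProps.dll')

# The double& reference parameters are pointers at the ABI level.
box.BoxProperties.argtypes = [ctypes.c_double] * 3 + [ctypes.POINTER(ctypes.c_double)] * 2
box.BoxProperties.restype = None

area = ctypes.c_double()
volume = ctypes.c_double()
box.BoxProperties(2.0, 3.0, 4.0, ctypes.byref(area), ctypes.byref(volume))
print(area.value, volume.value)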
|
How can I pass file names to external commands executed from Python?
Question: I am trying to execute a command inside a Python script:
import subprocess
output_process = subprocess.Popen(
    "javac -cp C:\Users\MyUsername\Desktop\htmlcleaner-2.2.jar Scrapping_lastfm.java",
    shell=True, stdout=subprocess.PIPE)
But I am getting an error `package org.htmlcleaner does not exist`.
If I run the javac command independently, it executes fine.
My current working directory is `C:\Users\MyUsername`.
Answer: Try
output_process = subprocess.Popen(["javac", "-cp",
"C:\Users\MyUsername\Desktop\htmlcleaner-2.2.jar", "Scrapping_lastfm.java"],
shell=True, stdout=subprocess.PIPE, env={'ENVIRONMENTAL': '/variables/here'})
with whatever java-related environmental variables you have when you run
`javac` normally as items in the `env` dictionary. asgs suggests you need
`CLASSPATH`.
You don't have to split the command up into a list; I just did that to make it
easier to see the whole thing.
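A hedged variant of the same idea that inherits your current environment and only adds `CLASSPATH` (the jar path is taken from the question):

import os
import subprocess

env = os.environ.copy()
env['CLASSPATH'] = r'C:\Users\MyUsername\Desktop\htmlcleaner-2.2.jar'
output_process = subprocess.Popen(['javac', 'Scrapping_lastfm.java'],
                                  stdout=subprocess.PIPE, env=env)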
|
How to count the number of words in a paragraph and exclude some words (from file)?
Question: I've just started to learn Python so my question might be a bit silly. I'm
trying to create a program that would:
- import a text file (got it)
- count the total number of words (got it)
- count the number of words in a specific paragraph, starting with a specific phrase (e.g. "P1") and ending with another participant ("P2"), and exclude these words from my word count. Somehow I ended up with something that counts the number of characters instead :/
- print paragraphs separately (got it)
- exclude the "P1", "P2" etc. tags from my word count.
My text files look like that:
P1: Bla bla bla.
P2: Bla bla bla bla.
P1: Bla bla.
P3: Bla.
I ended up with this code:
text = open (r'C:/data.txt', 'r')
lines = list(text)
text.close()

words_all = 0
for line in lines:
    words_all = words_all + len(line.split())
print 'Total words: ', words_all

words_par = 0
for words_par in lines:
    if words_par.startswith("P1" or "P2" or "P3") & words_par.endswith("P1" or "P2" or "P3"):
        words_par = line.split()
        print len(words_par)
        print words_par.replace('P1', '') #doesn't display it but still counts
    else:
        print 'No words'
Any ideas how to improve it?
Thanks
Answer: You shouldn't bind the result of `open('zery.txt', 'r')` to the identifier
**text**. It is not the text in the file; it is the handler of the file,
described as a "file-like object" in the docs (I never understood what
"file-like object" means, by the way)
.
with open ('C:/data.txt', 'r') as f:
........
........
is better than
f = open ('C:/data.txt', 'r')
......
.....
f.close()
.
You should read the documentation for **split()**, and you'll see that
you can do:
with open ('C:/data.txt', 'r') as f:
text = f.read()
words_all = len(text.split())
print 'Total words: ', words_all
.
If the structure of your text is:
P1: Bla bla bla.
P2: Bla bla bla bla.
P1: Bla bla.
P3: Bla.
then `words_par.endswith("P1" or "P2" or "P3")` is always _False_, hence the
desired splitting isn't performed.
Consequently, **words_par** doesn't become a list, it remains a string, which is
why the characters are counted.
.
Also, your code is certainly wrong.
If the splitting was performed, it would be the last **line** obtained in the
first for-loop, in the beginning of the code, that would be repeatedly
split.
So, instead of
for words_par in lines:
    if words_par.startswith("P1" or "P2" or "P3"):
        words_par = line.split()

it is certainly:

for line in lines:
    if line[0:2] in ("P1", "P2", "P3"):
        words_par = line.split()
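Putting the corrections together, a minimal sketch (assuming the `P1:`/`P2:`/`P3:` layout shown above) that counts words while leaving the speaker tags out of the totals:

with open('C:/data.txt', 'r') as f:
    lines = f.readlines()

total = 0
for line in lines:
    words = line.split()
    # Drop the leading speaker tag ("P1:", "P2:", ...) before counting.
    if words and words[0].rstrip(':') in ('P1', 'P2', 'P3'):
        words = words[1:]
    total += len(words)
print 'Total words without speaker tags:', total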
|
Python one-liner to extract field
Question: Input:
$ ./ffmpeg -i test020.3gp
ffmpeg version UNKNOWN, Copyright (c) 2000-2011 the FFmpeg developers
built on May 5 2011 14:30:25 with gcc 4.4.3
configuration:
libavutil 51. 2. 0 / 51. 2. 0
libavcodec 53. 3. 0 / 53. 3. 0
libavformat 53. 0. 3 / 53. 0. 3
libavdevice 53. 0. 0 / 53. 0. 0
libavfilter 2. 4. 0 / 2. 4. 0
libswscale 0. 14. 0 / 0. 14. 0
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test020.3gp':
Metadata:
major_brand : 3gp4
minor_version : 512
compatible_brands: 3gp4
creation_time : 2004-07-01 09:59:21
Duration: 00:01:02.20, start: 0.000000, bitrate: 284 kb/s
Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16, 96 kb/s
Metadata:
creation_time : 2004-07-01 09:59:21
Stream #0.1(und): Video: mpeg4, yuv420p, 176x120 [PAR 1:1 DAR 22:15], 184 kb/s, 15 fps, 15 tbr, 30k tbn, 15 tbc
Metadata:
creation_time : 2004-07-01 09:59:23
At least one output file must be specified
Let's say I would like to extract the width & height using the following
regexp:
(\d+x\d+)
Using perl, I'd do something like this:
$ ./ffmpeg -i test020.3gp 2>&1 | perl -lane 'print $1 if /(\d+x\d+)/'
176x120
Then I tried to construct a similar python one-liner; it sort of works, but
not perfectly:
$ ./ffmpeg -i test020.3gp 2>&1 | python -c "import sys,re;[sys.stdout.write(str(re.findall(r'(\d+x\d+)', line))) for line in sys.stdin]"
[][][][][][][][][][][][][][][][][][][]['176x120'][][][]
What do a python one-liner looks like that corresponds to the perl one?
Answer: What you want is `re.search` instead of `re.findall`.
This does the trick, even if the one-liner itself is a bit "ugly" (`/tmp/p` is
just the sample data you gave):
% cat /tmp/p 2>&1 | python -c "import re,sys; print re.search(r'(\d+x\d+)', sys.stdin.read()).group()"
176x120
Any reason you're not just using `grep` (`egrep` in this case)?
% cat /tmp/p | egrep -o '[0-9]+x[0-9]+'
176x120
|
introspective code completion with VIM? ... or other lightweight editor with this feature?
Question: I've been all over the web trying to find a way to get VIM to have code
completion similar to PyDev. It doesn't seem like it is possible!
- I have tried to use the omnicompletion suggested at this link: <http://blog.dispatched.ch/2009/05/24/vim-as-python-ide/>.
- I have tried several addons to alleviate the problem; none work.
The "omnicomplete" functionality is NOT what I am looking for. It just takes
all the words in the file you are working on and uses those to try and
complete what I am doing. For example if I wrote:
import numpy
a_single_array = range(100)
np.a  # [then I hit ctrl+n to code complete]
It would spit out "a_single_array" as a possible completion -- but that is
absurd! That is not a valid completion for "numpy.a ..."
What is the issue here? All the addon would have to do is run a dir(word you
want to find) from the folder you are in and then filter the output! This
cannot be that difficult! (I suppose you would also have to read the file you
are currently editing and filter that as well to take note of name changes...
but that's pretty much it!)
Speaking of how easy it would be... if there isn't anything already made, I
was thinking of writing the script myself! Any guides on how to do THAT?
Answer: No, the omni completion functionality is EXACTLY what you are looking for.
You are using `<C-n>` instead of `<C-x><C-o>`:
* type `<C-n>` & `<C-p>` to complete with words from the buffer (after and before the cursor respectively)
* type `<C-x><C-o>` to complete method/properties names
It's specifically explained in the article you linked:
> In V7, VIM introduced omni completion – given it is configured to recognize
> Python (if not, this feature is only a plugin away) Ctrl+x Ctrl+o opens a
> drop down dialog like any other IDE – even the whole Pydoc gets to be
> displayed in a split window.
|
Redirect subprocess to a variable as a string
Question: > **Possible Duplicate:**
> [Parsing a stdout in
> Python](http://stackoverflow.com/questions/2101426/parsing-a-stdout-in-
> python)
With the following command, it prints '640x360'
>>> command = subprocess.call(['mediainfo', '--Inform=Video;%Width%x%Height%',
...                            '/Users/david/Desktop/1video.mp4'])
640x360
How would I set a variable equal to the string of the output, so I can get
`x='640x360'`? Thank you.
**Update** : answers can be found here: [Parsing a stdout in
Python](http://stackoverflow.com/questions/2101426/parsing-a-stdout-in-
python). This worked for me:
>>> p1 = subprocess.Popen(['mediainfo', '--Inform=Video;%Width%x%Height%',
...                        '/Users/david/Desktop/10stest720p.mov'], stdout=PIPE)
>>> output=p1.communicate()[0].strip('\n')
>>> output
'1280x688'
Answer: If you're using 2.7, you can use subprocess.check_output():
>>> import subprocess
>>> output = subprocess.check_output(['echo', '640x360'])
>>> print output
640x360
If not:
>>> p = subprocess.Popen(['echo', '640x360'], stdout=subprocess.PIPE)
>>> p.communicate()
('640x360\n', None)
|
Why do I have to type ctrl-d twice?
Question: For my own amusement, I've cooked up a python script that allows me to use
python for bash one-liners: supply a python generator expression, and the
script iterates over it. Here's the script:
DEFAULT_MODULES = ['os', 're', 'sys']

_g = {}
for m in DEFAULT_MODULES:
    _g[m] = __import__(m)

import sys
sys.stdout.writelines(eval(sys.argv[1], _g))
And here's how you might use it.
$ groups | python pype.py '(l.upper() for l in sys.stdin)'
DBORNSIDE
$
For the intended use, it works perfectly!
But when I don't feed it with pipe and just invoke it directly, for instance:
[emphasis added to show what I type]
$ python pype.py '("%r\n" % (l,) for l in sys.stdin)'
**foo Enter
barEnter
bazEnter
Ctrl DCtrl D**'foo\n'
'bar\n'
'baz\n'
$
In order to stop accepting input and produce any output, I have to type either
`Enter` \- `Ctrl D` \- `Ctrl D` or `Ctrl D` \- `Ctrl D` \- `Ctrl D`. This
violates my expectations, that each line should be processed as entered, and
that typing `Ctrl D` at any time will end the script. Where is the gap in my
understanding?
EDIT: I've updated the interactive example to show that I'm not seeing the
quoting wim describes in his answer, and some more examples too.
$ python pype.py '("%r\n" % (l,) for l in sys.stdin)'
**foo Ctrl DCtrl DbarEnter
Ctrl DCtrl D**'foobar\n'
$ python pype.py '("%r\n" % (l,) for l in sys.stdin)'
**foo Ctrl VCtrl D**^D**bar Enter
Ctrl DCtrl D**'foo\x04bar\n'
$
Answer: `Ctrl-D` is recognized not necessarily as EOF, but as "terminate current
`read()` call".
If you have an empty line (or just pressed `Ctrl-D`) and press `Ctrl-D`, your
`read()` terminates immediately and returns 0 read bytes. And this is a sign
for EOF.
If you have data in a line and press `Ctrl-D`, your `read()` terminates with
whatever there has been typed, of course without a terminating newline
(`'\n'`).
So if you have input data, you press `Ctrl-D` twice on a non-empty line or
once on an empty one, i.e. with `Enter` before.
This all holds for the normal OS interface, accessible from Python via
`os.read()`.
Python file objects, and also file iterators, treat the first EOF recognized
as termination for the current `read()` call, as they suppose there is nothing
any longer. A next `read()` call tries again and needs another `Ctrl-D` in
order to really return 0 bytes. The reason is that a file object `read()`
always tries to return as many bytes as requested, and tries to fill up if an OS
`read()` returns less than requested.
As opposite to `file.readline()`, `iter(file)` uses the internal `read()`
functions to read and thus always has this special requirement of the extra
`Ctrl-D`.
I always use `iter(file.readline, '')` to read line-wise from a file.
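Applied to the one-liner script above, a hedged sketch of that workaround:

import sys

# iter() with a sentinel calls readline() once per line, so a single
# Ctrl-D on an empty line ends the loop immediately.
for line in iter(sys.stdin.readline, ''):
    sys.stdout.write(line)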
|
is there a way to do range() with ast.literal_eval?
Question: Or, another way to ask, I suppose: is there a literal which will literal_eval
to the equivalent of the range function (without passing the entire expanded
list)?
the following
import ast
ast.literal_eval("range(0,3)")
ast.literal_eval("[0, 1, 2, 3][0:3]")
results in:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/ast.py", line 80, in literal_eval
return _convert(node_or_string)
File "/usr/lib64/python2.7/ast.py", line 79, in _convert
raise ValueError('malformed string')
ValueError: malformed string
while this is possible it converts the entire array to a string then back
which is not ideal. I was hoping for something with [0:3] syntax without the
initial list.
>>> ast.literal_eval(str(range(0,3)))
[0, 1, 2]
Answer: <http://docs.python.org/library/ast.html#ast.literal_eval>
> `ast.literal_eval(node_or_string)`
>
> Safely evaluate an expression node or a string containing a Python
> expression. **The string or node provided may only consist of the following
> Python literal structures: strings, numbers, tuples, lists, dicts, booleans,
> and None.**
A range (other than an explicitly specified list) is not any of those, so no.
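One hedged workaround, if the input format can be pinned down: whitelist a strict `range(start, stop)` pattern and build the list yourself, keeping `eval` out of the picture entirely (the function name `safe_range` is made up for illustration):

import re

def safe_range(s):
    m = re.match(r'range\(\s*(-?\d+)\s*,\s*(-?\d+)\s*\)$', s.strip())
    if m is None:
        raise ValueError('not a range literal: %r' % s)
    return range(int(m.group(1)), int(m.group(2)))

print safe_range('range(0,3)')  # [0, 1, 2]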
|
Extending my application - Pyramid/Pylons/Python
Question: Simple question about extending my application
Let's say I have a "Main Application", and in this application I have the
following in the `__init__.py` file:
config.add_route('image_upload', '/admin/image_upload/',
                 view='mainapp.views.uploader',
                 view_renderer='/site/upload.mako')
and in the views.py I have:
def uploader(request):
    # some code goes here
    return {'xyz':xyz}
Now suppose I create a new application and I want to extend it to use the above
view and route:
In the new application's `__init__.py` file I would manually copy over the
config.add_route code:
config.add_route('image_upload', '/admin/image_upload/',
                 view='mainapp.views.uploader',
                 view_renderer='mainapp:templates/site/upload.mako'
                 )
And is that all I would need to do? From this, would my application be able to
use the view and template from the main application, or am I missing
something else?
Thanks for reading!
Answer: You don't have to copy your code to do this. Use the
[Configurator.include](https://pylonsproject.org/projects/pyramid/dev/api/config.html)
method to include your "Main Application" configuration in your new
application. The documentation explains this pretty well both
[here](https://pylonsproject.org/projects/pyramid/dev/narr/advconfig.html?highlight=extending#including-
configuration-from-external-sources) and
[here](https://pylonsproject.org/projects/pyramid/dev/api/config.html), but
essentially, if you declare your main app's configuration inside a
callable:
def main_app_config(config):
    config.add_route('image_upload', '/admin/image_upload/',
                     view='mainapp.views.uploader',
                     view_renderer='/site/upload.mako')
Then you can include your main app in your new app's configuration like this:
from my.main.app import main_app_config
# do your new application Configurator setup, etc.
# then "include" it.
config.include(main_app_config)
# continue on with your new app configuration
|
Making python imports more structured?
Question: The code works but looks messy, so this might be a code-review question; I
haven't studied enough of Python's conventions to know how to structure and
organize the beginning of my file more pythonically. I basically just pasted
in imports, so there could be duplicates, imports that are no longer needed,
or a wrong ordering.
Can you advise anything on how to structure my imports, or can I leave code
like this and focus on my own functions?
File 1:
from __future__ import with_statement
import logging
import os
from google.appengine.api.users import is_current_user_admin, UserNotFoundError
import time
import cgi
import geo.geotypes
import main
import captcha
from google.appengine import api
from google.appengine.runtime import DeadlineExceededError
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext.blobstore import BlobInfo
from google.appengine.ext.db import djangoforms
from django import forms
from django.core.exceptions import ValidationError
from django.utils import translation
from datetime import datetime, timedelta
os.environ['DJANGO_SETTINGS_MODULE'] = 'conf.settings'
from django.conf import settings
from django.template import RequestContext
from util import I18NHandler
import util
from google.appengine.api import urlfetch, taskqueue
from django.template.defaultfilters import register
from django.utils import simplejson as json
from functools import wraps
from google.appengine.api import urlfetch, taskqueue, users, images
from google.appengine.ext import db, webapp, search, blobstore
from google.appengine.ext.webapp import util, template
from google.appengine.runtime import DeadlineExceededError
from random import randrange
import Cookie
import base64
import cgi
import conf
import datetime
import hashlib
import hmac
import logging
import time
import traceback
import urllib
import twitter_oauth_handler
from twitter_oauth_handler import OAuthClient
from geo.geomodel import GeoModel
from django.utils.translation import gettext_lazy as _
webapp.template.register_template_library('common.templatefilters')
File 2 (there are several instructions here I don't understand):
from __future__ import with_statement
# -*- coding: utf-8 -*-
import facebookconf
import os, wsgiref.handlers
os.environ[u'DJANGO_SETTINGS_MODULE'] = u'conf'
import util
import time
import logging
import urllib
import wsgiref.handlers
import appengine_admin
import cgi
import captcha
import re
import hashlib
import string
import hmac
import twitter_oauth_handler
from twitter_oauth_handler import OAuthClient
os.environ['DJANGO_SETTINGS_MODULE'] = 'conf.settings'
from geo.geomodel import GeoModel
from google.appengine.dist import use_library
from google.appengine.ext import blobstore, webapp, db, search
# template import must be run before other Django modules imports
from google.appengine.ext.webapp import blobstore_handlers, util, template
from google.appengine.ext.blobstore import BlobInfo
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.api import files, images, mail, memcache, users
from django.conf import settings
# Force Django reload
settings._target = None
from util import I18NHandler, FacebookBaseHandler
from google.appengine.ext.db import djangoforms
from django.utils import translation
from django.utils import simplejson as json
from django.contrib.formtools.preview import FormPreview
from random import choice
from urllib import quote
from google.appengine.api.users import is_current_user_admin, UserNotFoundError
from google.appengine.api import urlfetch
import random
import datetime
from datetime import timedelta
from django.utils.translation import gettext_lazy as _
from django.template import defaultfilters
How do I know when an import is no longer used because the function was moved
or removed? Why can't I specify the same imports for multiple files in one
place, instead of repeating them in both files? I could imagine moving import
handling to a separate file, e.g. an `imports.yaml` that specifies imports for
all Python files in that directory.
Answer: [PEP 8 - Style Guide for Python
code](http://www.python.org/dev/peps/pep-0008/) recommends ordering your
imports as follows:
1. Standard library imports
2. - blank line -
3. google sdk imports
4. - blank line -
5. django imports
6. - blank line -
7. your own code imports
Import only things you use in code. Remove unused imports. You can use one of
these tools to detect unused imports: [Pydev on Eclipse](http://pydev.org/) /
[pyflakes](http://pypi.python.org/pypi/pyflakes) /
[pylint](http://www.logilab.org/857)
You have quite a lot of imports. How big is your actual code? It might be a
good idea to split it into a few modules.
Why can't you import a single time in a single file? Well, you actually could
do it like this:
WARNING: THIS EXAMPLE ILLUSTRATES BAD CODING PRACTICES
import_all.py:
import a
import b
import c
other.py:
from import_all import *
**But please don't do that.** It is against all good practices of python
development and against [The Zen of
Python](http://www.python.org/dev/peps/pep-0020/):
> Explicit is better than implicit.
>
> ...
>
> Namespaces are one honking great idea -- let's do more of those!
I also recommend you to read the [python documentation on
modules](http://docs.python.org/tutorial/modules.html) and something about
[python namespaces](http://www.google.com/search?q=python%20namespaces).
|
file error in python
Question: I have the following script:
import os
import stat

curDir = os.getcwd()+'/test'
for (paths, dirs, files) in os.walk(curDir):
    for f in files:
        if os.stat(f)[stat.ST_SIZE]>0:
            print f
and the folder test/:
test_folder:
--test.wav
a.exe
t1
t2
rain.wav
When I run this script with Geany it gives the following error:
>
> Traceback (most recent call last):
> File "new_folder_deleter.py", line 8, in <module>
> if os.stat(f)[stat.ST_SIZE]>0:
> OSError: [Errno2] No such file or directory: 'a.exe'
>
but when I run it with IDLE, it just prints test.wav in the subfolder
test_folder. Can anyone explain why this is and how I can fix it? P.S.: My aim
is to browse all files and delete files with specified sizes.
Answer: You need to specify a full path for `os.stat`, unless the file is in the
current working directory. The simplest way to fix this is to change the WD
before trying to access the files:
curDir = os.getcwd()+'/test'
os.chdir(curDir)
A more general solution is to pass the full path to `os.stat`:
if os.stat(os.path.join(paths, f))[stat.ST_SIZE]>0:
print f
I am not quite sure why IDLE does not produce an error here, though.
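For reference, a minimal sketch of the corrected loop with the join applied:

import os
import stat

curDir = os.path.join(os.getcwd(), 'test')
for (paths, dirs, files) in os.walk(curDir):
    for f in files:
        full_path = os.path.join(paths, f)
        if os.stat(full_path)[stat.ST_SIZE] > 0:
            print f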
|
Parsing crontab-style lines
Question: I need to parse a crontab-like schedule definition in Python (e.g. 00 3 * * *)
and get where this should have last run.
Is there a good (preferably small) library that parses these strings and
translates them to dates?
Answer: Perhaps the python package [croniter](http://pypi.python.org/pypi/croniter/)
suits your needs.
Usage example:
>>> import croniter
>>> import datetime
>>> now = datetime.datetime.now()
>>> cron = croniter.croniter('45 17 */2 * *', now)
>>> cron.get_next(datetime.datetime)
datetime.datetime(2011, 9, 14, 17, 45)
>>> cron.get_next(datetime.datetime)
datetime.datetime(2011, 9, 16, 17, 45)
>>> cron.get_next(datetime.datetime)
datetime.datetime(2011, 9, 18, 17, 45)
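The question actually asks where the schedule should have _last_ run; croniter can also step backwards (hedged: `get_prev` exists in reasonably recent croniter versions):

>>> cron = croniter.croniter('00 3 * * *', datetime.datetime.now())
>>> cron.get_prev(datetime.datetime)  # the most recent 03:00 before now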
|
Save python plistlib data
Question: How do I save the output I get from this program (as a variable) instead of it
being printed?
import plistlib, time
import plistlib as pl
p=pl.readPlist("Restore.plist")
print p["ProductType"]#I want this to be outputted as a variable, such as 'x' instead of python printing it.
print p["ProductVersion"]
print p["ProductBuildVersion"]
Answer: something like this?
outputfile = open('output.plist', 'w')
outputfile.write(p["ProductVersion"])
outputfile.close()
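Or, if "as a variable" is meant literally, no file is needed at all; just bind the lookup to a name:

x = p["ProductType"]  # now held in the variable x instead of printed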
|
How can I detect whether my python class instance is cast as an integer or float?
Question: I have a python class to calculate the number of bits when they have been
specified using "Kb", "Mb" or "Gb" notation. I assigned a `@property` to the
`bits()` method so it will always return a `float` (thus working well with
`int(BW('foo').bits)`).
However, I am having trouble figuring out what to do when a pure class
instance is cast as an `int()`, such as `int(BW('foo'))`. I have already
defined `__repr__()` to return a string, but it seems that that code is not
touched when the class instance is cast to a type.
Is there any way to detect within my class that it is being cast as another
type (and thus permit me to handle this case)?
>>> from Models.Network.Bandwidth import BW
>>> BW('98244.2Kb').bits
98244200.0
>>> int(BW('98244.2Kb').bits)
98244200
>>> BW('98244.2Kb')
98244200.0
>>> type(BW('98244.2Kb'))
<class 'Models.Network.Bandwidth.BW'>
>>>
>>> int(BW('98244.2Kb'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: int() argument must be a string or a number, not 'BW'
>>>
Answer: Read this
<http://docs.python.org/reference/datamodel.html#emulating-numeric-types>
[__int__()](http://docs.python.org/reference/datamodel.html#object.__int__)
[__float__()](http://docs.python.org/reference/datamodel.html#object.__float__)
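In short: define `__int__` and `__float__` on the class. A hedged sketch with simplified stand-in internals (the real parsing lives in Models.Network.Bandwidth):

class BW(object):
    _units = {'Kb': 1e3, 'Mb': 1e6, 'Gb': 1e9}

    def __init__(self, spec):
        # Simplified stand-in for the real parser.
        self._bits = float(spec[:-2]) * self._units[spec[-2:]]

    @property
    def bits(self):
        return self._bits

    def __int__(self):          # hook used by int(instance)
        return int(self._bits)

    def __float__(self):        # hook used by float(instance)
        return self._bits

    def __repr__(self):
        return repr(self._bits)

print int(BW('98244.2Kb'))  # 98244200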
|
Python: How do you use re to ignore links in parentheses?
Question: The relevant part of the code is:
import re
reargs = '<a\s*href=[\'|"](.*?)[\'"].*?>'
link = re.search(reargs,content,flags=re.IGNORECASE)
I'm building a crawler and the web pages I'm working with have links in
parentheses that I don't want so it would be like:
Foo foo foo foo (_link_) foo foo foo foo _link_ foo foo foo foo (foo _link_
foo) foo foo _link_ foo foo _link_......and so on
Answer: If there can be multiple sets of nested parentheses like "((foo) _link_)", I
don't think this is possible with regular expressions. In particular, note
that parentheses can be used inside URLs (such as at
[wikipedia](http://en.wikipedia.org/wiki/Python_%28programming_language%29)),
so there may still be nested parens even if the text itself doesn't contain
any. So, in the general case I don't think this can be done with regex.
In order to solve it, I will assume you can have parentheses at most 1 level
deep, and that no URLs contain parentheses.
The regex you're looking for is something like the following:
(\([^\)]*\)|[^\(<])*_link_
Where `_link_` is a regular expression matching a link (which you describe in
the problem statement, though it might need some tweaking). To summarize what
that first part of my regex is: it matches 0 or more of either a parenthetical
statement or a non-link non-parenthesis character. Now, use the matched back
references (`link.group(2)` in your example) to find your URL.
|
How can I parse an external XML file with django/python
Question: I've done some research on trying to parse an XML file from another web server
and came across something called
[minidom](http://docs.python.org/library/xml.dom.minidom.html).
I've tried implementing this in my view.py file:
from xml.dom import minidom
import models

def test(request):
    data={}
    doc=minidom.parse("http://www.someotherdomain.com/XML.aspx?id=27550&limit=100")
The problem I'm running into is I get the error `Exception Value: [Errno 2] No
such file or directory:
'http://www.someotherdomain.com/XML.aspx?id=27550&limit=100'`
I haven't been able to find out if you can use minidom on an external document
or if it's only for documents located on the same server.
Is this not possible, or is it just not the ideal solution?
Answer: Apparently minidom cannot parse URLs. You have to do
import urllib2
doc = urllib2.urlopen(your_url)
parsed = minidom.parse(doc)
|
merging arrays in python
Question: I am new to python. I have two sorted arrays (by key) that I would like to
merge. Both arrays have some common keys and some that exist uniquely in one
of the arrays. I want to do an outer join.
Array1 = {'key_1': 10, 'key_2': 14,..'key_m': 321}
Array2 = {'key_1': 15, 'key_2': 12..'key_m':2,..'key_n':34}
I want to merge using key_1..key_n..
Array3 = [['key_1',10,15],['key_2',14,12],..]
I don't have numpy package installed on my computer. Do I need it to merge
this array? what is the best way to code this up? Thanks!!!
Answer: How about this?
#!/usr/bin/env python
from itertools import chain
dict1 = {'key_1': 10, 'key_2': 14, 'key_m': 321}
dict2 = {'key_1': 15, 'key_2': 12, 'key_m':2, 'key_n':34}
dict3 = {}
# Go through all keys in both dictionaries
for key in set(chain(dict1, dict2)):
    # Find the key in either dictionary, using an empty
    # string as the default if it is not found.
    dict3[key] = [dict.get(key, "")
                  for dict in (dict1, dict2)]

print(dict3)
Now `dict3` has a list of every value from the input arrays.
Or if you want it in that `[[key, value, value], [key, value, value]...]`
format:
#!/usr/bin/env python
from itertools import chain
dict1 = {'key_1': 10, 'key_2': 14, 'key_m': 321}
dict2 = {'key_1': 15, 'key_2': 12, 'key_m':2, 'key_n':34}
result = [[key] + [dict.get(key, "")
                   for dict in (dict1, dict2)]
          for key in set(chain(dict1, dict2))]
result.sort()
print(result)
|
Dynamically loading instance method during object instantiation
Question: I want to be able to dynamically load instance methods during object
instantiation. According to my design, the default behaviour is coded in the
base class. However, if certain conditions are met during object
instantiation, I dynamically change this behaviour with another piece of code.
This is how I do it:
The default behaviour is coded in `first.py`:
class First(object):
    def __init__(self, p):
        p = str(p)
        #The decision whether or not to perform default action is done
        #in the following try/except block. In reality this block
        #is more complicated
        #and more checks are performed in order to assure proper work
        try:
            strImport = "__import__('module%s')"%p
            e = eval(strImport, {}, {})
            if not hasattr(e, p):
                raise ImportError()
        except ImportError:
            e = None #default behaviour
        if e is not None:
            self.act = getattr(e, p)(p).act #special behaviour
        self.p = p

    def act(self):
        print 'Default behaviour'

import cPickle as pickle

if __name__ == '__main__':
    print 'first'
    first = First('f')
    first.act()
    pickle.dump(first, open('first.dump', 'w'))
    print 'third'
    third = First('Third')
    third.act()
    pickle.dump(third, open('third.dump', 'w'))
In the code above, both `first` and `third` perform default action. I can
change the behaviour of `third` by adding a file `moduleThird.py` as follows:
from temp import First

class Third(First):
    def __init__(self, p):
        p = 'Third *** %p'
        print 'third init'
        super(self.__class__, self).__init__(p)

    def act(self):
        super(self.__class__, self).act()
        print 'Third acted'
After this file has been added, `third` changes its behaviour. However I'm now
unable to pickle the resulting object due to the following error:
Traceback (most recent call last):
File "C:\temp\temp.py", line 35, in <module>
pickle.dump(fff, open('fff.dump', 'w'))
File "C:\Python26\lib\copy_reg.py", line 70, in _reduce_ex
raise TypeError, "can't pickle %s objects" % base.__name__
TypeError: can't pickle instancemethod objects
It is clear that the dynamically loaded method `Third.act` is causing the
problems with pickle. How do I need to change my approach in order to get
picklable objects (and more elegant code too)?
Is there a better way to achieve my goal?
Answer: If you change your code as follows, it should work (the trick is to store the
override object on the instance and delegate to it inside `act()`, rather than
storing a bound method, which pickle cannot handle):
class First(object):
    def __init__(self, p):
        p = str(p)
        #The decision whether or not to perform default action is done
        #in the following try/except block. In reality this block
        #is more complicated
        #and more checks are performed in order to assure proper work
        try:
            strImport = "__import__('module%s')"%p
            print strImport
            e = eval(strImport, {}, {})
            if not hasattr(e, p):
                raise ImportError()
            self.override_obj = getattr(e, p)(p)
        except ImportError:
            e = None #default behaviour
            self.override_obj = None
        self.p = p

    def act(self):
        if self.override_obj:
            return self.override_obj.act()
        else:
            print 'Default behaviour'
|
alternative to oauth2 (Python module) on GAE?
Question: From my app running on GAE, I want to be able to post tweets periodically.
I have code with which I am able to post tweets from localhost.
import urllib
import urllib2
import simplejson as json
import oauth2 as oauth
consumer_key = ""
consumer_key_secret = ""
oauth_token = ""
oauth_token_secret = ""
consumer = oauth.Consumer(key=consumer_key, secret=consumer_key_secret)
access_token = oauth.Token(key=oauth_token, secret=oauth_token_secret)
client = oauth.Client(consumer, access_token)
url = "http://api.twitter.com/1/statuses/update.json"
data = {'status': 'post this'}
response, data = client.request(url,'POST',urllib.urlencode(data))
Since oauth2 library is not available on GAE, I want to know the easiest means
to be able to run the code on GAE.
Answer: `oauth2` is a pure-python module; it should run fine on App Engine - just
bundle it with your app.
|
Change the column data delimiter on mysqldump output
Question: I'm looking to change the formatting of the output produced by the mysqldump
command in the following way:
(data_val1,data_val2,data_val3,...)
to
(data_val1|data_val2|data_val3|...)
The change here being a different delimiter. This would then allow me to (in
python) parse the data lines using a line.split("|") command and end up with
the values correctly split (as opposed to doing line.split(",") and have
values that contain commas be split into multiple values).
I've tried using the --fields-terminated-by flag, but this requires the --tab
flag to be used as well. I don't want use the --tab flag as it splits the dump
into several files. Does anyone know how to alter the delimiter that mysqldump
uses?
Answer: This is not a good idea. Instead of using `string.split()` in Python, use [the
`csv` module](http://docs.python.org/library/csv.html) to properly parse CSV
data, which may be enclosed in quotes and may have internal `,` which aren't
delimiters.
import csv
MySQL dump files are intended to be used as input back into MySQL. If you
really want pipe-delimited output, use the [`SELECT INTO OUTFILE`
syntax](http://dev.mysql.com/doc/refman/5.0/en/select.html) instead with the
`FIELDS TERMINATED BY '|'` option.
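For the parsing side, a hedged sketch of what the `csv` suggestion looks like in practice (the sample line is made up to stand in for real dump output):

import csv

line = '"data_val1","data, with comma","data_val3"'
row = next(csv.reader([line]))
print row  # ['data_val1', 'data, with comma', 'data_val3']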
|
Is using Python modules main function for validation testing a bad idea?
Question: I'll quickly explain exactly what I mean by this.
I'm working on a project using python, where I have multiple modules doing
segments of work. Let's say for example I have a module called `Parser.py` and
this module has a function `parseFile()` which my main module `Main.py` calls
in order to parse some files.
As of right now, I'm using a main method inside of the Parser.py
if __name__ == "__main__":
    line_list = parseFile(sys.argv[1])
    out_file = open(sys.argv[2], "w")
    for i in range(len(line_list)):
        out_file.write(line_list[i].get_string(True))
It's not important what exactly the parsing does; the important part is that if
you call it, the first argument will be the input file for the parsing and the
second argument the output file.
So, essentially, I'm using a batch file to validate the results of my parser
via a typical input, output, baseline system...
ECHO Set the test, source, input, output and baseline directories
set TESTDIR=%CD%
set SRCDIR=%CD%\..\pypro\src
set INDIR=%CD%\input
set OUTDIR=%CD%\output
set BASEDIR=%CD%\baseline
:: Parser.py main method is base for unit testing on parsing
ECHO Begin Parser testing
cd %INDIR%\Parser
FOR %%G IN (*.psma) DO %SRCDIR%\Parser.py %%G %OUTDIR%\Parser\%%G
ECHO Parser testing complete
cd %TESTDIR%
"C:\Program Files\WinMerge\winmergeU.exe" "%OUTDIR%" "%BASEDIR%"
As you can see it diffs the results against the baseline, so if anything is
changed the programmer knows it is no longer valid, or the requirements are
wrong.
Is there anything wrong with this method? I did it because it would be easy.
My plan is to continue doing this with as many modules that I can which are
valid and make sense to do this way, as well as a suite of pyunit tests inside
pydev...
Answer: I think it's a good idea, and it does seem to be a common use case for `if
__name__ == '__main__'` construct. Though this is a more usual structure:
def main(argv=None):
    if argv is None:
        argv = sys.argv
    # etc.

if __name__ == "__main__":
    sys.exit(main() or 0)
This gives you the additional flexibility to use your `main` from within the
interactive interpreter. There are a few more nice examples from Guido and
others
[here](http://www.artima.com/forums/flat.jsp?forum=106&thread=4829&start=15&msRange=15).
|
CPU usage increasing over time
Question: I have a python program that is running for many days. Memory usage does not
increase by very much, however the program becomes slower and slower. Is there
a tool or utility that will list all function calls and how long they take to
complete? Something like [guppy/heapy](http://guppy-pe.sourceforge.net/) but
for CPU usage.
Answer: **Edit2**
I just saw your actual question is answered as in '[How can you profile a
Python script?](http://stackoverflow.com/questions/582336/how-can-you-profile-
a-python-script)'
* * *
Sure, use profile.py.
import profile

def myfunction():
    bla bla bla

profile.run('myfunction()')
see also [profilers](http://docs.python.org/library/profile.html) and [tips on
performance](http://wiki.python.org/moin/PythonSpeed/PerformanceTips).
**Edit:** Above example is for one function. You can profile and run your
script from the commandline with cProfile with:
python -m cProfile myscript.py
Your program/script could also look like the following, to profile it every
time it is run:

import profile

def myfunction():
    for i in range(100):
        print(i)

def myotherfunction():
    for i in range(200):
        print(i)

def main():
    """ main program to run over several days """
    for _ in range(3):
        myfunction()
        myotherfunction()

if __name__ == '__main__':
    profile.run('main()')  # will execute your program
                           # and show profiling results afterwards
|
Open and read sequential XML files with unknown files names in Python
Question: I wish to read incoming XML files that have no specific name (e.g. date/time
naming) to extract values and perform particular tasks. I can step through the
files but am having trouble opening them and reading them.
What I have that works is:
import os

path = 'directory/'
listing = os.listdir(path)
for infile in listing:
    print infile
But when I add the following to try and read the files it errors saying No
such file or directory.
file = open(infile,'r')
Thank you.
Answer: `os.listdir` provides the base names, not absolute paths. You'll need to do
`os.path.join(path, infile)` instead of just `infile` (that may still be a
relative path, which should be fine; if you needed an absolute path, you'd
feed it through `os.path.abspath`).
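A minimal sketch of the corrected loop (what you do with each file is up to you; the read is just illustrative):

import os

path = 'directory/'
for infile in os.listdir(path):
    full_path = os.path.join(path, infile)
    with open(full_path, 'r') as f:
        contents = f.read()  # hand `contents` to your XML parsing here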
|
Space padded binary string in Python
Question: I need to convert this PHP function into Python, but I don't even know what a
space-padded binary string is.
pack('A*', $string);
Python has struct.pack what should be probably used but I end here. Can
somebody help and explain me the behaviour?
Thanks!
* * *
UPDATE:
This is the whole code I need to implement in Python. Until now I had never
heard of pack(), so I am trying to understand what exactly it does so I can do
it in Python:
function getSIGN($PID, $ID, $DESC, $PRICE, $URL, $EMAIL, $PWD) {
    $bHash = pack('A*', $PID . $ID . $DESC . $PRICE . $URL . $EMAIL);
    $bPWD = pack('A*', $PWD);
    $SIGN = strtoupper(hash_hmac('sha256', $bHash, $bPWD, false));
    return $SIGN;
}
Answer: I think that is a noop.
$string = 'asdf';
print pack('A10', $string) . "|<-\n";
would give you
asdf |<-
Since the `*` means "take as many as possible," there is never any reason to
pad.
IMHO you can just throw away the whole line.
* * *
Re. your Update:
The `pack` function still serves no purpose, except for maybe implicitly
converting all non-string arguments to strings.
Here is how you would do it in Python. I took the liberty to change the order
of the parameters, so I can use parameter packing (which is not at all like
string packing ;).
import hmac, hashlib

def get_sign(key, *data):
    msg = ''.join(str(item) for item in data)
    h = hmac.new(key, msg, hashlib.sha256)
    return h.hexdigest().upper()
PHP:
$ print getSIGN(1234, 456, "foo", '123.45', 'http://example.com', '[email protected]', 'blah');
7FA608240FA2DC04F15DB2CDB58C83F4ED6C28C5C5B4063C5A7605F9D69F170B
Python:
In [12]: get_sign('blah', 1234, 456, "foo", '123.45',
'http://example.com', '[email protected]')
Out[12]: '7FA608240FA2DC04F15DB2CDB58C83F4ED6C28C5C5B4063C5A7605F9D69F170B'
|
Parsing a website with a javascript call using Python
Question: Since I couldn't find an API function in Wikimedia Commons to get the license
of an image, the only thing left to do is to fetch the webpage and parse it
myself.
For each image, there is a nice popup in wikimedia that lists the
"Attribution" field which I need. For example, in the page
[http://commons.wikimedia.org/wiki/File:Brad_Pitt_Cannes_2011.jpg](http://commons.wikimedia.org/wiki/File%3aBrad_Pitt_Cannes_2011.jpg)
there is a link on the right saying `"Use this file on the web"`. When
clicking on it I can see the "Attribution" field which I need.
Using Python, how can I fetch the webpage and initiate a javascript call to
open that pop up in order to retrieve the text inside the "Attribution" field?
Thanks!
meir
Answer: using unutbu's answer, I converted it to use Selenium
[WebDriver](http://code.google.com/p/selenium/wiki/PythonBindings) (rather
than the older Selenium-RC).
import codecs
import lxml.html as lh
from selenium import webdriver
browser = webdriver.Firefox()
browser.get('http://commons.wikimedia.org/wiki/File%3aBrad_Pitt_Cannes_2011.jpg')
content = browser.page_source
browser.quit()
doc = lh.fromstring(content)
for elt in doc.xpath('//span[a[contains(@title,"Use this file")]]/text()'):
    print elt
output:
on the web
on a wiki
|
Python error when retrieving a url from a database and opening it with webbrowser()
Question: I am trying to make an app similar to StumbleUpon, using Python as a back end,
for a personal project. From the database I retrieve a website name and then
I open that website with webbrowser.open("http://www.website.com"). Sounds
pretty straightforward, right? But there is a problem. When I try to open the
website with webbrowser.open("website.com") it returns the following error:
File "fetchall.py", line 18, in <module>
webbrowser.open(x)
File "/usr/lib/python2.6/webbrowser.py", line 61, in open
if browser.open(url, new, autoraise):
File "/usr/lib/python2.6/webbrowser.py", line 190, in open
for arg in self.args]
TypeError: expected a character buffer object
Here is my code:
import sqlite3
import webbrowser
conn = sqlite3.connect("websites.sqlite")
cur = conn.cursor()
cur.execute("SELECT WEBSITE FROM COLUMN")
x = cur.fetchmany(1)
webbrowser.open(x)
EDIT
Okay thanks for the reply, but now I'm receiving this: "Error showing URL:
Error stating file '/home/user/(u'http:bbc.co.uk,)': No such file or
directory".
What's going on?
Answer: `webbrowser.open` is expecting a character buffer, but `fetchmany` returns a
list of rows, and each row is itself a tuple; that tuple is the
`(u'http://bbc.co.uk',)` in the follow-up error. So `webbrowser.open(x[0][0])`
should do the trick, handing over the plain string.
|
XML parsing python
Question:
<dict>
<key>1208</key>
<dict>
<key>Track ID</key><integer>1208</integer>
<key>Name</key><string>Kings And Queens</string>
<key>Artist</key><string>30 Seconds To Mars</string>
<key>Album Artist</key><string>30 Seconds to Mars</string>
<key>Composer</key><string>Jared Leto</string>
<key>Album</key><string>This Is War</string>
<key>Genre</key><string>Pop</string>
<key>Kind</key><string>MPEG audio file</string>
<key>Size</key><integer>10634388</integer>
<key>Total Time</key><integer>347820</integer>
<key>Track Number</key><integer>1</integer>
<key>Year</key><integer>2009</integer>
<key>Date Modified</key><date>2011-09-05T21:03:08Z</date>
<key>Date Added</key><date>2011-08-18T03:57:19Z</date>
<key>Bit Rate</key><integer>244</integer>
<key>Sample Rate</key><integer>44100</integer>
<key>Comments</key><string> 00000000 00000210 000006F0 0000000000EA0000 00000000 00A22291 00000000 00000000 00000000 00000000 00000000 00000000</string>
<key>Play Count</key><integer>1</integer>
<key>Play Date</key><integer>3399116673</integer>
<key>Play Date UTC</key><date>2011-09-17T09:34:33Z</date>
<key>Persistent ID</key><string>BB3D5E86F5CAC255</string>
<key>Track Type</key><string>File</string>
<key>Location</key><string>file://localhost/D:/My%20Music/English%20songs/01-30_seconds_to_mars-kings_and_queens.mp3</string>
<key>File Folder Count</key><integer>-1</integer>
<key>Library Folder Count</key><integer>-1</integer>
</dict>
...
..
I want to use xml.dom.minidom package to parse this file.
self.dom = xml.dom.minidom.parse(self.file)
self.name = self.dom.getElementsByTagName('dict')
print self.name[10].firstChild.data
This code does not seem to work. Basically, what I want is to check the value
of the second child of a dict and then, if it is the track I want, get the
location of the track.
Is there a way to get the dict node which satisfies my conditions?
Answer: How about:
import xml.dom.minidom as minidom
dom = minidom.parse('test.xml')

data={}
for dct in dom.getElementsByTagName('dict'):
    keys=dct.getElementsByTagName('key')
    # key.nextSibling can be an integer or string or date element, or Text node
    # key.nextSibling.firstChild is a Text node or None
    vals=[key.nextSibling.firstChild for key in keys]
    # drill down to the text inside the keys and vals
    keys=[key.firstChild.data for key in keys]
    vals=[val.data if val else None for val in vals]
    data=dict(zip(keys,vals))
    if data['Track ID']=='1208':
        print(data['Location'])
        break
which yields
file://localhost/D:/My%20Music/English%20songs/01-30_seconds_to_mars-kings_and_queens.mp3
|
How to deal with Linux/Python dependencies?
Question: Due to lack of support for some libraries I want to use, I moved some Python
development from Windows to Linux. I've spent most of the day messing about,
getting nowhere with dependencies.
**The question**
Whenever I pick up Linux, I usually run into some kind of dependency issue,
usually with development libraries, whether they're installed via apt-get,
easy_install or pip. I can waste days on what should be simple tasks, spending
longer on getting libraries to work than writing code. **Where can I learn
about strategy for dealing with these kind of issues rather than aimlessly
googling for someone who's come across the same problem before?**
* * *
**An example**
Just one example: I wanted to generate some QR codes. So, I thought I'd use
[github.com/bitly/pyqrencode](https://github.com/bitly/pyqrencode) which is
based on [pyqrcode.sourceforge.net](http://pyqrcode.sourceforge.net/) but
supposedly without the Java dependencies. There are others
([pyqrnative](http://code.google.com/p/pyqrnative/),
[github.com/Arachnid/pyqrencode](https://github.com/Arachnid/pyqrencode/)) but
that one seemed like the best bet for my needs.
So, I found the package on [pypi](http://pypi.python.org/pypi/pyqrencode) and
thought using that would make life easier:
(I've perhaps made life more difficult for myself by using virtualenv to keep
things neat and tidy.)
(myenv3)mat@ubuntu:~/myenv3$ bin/pip install pyqrencode
Downloading/unpacking pyqrencode
Downloading pyqrencode-0.2.tar.gz
Running setup.py egg_info for package pyqrencode
Installing collected packages: pyqrencode
Running setup.py install for pyqrencode
building 'qrencode' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c qrencode.c -o build/temp.linux-i686-2.7/qrencode.o
gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions build/temp.linux-i686-2.7/qrencode.o -lqrencode -o build/lib.linux-i686-2.7/qrencode.so
Successfully installed pyqrencode
Cleaning up...
(I guess I probably `sudo apt-get install libqrencode-dev` at some point prior
to that too.)
So then I tried to run the test script:
(myenv3)mat@ubuntu:~/myenv3$ python test_qr.py
Traceback (most recent call last):
File "test_qr.py", line 1, in <module>
from qrencode import Encoder
File "qrencode.pyx", line 1, in init qrencode (qrencode.c:1520)
ImportError: No module named ImageOps
:(
Well, [investigations](http://www.google.com/search?q=ImageOps) revealed that
ImageOps appears to be part of PIL...
(myenv3)mat@ubuntu:~/myenv3$ pip install pil
Downloading/unpacking pil
Downloading PIL-1.1.7.tar.gz (506Kb): 122Kb downloaded
Operation cancelled by user
Storing complete log in /home/mat/.pip/pip.log
(myenv3)mat@ubuntu:~/myenv3$ bin/pip install pil
Downloading/unpacking pil
Downloading PIL-1.1.7.tar.gz (506Kb): 506Kb downloaded
Running setup.py egg_info for package pil
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
Installing collected packages: pil
Running setup.py install for pil
WARNING: '' not a valid package name; please use only.-separated package names in setup.py
building '_imaging' extension
gcc ...
building '_imagingmath' extension
gcc ...
--------------------------------------------------------------------
PIL 1.1.7 SETUP SUMMARY
--------------------------------------------------------------------
version 1.1.7
platform linux2 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2]
--------------------------------------------------------------------
*** TKINTER support not available
*** JPEG support not available
*** ZLIB (PNG/ZIP) support not available
*** FREETYPE2 support not available
*** LITTLECMS support not available
--------------------------------------------------------------------
To add a missing option, make sure you have the required
library, and set the corresponding ROOT variable in the
setup.py script.
To check the build, run the selftest.py script.
...
Successfully installed pil
Cleaning up...
Hmm, PIL's installed but hasn't picked up the libraries I installed with `sudo
apt-get install libjpeg62 libjpeg62-dev libpng12-dev zlib1g zlib1g-dev`
earlier. I'm not sure how to tell pip to feed the library locations to
setup.py. Googling suggests a variety of
[ideas](http://www.cleverkoala.com/2011/09/how-to-install-python-imaging-in-a-
virtualenv-with-no-site-packages/) which I've tried, but none of them seem to
help much other than to send me round in circles.
[Ubuntu 11.04: Installing PIL into a virtualenv with
PIP](http://stackoverflow.com/questions/6138848/ubuntu-11-04-installing-pil-
into-a-virtualenv-with-pip) suggests using the
[pillow](http://pypi.python.org/pypi/Pillow) package instead, so let's try
that:
(myenv3)mat@ubuntu:~/myenv3$ pip install pillow
Downloading/unpacking pillow
Downloading Pillow-1.7.5.zip (637Kb): 637Kb downloaded
Running setup.py egg_info for package pillow
...
Installing collected packages: pillow
Running setup.py install for pillow
building '_imaging' extension
gcc ...
--------------------------------------------------------------------
SETUP SUMMARY (Pillow 1.7.5 / PIL 1.1.7)
--------------------------------------------------------------------
version 1.7.5
platform linux2 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24)
[GCC 4.5.2]
--------------------------------------------------------------------
*** TKINTER support not available
--- JPEG support available
--- ZLIB (PNG/ZIP) support available
--- FREETYPE2 support available
*** LITTLECMS support not available
--------------------------------------------------------------------
To add a missing option, make sure you have the required
library, and set the corresponding ROOT variable in the
setup.py script.
To check the build, run the selftest.py script.
...
Successfully installed pillow
Cleaning up...
Well, we seem to have the JPEG and PNG support this time, yay!
(myenv3)mat@ubuntu:~/myenv3$ python test_qr.py
Traceback (most recent call last):
File "test_qr.py", line 1, in <module>
from qrencode import Encoder
File "qrencode.pyx", line 1, in init qrencode (qrencode.c:1520)
ImportError: No module named ImageOps
Still no ImageOps though. Now I'm stumped: is ImageOps missing from pillow, or
was it a different problem that was also there with pil?
Answer: I see two separate problems here:
1. Keeping track of all the python modules you need for your project.
2. Keeping track of all the dynamic libraries you need for the python modules in your project.
For the first problem, I have found that
[buildout](http://www.buildout.org/) is a good help, although it
takes a little while to grasp.
In your case, I would start by creating a directory for my new project. I
would then go into that directory and download _bootstrap.py_
wget http://python-distribute.org/bootstrap.py
I would then create a _buildout.cfg_ file:
[buildout]
parts = qrproject
python
eggs = pyqrencode
[qrproject]
recipe = z3c.recipe.scripts
eggs = ${buildout:eggs}
entry-points= qrproject=qrprojectmodule:run
extra-paths = ${buildout:directory}
# This is a simple way of creating an interpreter that will have
# access to all the eggs / modules that this project uses.
[python]
recipe = z3c.recipe.scripts
interpreter = python
eggs = ${buildout:eggs}
extra-paths = ${buildout:directory}
In this _buildout.cfg_ I'm referencing the module _qrprojectmodule_ (in
_entry-points_ under _[qrproject]_). This will create a bin/qrproject that runs
the function _run_ in the module _qrprojectmodule_. So I will also create the
file _qrprojectmodule.py_
import qrencode
def run():
print "Entry point for qrproject. Happily imports qrencode module"
Now it's time to run _bootstrap.py_ with the python binary you want to use:
python bootstrap.py
Then run the generated _bin/buildout_
bin/buildout
This will create two additional binaries in the _bin/_ directory -
_bin/qrproject_ and _bin/python_. The former is your project's main binary. It
will be created automatically each time you run buildout and will have all the
modules and eggs you want loaded. The second is simply a convenient way to get
a python prompt where all your modules and eggs are loaded, for easy
debugging. The fine thing here is that bin/buildout will automatically install
any python eggs that the eggs (in your case pyqrencode) have specified as
dependencies.
Actually, you will probably get a compilation error in the step where you run
_bin/buildout_. This is because you need to address problem 2: All dynamic
libraries being available on your system. On Linux, it's usually best to get
help from your packaging system. I'm going to assume you're using a Debian
derivate such as Ubuntu here.
The pyqrencode web site specifies that you need the libqrencode library for
pyqrencode to work. So I used my package manager to search for that:
$ apt-cache search libqrencode
libqrencode-dev - QR Code encoding library -- development
libqrencode3 - QR Code encoding library
qrencode - QR Code encoder into PNG image
In this case, I want the -dev package, as that installs linkable libraries and
header files required to compile python C-modules. Also, the dependency system
in the package manager will make sure that if I install _libqrencode-dev_ , I
will also get _libqrencode3_ , as that is required at runtime, i.e. after
compilation of the module.
So, I install the package:
sudo apt-get install libqrencode-dev
Once that has completed, rerun bin/buildout and the pyqrencode module will
(hopefully) compile and install successfully. Now try to run _bin/qrproject_
$ bin/qrproject
Entry point for qrproject. Happily imports qrencode module
Success! :-)
So, in summary:
1. Use buildout to automatically download and install all the python modules/eggs you need for your project.
2. Use your system's package manager to install any dynamic (C) libraries required by the python modules you use.
Be aware that in many cases there are already packaged versions of your python
modules available in the package system. For example, pil is available by
installing the _python-imaging_ package on Ubuntu. In this case, you don't
need to install it via buildout, and you don't need to worry about libraries
being available - the package manager will install all dependencies required
for the module to run. Doing it via buildout can however make it easier to
distribute your project and make it run on other systems.
|
Scaling Problem with Pygame
Question: When I attempt to scale an object, only the top and left of the image get
bigger. The rest stays the same. I want an even scale.
import pygame._view
import pygame, sys
from pygame.locals import *
import random
pygame.init()
barrel = pygame.image.load("images\Barrel.gif")
barrelx = 0
barrely = 0
while running:
barrel = pygame.transform.scale(barrel, (int(barrely/4), int(barrely/4)))
screen.blit(barrel, (barrelx, barrely))
barrely is always getting bigger (as a number) until it gets off-screen. I'm
using Python 2.7 on Windows XP.
Answer: Shouldn't that be:
import pygame._view
import pygame, sys
from pygame.locals import *
import random
pygame.init()
barrel = pygame.image.load("images\Barrel.gif")
barrelx = 0
barrely = 0
while running:
barrel = pygame.transform.scale(barrel, (int(barrelx/4), int(barrely/4)))
screen.blit(barrel, (barrelx, barrely))
In your code you pass barrely for both the width _and_ the height in
pygame.transform.scale().
Also I can't see where you update barrelx and barrely. In the code you pasted
barrelx and barrely will always be 0.
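One more caveat, hedged because we can't see the full program: reassigning
`barrel` to its own scaled copy every frame compounds the scale factor and
degrades the image. Keeping the untouched original and scaling from it each
frame avoids that; a minimal sketch ('running' and 'screen' are assumed to be
set up elsewhere):

original = pygame.image.load("images/Barrel.gif")  # forward slash avoids escape issues

while running:
    width, height = int(barrelx / 4), int(barrely / 4)
    if width > 0 and height > 0:
        barrel = pygame.transform.scale(original, (width, height))
        screen.blit(barrel, (barrelx, barrely))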
|
NameError: name 're' is not defined
Question: I am very new to python. Very new. I copied the following from a tutorial
#!/usr/bin/python
import re
from urllib import urlopen
from BeautifulSoup import BeautifulSoup
webpage = urlopen('http://feeds.huffingtonpost.com/huffingtonpost/LatestNews').read
patFinderTitle = re.compile('<title>(.*)</title>')
patFinderLink = re.compile('<link rel.*href="(.*)"/>')
findPatTitle = re.findall(patFinderTitle,webpage)
findPatLink = re.findall(patFinderLink,webpage)
listIterator = []
listIterator[:] = range(2,16)
for i in listIterator:
print findPatTitle[i]
print findPatLink[i]
print "\n"
I get the error:
Traceback (most recent call last):
File "test.py", line 8, in <module>
patFinderTitle = re.compile('<title>(.*)</title>')
NameError: name 're' is not defined
What am I doing wrong?
Edit: I added `import re` but now get the following:
File "/scripts/_prod/test.py", line 13, in <module>
findPatTitle = re.findall(patFinderTitle,webpage)
File "/usr/lib64/python2.6/re.py", line 177, in findall
return _compile(pattern, flags).findall(string)
TypeError: expected string or buffer
Answer: You need to import the [regular expression
module](http://docs.python.org/library/re.html) in your code:
import re
re.compile('<title>(.*)</title>')
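As for the `TypeError` in your edit: `urlopen(...).read` is missing its
parentheses, so `webpage` is the bound `read` method rather than the page
contents. Call it:

webpage = urlopen('http://feeds.huffingtonpost.com/huffingtonpost/LatestNews').read()

`re.findall` then gets the string it expects.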
|
Converting timezone-aware date string to UTC and back in Python
Question: I'm parsing the National Weather Service alerts feed into a web application.
I'd like to purge the alerts when they hit their expiration time. I'd also
like to display the expiration time in the local time format for the
geographic area they pertain to.
The alerts cover the whole US, so I think the best approach is to store and
compare the times in UTC timestamps. The expiration time arrives in the feed
as a string like this: `2011-09-09T22:12:00-04:00`.
I'm using the _Labix dateutils_ package to parse the string in a timezone-
aware manner:
>>> from dateutil.parser import parse
>>> d = parse("2011-09-18T15:52:00-04:00")
>>> d
datetime.datetime(2011, 9, 18, 15, 52, tzinfo=tzoffset(None, -14400))
I'm also able to capture the UTC offset in hours:
>>> offset_hours = (d.utcoffset().days * 86400 + d.utcoffset().seconds) / 3600
>>> offset_hours
-4
Using the `datetime.utctimetuple()` and `time.mktime()` methods, I'm able to
convert the parsed date to a UTC timestamp:
>>> import time
>>> expiration_utc_ts = time.mktime(d.utctimetuple())
>>> expiration_utc_ts
1316393520.0
At this point, I feel pretty good that I'm able to convert the raw strings
into a timestamp representing the expiration time in UTC. I'm able to compare
the current time as a UTC timestamp to the expiration and determine if it
needs to be purged:
>>> now_utc_ts = time.mktime(time.gmtime())
>>> now_utc_ts
1316398744.0
>>> now_utc_ts >= expiration_utc_ts
True
The difficulty I'm having is trying to convert my stored UTC timestamp back to
the original localized format. I have the offset hours stored from the
original conversion and a string I parsed to store the timezone label:
>>> print offset_hours
-4
>>> print timezone
EDT
**I'd like to convert the UTC timestamp back to a locally formatted time, but
converting it back to a `datetime` doesn't seem to be working:**
>>> import datetime
>>> datetime.datetime.fromtimestamp(expiration_utc_ts) + datetime.timedelta(hours=offset_hours)
datetime.datetime(2011, 9, 18, 16, 52) # The hour is 16 but it should be 15
It looks like it's off by an hour. I'm not sure where the error was
introduced? I put together another test and got similar results:
>>> # Running this at 21:29pm EDT
>>> utc_now = datetime.datetime.utcnow()
>>> utc_now_ts = time.mktime(utc_now.utctimetuple())
>>> datetime.datetime.fromtimestamp(utc_now_ts)
datetime.datetime(2011, 9, 18, 22, 29, 47) # Off by 1 hour
Can someone help me find my mistake? I'm not sure if it's a daylight savings
issue? I came across some stuff that leads me to believe it might be trying to
localize my dates and times but at this point I'm pretty stumped. I was hoping
to do all of these calculations/comparisons in a timezone-agnostic manner.
Answer: The problem is that Daylight Savings time is being applied twice.
A trivial example:
>>> time_tuple = datetime(2011,3,13,2,1,1).utctimetuple()
time.struct_time(tm_year=2011, tm_mon=3, tm_mday=13, tm_hour=2, tm_min=1, tm_sec=1, tm_wday=6, tm_yday=72, tm_isdst=0)
>>> datetime.fromtimestamp(time.mktime(time_tuple))
datetime.datetime(2011, 3, 13, 3, 1, 1)
I am fairly certain that the fault lies within `time.mktime()`. As it says in
its [documentation](http://docs.python.org/library/time.html#time.mktime):
> This is the inverse function of localtime(). Its argument is the struct_time
> or full 9-tuple (since the dst flag is needed; use -1 as the dst flag if it
> is unknown) which expresses the time in local time, not UTC. It returns a
> floating point number, for compatibility with time(). If the input value
> cannot be represented as a valid time, either OverflowError or ValueError
> will be raised (which depends on whether the invalid value is caught by
> Python or the underlying C libraries). The earliest date for which it can
> generate a time is platform-dependent.
When you pass a time tuple to `time.mktime()`, it expects a flag on whether
the time is _in daylight savings time_ or not. As you can see above,
`utctimetuple()` returns a tuple with that flag marked `0`, as it says it will
do in its [documentation](http://docs.python.org/library/datetime.html):
> If datetime instance d is naive, this is the same as d.timetuple() except
> that tm_isdst is forced to 0 regardless of what d.dst() returns. DST is
> never in effect for a UTC time.
>
> If d is aware, d is normalized to UTC time, by subtracting d.utcoffset(),
> and a time.struct_time for the normalized time is returned. tm_isdst is
> forced to 0. Note that the result’s tm_year member may be MINYEAR-1 or
> MAXYEAR+1, if d.year was MINYEAR or MAXYEAR and UTC adjustment spills over a
> year boundary.
Since you have told `time.mktime()` that your time is not DST, and its job is
to convert all times into local time, and it is currently daylight savings
time in your area, it **adds an hour** to make it daylight savings time. Hence
the result.
* * *
While I don't have the post handy, I came across a method a couple of days ago
to convert timezone-aware datetimes into naive ones in your local time. This
might work much better for your application than what you are currently doing
(uses the excellent [pytz](http://pytz.sourceforge.net/) module):
import pytz
def convert_to_local_time(dt_aware):
tz = pytz.timezone('America/Los_Angeles') # Replace this with your time zone string
dt_my_tz = dt_aware.astimezone(tz)
dt_naive = dt_my_tz.replace(tzinfo=None)
return dt_naive
Replace 'America/Los_Angeles' with your own timezone string, which you can
find in `pytz.all_timezones`.
|
Python: Question about multiprocessing / multithreading and shared resources
Question: Here's the simplest multi threading example I found so far:
import multiprocessing
import subprocess
def calculate(value):
return value * 10
if __name__ == '__main__':
pool = multiprocessing.Pool(None)
tasks = range(10000)
results = []
r = pool.map_async(calculate, tasks, callback=results.append)
r.wait() # Wait on the results
print results
I have two lists and one index to access the elements in each list. The ith
position on the first list is related to the ith position on the second. I
didn't use a dict because the lists are ordered.
What I was doing was something like:
for i in xrange(len(first_list)):
# do something with first_list[i] and second_list[i]
So, using that example, I think I can make a function sort of like this:
#global variables first_list, second_list, i
first_list, second_list, i = None, None, 0
#initialize the lists
...
#have a function to do what the loop did and inside it increment i
def function:
#do stuff
i += 1
But, that makes `i` a shared resource and I'm not sure if that'd be safe. It
also seems to me my design is not lending itself well to this multithreaded
approach, but I'm not sure how to fix it.
Here's a working example of what I wanted (Edit an image you want to use):
import multiprocessing
import subprocess, shlex
links = ['http://www.example.com/image.jpg']*10 # don't use this URL
names = [str(i) + '.jpg' for i in range(10)]
def download(i):
command = 'wget -O ' + names[i] + ' ' + links[i]
print command
args = shlex.split(command)
return subprocess.call(args, shell=False)
if __name__ == '__main__':
pool = multiprocessing.Pool(None)
tasks = range(10)
r = pool.map_async(download, tasks)
r.wait() # Wait on the results
Answer: First off, it might be beneficial to make one list of tuples, for example
new_list = zip(first_list, second_list)  # new_list[i] == (first_list[i], second_list[i])
That way, as you change `i`, you ensure that you are always operating on the
same items from `first_list` and `second_list`.
Secondly, assuming there are no relations between the `i` and `i-1` entries in
your lists, you can use your function to operate on one given i value, and
spawn a thread to handle each i value. Consider
import multiprocessing

indices = range(len(new_list))
results = []
pool = multiprocessing.Pool(None)  # as in the question's example
r = pool.map_async(your_function, indices, callback=results.append)
r.wait() # Wait on the results
This should give you what you want.
|
python libraries in a computer cluster
Question: I'm having a problem with python finding the installed libraries when I run it
in a computer cluster.
When I try, e.g., to load numpy in the script:
#file: /home/foo/test.py
import numpy
print numpy.__version__
on the server, I get this:
foo@abax:~$ python test.py
1.4.1
but when I try to run the same in a node with remote shell, I get an error:
foo@abax:~$ rsh -l foo ab01 "python test.py"
Traceback (most recent call last):
File "test.py", line 2, in <module>
import numpy
ImportError: No module named numpy
is there a way to tell python to load the files that are installed on the
central node of the cluster?
Answer: First things to check:
* Print `PYTHONPATH` on both the frontal server and the cluster nodes, to make sure there is no inconsistency
* Print `numpy.__file__` on the frontal server, to check where it finds numpy. Then explore the filesystem of the cluster nodes a bit to see if numpy can be found in the same place (if not, run a search to see if you can find it, then update your `PYTHONPATH` accordingly); a quick check is sketched below.
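Something like this, run identically on the frontal server and on a node (a
minimal sketch):

import numpy
print numpy.__file__     # where the module was found
print numpy.__version__

If a node raises ImportError while the frontal server prints a path under a
local, non-shared directory, that explains the difference.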
It may just be that numpy is installed locally on the frontal server, but not
on cluster nodes. In that case you will need to install numpy by yourself on a
filesystem that cluster nodes can access (note that on a scientific cluster,
it would be better to ask the cluster admins to install numpy on cluster
nodes, to make it available to everyone).
If the platforms are similar enough, copying the numpy folder from the frontal
server to somewhere in the shared filesystem (e.g. a subfolder of your home
dir, that you would add to your `PYTHONPATH`) might work, but a clean install
is to be preferred.
|
Python script to find word frequencies of a given document
Question: I am looking for a simple script that can find frequencies of words for a
given document (probably by using portable stemmer).
Is there any library or simple script that does this process?
Answer: use [nltk](http://www.nltk.org/)
import nltk
YOUR_STRING = "Your words"
words = [w for w in YOUR_STRING.split()]
freq_dist = nltk.FreqDist(words)
tokens = freq_dist.keys()
#50 most frequent
most_frequent = tokens[:50]
#50 least frequent
least_frequent = tokens[-50:]
|
Sending messages or datas with bluetooth via python
Question: How can I send messages over bluetooth via python without key authentication
(like typing in PIN numbers)?
I used pybluez but I got this error:
File "./send", line 12, in <module>
connect()
File "./send", line 8, in connect
sock.connect((bd_addr, port))
File "<string>", line 5, in connect
bluetooth.btcommon.BluetoothError: (111, 'Connection refused')
Here is the code
#!/usr/bin/python
import bluetooth
def connect ():
bd_addr = "x:x:x:x:x:x"
port = 1
sock=bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect((bd_addr, port))
sock.send("hello!!")
sock.close()
connect()
Answer: As @TJD said, you need to ensure you bind with the correct port for the
service you want.
>>> from bluetooth import *
>>> from pprint import pprint
>>>
>>> devices = discover_devices()
>>> devices
['xx:yy:tt:zz:44:BD', '00:yy:72:zz:bb:aa']
Then as the second step try to find the service on the device you want to
connect to.
>>> service = find_service(address='00:yy:72:zz:bb:aa')
>>> pprint(service)
[{'description': None,
'host': '00:yy:72:zz:bb:aa',
'name': 'Headset Audio Gateway',
'port': 12,
'profiles': [('1108', 258)],
...},
{'description': None,
'host': '00:yy:72:zz:bb:aa',
'name': 'Dial-Up Networking',
'port': 1,
'profiles': [('1103', 256)],
'protocol': 'RFCOMM',
...}]
Based on this information you can connect to a service running on a device.
According to the service/profile [specification](https://www.bluetooth.org/en-
us/specification/assigned-numbers-overview/service-discovery) you send service
specific commands and get back information from the device. E.g. in the list
above you see the 'Headset Audio Gateway' and the profile list with the number
'1108', which is the short uuid for the service. You can now lookup the
commands for this profile and it should work.
|
Python pty module usage example
Question: What I need to do is the following: in a Python script spawn, say the "ls
--colors=always /" Linux command, and read its output. The important part of
this is that I need the output to keep all the ANSI escape sequences (colors
and such) to later translate these sequences into HTML. I heard that Python
pty module can do that, but I could not find a useful example of its usage in
the Internet, and this module's documentation is not quite comprehensive. I'd
appreciate it if someone could guide me through accomplishing this task.
Answer:
import subprocess as sub
process = sub.Popen(['ls', '--color=always', '/'], stdout=sub.PIPE, stderr=sub.PIPE)
output, errors = process.communicate()
Now all the data you want should be in `output` - including the ANSI escape
sequences: `--color=always` forces ls to emit them even when its output is a
pipe rather than a terminal (note the argument-list form of Popen; a plain
string here would need shell=True).
|
Executing python code
Question: I am starting fresh with python and trying to execute code from the python
command window. I wrote a file at Desktop\practice\new.py and launched the
python command window.
when I type
C:\users\user\Desktop\practice\new.py
it gives me
SyntaxError: invalid syntax
Executing from CMD worked, but from the python window it didn't!
Any help?
**EDIT2:** when I put the compiled code in the same directory and use 'import',
it runs, but when the compiled file is not in the same directory it won't execute.
**EDIT:** the file contains a simple print statement and is syntax-error free.
Answer: Everything is explained in here: <http://docs.python.org/faq/windows.html#how-
do-i-run-a-python-program-under-windows>
The main point is that when you launch the python shell, it's like live
programming. Try typing in it:
>>> print 'hello world'
If you want to launch your file - run in cmd: `python
C:/users/user/Desktop/practice/new.py`
**UPDATE:** If you do want to run file from within python shell - it was
answered here: [How to execute a file within the python
interpreter?](http://stackoverflow.com/questions/1027714/how-to-execute-a-
file-within-the-python-interpreter)
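On Python 2 that boils down to (path taken from the question, with forward
slashes so no escaping is needed):

>>> execfile('C:/users/user/Desktop/practice/new.py')

Typing the bare Windows path at the `>>>` prompt is a syntax error because it
is not a Python expression, which is exactly the error you saw.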
|
Embarassingly Parallel DB Update Using Python (PostGIS/PostgreSQL)
Question: I need to update every record in a spatial database in which I have a data set
of points that overlay data set of polygons. For each point feature I want to
assign a key to relate it to the polygon feature that it lies within. So if my
point 'New York City' lies within polygon USA and for the USA polygon 'GID =
1' I will assign 'gid_fkey = 1' for my point New York City.
To do this I have created the following query.
procQuery = 'UPDATE city SET gid_fkey = gid FROM country WHERE ST_within((SELECT the_geom FROM city WHERE wp_id = %s), country.the_geom) AND city_id = %s' % (cityID, cityID)
At present I am getting the cityID info from another query that just selects
all cityID where gid_fkey is NULL. Essentially I just need to loop through
these and run the query shown earlier. As the query only relies on static
information in the other table, in theory all of these processes can be run at
once. I have implemented the threading procedure below but I can't seem to
make the migration to multiprocessing
import psycopg2, pprint, threading, time, Queue
queue = Queue.Queue()
pyConn = psycopg2.connect("dbname='geobase_1' host='localhost'")
pyConn.set_isolation_level(0)
pyCursor1 = pyConn.cursor()
getGID = 'SELECT cityID FROM city'
pyCursor1.execute(getGID)
gidList = pyCursor1.fetchall()
class threadClass(threading.Thread):
def __init__(self, queue):
threading.Thread.__init__(self)
self.queue = queue
def run(self):
while True:
gid = self.queue.get()
procQuery = 'UPDATE city SET gid_fkey = gid FROM country WHERE ST_within((SELECT the_geom FROM city WHERE wp_id = %s), country.the_geom) AND city_id = %s' % (cityID, cityID)
pyCursor2 = pyConn.cursor()
pyCursor2.execute(procQuery)
print gid[0]
print 'Done'
def main():
for i in range(4):
t = threadClass(queue)
t.setDaemon(True)
t.start()
for gid in gidList:
queue.put(gid)
queue.join()
main()
I'm not even sure if the multithreading is optimal but it is definitely faster
than going through one by one.
The machine I will be using has four cores (Quad Core) and a minimal Linux OS
with no GUI, PostgreSQL, PostGIS and Python if that makes a difference.
What do I need to change to get this painfully easy multiprocessing task
enabled?
Answer: Okay this is an answer to my own post. Well done me =D
Produces about a 150% increase in speed on my system going from a single core
thread to quad core multiprocessing.
import multiprocessing, time, psycopg2
class Consumer(multiprocessing.Process):
def __init__(self, task_queue, result_queue):
multiprocessing.Process.__init__(self)
self.task_queue = task_queue
self.result_queue = result_queue
def run(self):
proc_name = self.name
while True:
next_task = self.task_queue.get()
if next_task is None:
print 'Tasks Complete'
self.task_queue.task_done()
break
answer = next_task()
self.task_queue.task_done()
self.result_queue.put(answer)
return
class Task(object):
def __init__(self, a):
self.a = a
def __call__(self):
pyConn = psycopg2.connect("dbname='geobase_1' host = 'localhost'")
pyConn.set_isolation_level(0)
pyCursor1 = pyConn.cursor()
procQuery = 'UPDATE city SET gid_fkey = gid FROM country WHERE ST_within((SELECT the_geom FROM city WHERE city_id = %s), country.the_geom) AND city_id = %s' % (self.a, self.a)
pyCursor1.execute(procQuery)
print 'What is self?'
print self.a
return self.a
def __str__(self):
return 'ARC'
def run(self):
print 'IN'
if __name__ == '__main__':
tasks = multiprocessing.JoinableQueue()
results = multiprocessing.Queue()
num_consumers = multiprocessing.cpu_count() * 2
consumers = [Consumer(tasks, results) for i in xrange(num_consumers)]
for w in consumers:
w.start()
pyConnX = psycopg2.connect("dbname='geobase_1' host = 'localhost'")
pyConnX.set_isolation_level(0)
pyCursorX = pyConnX.cursor()
pyCursorX.execute('SELECT count(*) FROM cities WHERE gid_fkey IS NULL')
temp = pyCursorX.fetchall()
num_job = temp[0]
num_jobs = num_job[0]
pyCursorX.execute('SELECT city_id FROM city WHERE gid_fkey IS NULL')
cityIdListTuple = pyCursorX.fetchall()
cityIdList = []
for x in cityIdListTuple:
cityIdList.append(x[0])
for i in xrange(num_jobs):
tasks.put(Task(cityIdList[i - 1]))
for i in xrange(num_consumers):
tasks.put(None)
while num_jobs:
result = results.get()
print result
num_jobs -= 1
Now I have another question which I have posted here:
[Create DB connection and maintain on multiple processes
(multiprocessing)](http://stackoverflow.com/questions/7555680/create-db-
connection-and-maintain-on-multiple-processes-multiprocessing)
Hopefully we can get rid of some overhead and speed this baby up even more.
|
Making multidimensional lists in python
Question: This question may have been asked earlier (at least topic-wise), but I still
couldn't find a solution for my specific problem. Basically, I need a
multidimensional array in python, such that I will be able to access its
contents by:
contents[no_of_record][record]
So I have a file like :
101, Mrs. Jones' Math Class, 10100000001, Jones, Barbara, , , , 10100000011, Gutierrez, Kimberly, 2
101, Mrs. Jones' Math Class, 10100000001, Jones, Barbara, , , , 10100000013, Mercado, Toby, 1
101, Mrs. Jones' Math Class, 10100000001, Jones, Barbara, , , , 10100000014, Garcia, Lizzie, 1
101, Mrs. Jones' Math Class, 10100000001, Jones, Barbara, , , , 10100000015, Cruz, Alex, 1
Now I have to maintain a multi dimensional array where no_of_record points to
row number and the record will be the column number.
I want something similar to this:
contents=[][]
for lines in source_file:
contents[no_of_records][]=lines.rstrip().split(',')
print contents[no_of_records]
no_of_records+=1
I am sure the snippet above is syntactically wrong; I am just trying to give
an idea of what I'm searching for. Thanks for your help.
-Sethu
Answer: Following the csv examples in the docs, you should do something along these lines:
import csv
rows = []
with open('some.csv', 'rb') as f:
reader = csv.reader(f)
for row in reader:
print row
rows.append(row)
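`rows` is then indexable exactly the way the question asks for:

print rows[1][2]   # second record, third column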
|
Can't activate django admin screen
Question: Error page shows the following:

Traceback (most recent call last):
  File "/Library/Python/2.6/site-packages/django/core/servers/basehttp.py", line 283, in run
    self.result = application(self.environ, self.start_response)
  File "/Library/Python/2.6/site-packages/django/core/handlers/wsgi.py", line 273, in __call__
    response = self.get_response(request)
  File "/Library/Python/2.6/site-packages/django/core/handlers/base.py", line 153, in get_response
    response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
  File "/Library/Python/2.6/site-packages/django/core/handlers/base.py", line 218, in handle_uncaught_exception
    return callback(request, **param_dict)
  File "/Library/Python/2.6/site-packages/django/utils/decorators.py", line 93, in _wrapped_view
    response = view_func(request, *args, **kwargs)
  File "/Library/Python/2.6/site-packages/django/views/defaults.py", line 30, in server_error
    t = loader.get_template(template_name) # You need to create a 500.html template.
  File "/Library/Python/2.6/site-packages/django/template/loader.py", line 157, in get_template
    template, origin = find_template(template_name)
  File "/Library/Python/2.6/site-packages/django/template/loader.py", line 138, in find_template
    raise TemplateDoesNotExist(name)
TemplateDoesNotExist: 500.html
Answer: Check your TEMPLATE_LOADERS in settings.py. It should look like this to
automatically find the default admin templates. The app_directories.Loader is
important here.
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
# 'django.template.loaders.eggs.Loader',
)
|
Python DST & Time Zone Detection After Addition
Question: So I currently have a line of code which looks like this:
t1 = datetime(self.year, self.month, self.day, self.hour, self.minute, self.second)
...
t2 = timedelta(days=dayNum, hours=time2.hour, minutes=time2.minute, seconds=time2.second)
sumVal = t1 + t2
I would like the result to take into account any DST effects that might
occur (such as if I am at 11/4/2012 00:30 AM and add 3 hours, I would get
02:30 AM, due to a fall back for DST). I've looked at using pytz and python-
dateutil, and neither of them seem to support this, or at least not support it
without a separate file which contains all of the time zones. The kicker is
that the times may not necessarily be in the same time zone as the current
system, or even be in the past. I'm sure there is a simple way to do this (or
I would expect so out of Python), but nothing seems to be what I need right
now. Any ideas?
Answer: Perhaps `pytz`'s `normalize` method is what you are looking for:
import datetime as dt
import pytz
tz=pytz.timezone('Europe/London')
t1 = dt.datetime(2012,10,28,0,30,0)
t1=tz.localize(t1)
t2 = dt.timedelta(hours=3)
sumVal = t1 + t2
`sumVal` remains in BST:
print(repr(sumVal))
# datetime.datetime(2012, 10, 28, 3, 30, tzinfo=<DstTzInfo 'Europe/London' BST+1:00:00 DST>)
After normalization, `sumVal` is in GMT:
sumVal = tz.normalize(sumVal)
print(repr(sumVal))
# datetime.datetime(2012, 10, 28, 2, 30, tzinfo=<DstTzInfo 'Europe/London' GMT0:00:00 STD>)
Note, for London, the DST transition occurs at `2012-10-28 02:00:00`.
|
'str' object has no attribute '__dict__'
Question: I want to serialize a dictionary to JSON in Python. I get this `'str' object
has no attribute '__dict__'` error. Here is my code...
from django.utils import simplejson
class Person(object):
a = ""
person1 = Person()
person1.a = "111"
person2 = Person()
person2.a = "222"
list = {}
list["first"] = person1
list["second"] = person2
s = simplejson.dumps([p.__dict__ for p in list])
And the exception is;
Traceback (most recent call last):
File "/base/data/home/apps/py-ide-online/2.352580383594527534/shell.py", line 380, in post
exec(compiled_code, globals())
File "<string>", line 17, in <module>
AttributeError: 'str' object has no attribute '__dict__'
Answer: How about

    s = simplejson.dumps([p.__dict__ for p in list.itervalues()])

Iterating over a dict directly yields its keys (here the strings 'first' and
'second', which have no `__dict__`), so iterate over the values instead.
|
MongoDB not that faster than MySQL?
Question: I discovered mongodb some months ago, and after reading this
[post](http://www.vedana.it/it/component/content/article/9-linux/62-testing-
mongodb-vs-mysql-with-python-scripting-under-linux), I thought mongodb was
really faster than mysql, so I decided to build my own bench, the problem is
that I do not get the same results as the above post's author, especially
for querying the database: mongodb seems to be slower than MyISAM tables.
Could you have a look at my python code? Maybe there is something wrong in
it:
from datetime import datetime
import random
import MySQLdb
import pymongo
mysql_db=MySQLdb.connect(user="me",passwd="mypasswd",db="test_kv")
c=mysql_db.cursor()
connection = pymongo.Connection()
mongo_db = connection.test
kvtab = mongo_db.kvtab
nb=1000000
thelist=[]
for i in xrange(nb):
thelist.append((str(random.random()),str(random.random())))
t1=datetime.now()
for k,v in thelist:
c.execute("INSERT INTO key_val_tab (k,v) VALUES ('" + k + "','" + v + "')")
dt=datetime.now() - t1
print 'MySQL insert elapse :',dt
t1=datetime.now()
for i in xrange(nb):
c.execute("select * FROM key_val_tab WHERE k='" + random.choice(thelist)[0] + "'")
result=c.fetchone()
dt=datetime.now() - t1
print 'MySQL select elapse :',dt
t1=datetime.now()
for k,v in thelist:
kvtab.insert({"key":k,"value":v})
dt=datetime.now() - t1
print 'Mongodb insert elapse :',dt
kvtab.ensure_index('key')
t1=datetime.now()
for i in xrange(nb):
result=kvtab.find_one({"key":random.choice(thelist)[0]})
dt=datetime.now() - t1
print 'Mongodb select elapse :',dt
Notes:
* both MySQL and mongodb are on localhost.
* both MySQL and mongodb have the 'key' column indexed
MySQL Table:
CREATE TABLE IF NOT EXISTS `key_val_tab` (
`k` varchar(24) NOT NULL,
`v` varchar(24) NOT NULL,
KEY `kindex` (`k`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
Versions are:
* MySQL: 5.1.41
* mongodb : 1.8.3
* python : 2.6.5
* pymongo : 2.0.1
* Linux : Ubuntu 2.6.32 32Bits with PAE
* Hardware : Desktop core i7 2.93 Ghz
Results (for 1 million inserts/selects) :
MySQL insert elapse : 0:02:52.143803
MySQL select elapse : 0:04:43.675914
Mongodb insert elapse : 0:00:49.038416 -> mongodb much faster for insert
Mongodb select elapse : 0:05:10.409025 -> ...but slower for querying (I thought it was the opposite)
Answer: Sigh. These kind of benchmarks, and I use the term loosely in this case,
usually break down from the very start. MySQL isn't a "slower" database than
MongoDB. One is a relational database, the other a NoSQL document store. They
will/should be faster in the functional areas that they were designed to
cover. In the case of MySQL (or any RDBMS) and MongoDB this overlap isn't as
big as a lot of people assume it is. It's the same kind of broken apples and
oranges comparison you get with Redis vs. MongoDB discussions.
There are so many variables (app functional requirements, hardware resources,
concurrency, configuration, scalability, etc.) to consider that any benchmark
or article that ends with "MongoDB is faster than MySQL" or vice versa is
generalizing results to the point of uselessness.
If you want to do benchmark first define a strict set of functional
requirements and business rules and then implement them as efficiently as
possible on both persistence solutions. The result will be that one is faster
than the other and in almost all cases the faster approach has some relevant
downsides that might still make the slower solution more viable depending on
requirements.
All this is ignoring that the benchmark above doesn't simulate any sort of
real world scenario. There wont be a lot of apps doing max throughput inserts
without any sort of threading/concurrency (which impacts performance on most
storage solutions significantly).
Finally, comparing inserts like this is a little broken too. MongoDB can
achieve amazing insert throughput with fire and forget bulk inserts or can be
orders of magnitude slower with fsynced, replicated writes. The thing here is
that MongoDB offers you a choice where MySQL doesn't (or less so). So here the
comparison only make sense of the business requirements allow for fire and
forget type writes (Which boil down to, "I hope it works, but no biggy if it
didn't")
TL;DR stop doing simple throughput benchmarks. They're almost always useless.
|
Bitnami Djangostack + Eclipse IDE?
Question: I'm trying to setup the Eclipse (with pyDev) to work with Bitnami Djangostack
in Mac OS X. I have installed the Djangostack and it works all right.
Problem is that I can't get the Eclipse to understand Djangostack. I've added
the Djangostack python interpreter to the PyDev-setup. And also I added the
apps/django folder to the Libraries; the apps/django folder exists in the
djangostack folder. Still, when I try to create a PyDev Django project,
Eclipse cannot find Django (`import django` does not work). Any ideas what
other things I'd have to set up before Eclipse can find the Djangostack
installation?
Answer: It seems it cannot find the django package.
Make sure you're adding it to the PYTHONPATH.
i.e.: if it's installed at:
/foo/bar/django
/foo/bar/django/__init__.py
make sure that /foo/bar/ is in your interpreter PYTHONPATH (and make sure that
/foo/bar/django is NOT in the PYTHONPATH).
|
Handling ascii char in python string
Question: I have a file named `"SSE-Künden, SSE-Händler.pdf"`, which contains the two
unicode chars (ü, ä). When I print this file name on the python interpreter,
the unicode characters show up as byte escapes instead:
`'SSE-K\x81nden, SSE-H\x84ndler.pdf'`. My test dir contains the pdf file named
'SSE-Künden, SSE-Händler.pdf'. I tried this:

path = 'C:\test'
for a, b, c in os.walk(path):
    print c

which prints:

['SSE-K\x81nden, SSE-H\x84ndler.pdf']

How do I convert these chars back, so that the original name ("SSE-Künden,
SSE-Händler.pdf") is shown on the interpreter and can also be written to a
file as it is? How do I achieve this? I am using Python 2.6 and Windows.
Thanks.
Answer: Assuming your terminal supports displaying the characters, iterate over the
list of files and print them individually (or use Python 3, which displays
Unicode in lists):
Python 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> for p,d,f in os.walk(u'.'):
... for n in f:
... print n
...
SSE-Künden, SSE-Händler.pdf
Also note I used a Unicode string (u'.') for the path. This instructs
`os.walk` to return Unicode strings as opposed to byte strings. When dealing
with non-ASCII filenames this is a good idea.
In Python 3 strings are Unicode by default and non-ASCII characters are
displayed to the user instead of displayed as escape codes:
Python 3.2.1 (default, Jul 10 2011, 21:51:15) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> for p,d,f in os.walk('.'):
... print(f)
...
['SSE-Künden, SSE-Händler.pdf']
|
How can I get Django application name for a file inside that application?
Question: Given the full path of a file that is part of a Django application, I would
like to get the Django application name.
For example for this path:
/lib/python2.6/site-packages/django/contrib/auth/tests/auth_backends.py
Application name is `auth`.
I wonder how this Django application name could be programmatically calculated
for a specific filename inside an app.
Background: I want to integrate calling Django test management command from a
vim editor. It should run tests for an app currently edited file belongs.
Answer: You can get module by filename with
[`__import__`](http://docs.python.org/library/functions.html#__import__). You
can get it's package name with `__package__` attribute. Then you can check
which app from `settings.INSTALLED_APPS` is a substring of package name. It's
your app.
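A minimal sketch of that idea, matching path components against
`settings.INSTALLED_APPS` rather than actually importing the file
(`app_for_file` is a hypothetical helper name):

import os
from django.conf import settings

def app_for_file(path):
    parts = os.path.normpath(path).split(os.sep)
    for app in settings.INSTALLED_APPS:
        # an app label like 'django.contrib.auth' ends in 'auth'
        if app.split('.')[-1] in parts:
            return app
    return None

print app_for_file('/lib/python2.6/site-packages/django/contrib/auth/tests/auth_backends.py')
# -> 'django.contrib.auth'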
|
Run Time Error (exit status 1) when submitting puzzle in Python
Question: I have python 2.7 installed on my windows computer. I'm trying to email a
puzzle answer to Spotify, which is running Python 2.6.6. When I submit my *.py
source code, I'm getting the following error:
Run Time Error
Exited, exit status: 1
I only have "import sys". I've run tons of stress tests - possible inputs are
1 ≤ m ≤ 10 000 lines, I've tested with 1 million+ values with zero problems.
I've tried printing with print & sys.stdout.write.
When I send in dummy test code (I run my full algorithm but only print
garbage instead of my answer - i.e., print "test!"), I get the expected "Wrong
Answer" back.
I have no idea where to start debugging - any tips/help at all?
Thanks! -Sam
Answer: I got the same error. As I see it, it's not python output but just an answer
from the spotify bot saying that your program threw an exception in some tests.
Maybe the real output isn't shown to prevent debugging using the bot.
When you print dummy data the first test fails and you get 'Wrong Answer'.
When you print real output the first test may pass but the next throws an
exception and you get 'Run Time Error'.
I fixed one defect with possible exception in my script and Run Time Error
went away.
|
Pyglet: equivalent of pygame.Rect
Question: I am contemplating migrating from pygame to pyglet (main reason: move from
Python to Pypy). However, I found no rectangle collision tools in the pyglet
doc, while I use pygame.Rect quite often.
Do you know how pyglet deals with rectangle collision (maybe with OpenGl
funcs, but I do not know them) ?
Thanks
Answer: Pyglet doesn't have this system; you will have to implement it on your
own. You could possibly import just pygame's Rect and use it within pyglet.
Pyglet is only an opengl interface, not a game toolkit.
(this was a hurdle for me way back when, but you'll get over it. stick with
it, pyglet is the right direction to go from pygame)
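If you'd rather not pull in pygame just for its Rect, an axis-aligned
rectangle with the usual collision checks is only a few lines. A minimal
sketch (names modeled on pygame.Rect; this is not part of pyglet):

class Rect(object):
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height

    def colliderect(self, other):
        # Rectangles overlap unless one lies entirely to one side of the other.
        return (self.x < other.x + other.width and
                self.x + self.width > other.x and
                self.y < other.y + other.height and
                self.y + self.height > other.y)

    def collidepoint(self, px, py):
        return (self.x <= px < self.x + self.width and
                self.y <= py < self.y + self.height)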
|
Python 2.5 time.time() as decimal
Question: Is it possible to receive the output of time.time() in Python 2.5 as a
Decimal?
If not (and it has to be a float), then is it possible to guarantee that
inaccuracy will always be more than (rather than less than) the original
value. In other words:
>>> repr(0.1)
'0.10000000000000001' # More than 0.1 which is what I want
>>> repr(0.99)
'0.98999999999999999' # Less than 0.99 which is unacceptable
Code example:
import math, time
sleep_time = 0.1
while True:
time_before = time.time()
time.sleep(sleep_time)
time_after = time.time()
time_taken = time_after - time_before
assert time_taken >= sleep_time, '%r < %r' % (time_taken, sleep_time)
**EDIT:**
Now using the following (which does not fail in testing but could still
theoretically fail):
import time
from decimal import Decimal
def to_dec(float_num):
return Decimal('%2f' % float_num)
sleep_time = to_dec(0.1)
while True:
time_before = to_dec(time.time())
time.sleep(float(sleep_time))
time_after = to_dec(time.time())
time_taken = time_after - time_before
assert time_taken >= sleep_time, '%r < %r' % (time_taken, sleep_time)
print 'time_taken (%s) >= sleep_time (%s)' % (time_taken, sleep_time)
Answer: You could simply multiple `time.time()` by some value to get the precision you
want (note that many calls can't guarantee sub-second accuracy anyways). So,
startTime = int(time.time() * 100)
#...
endTime = int(time.time() * 100)
Will satisfy your condition that `endTime - startTime >= sleepTime`
|
syntax error with KeyError in python 3.2
Question: I'm a beginner using python 3.2 and I have a book whose code is all in python
2.6. I wrote part of a program and keep getting: Syntax Error: invalid syntax.
Then python's IDLE highlights the comma after KeyError in my code:
from tank import Tank
tanks = { "a":Tank("Alice"), "b":Tank("Bob"), "c":Tank("Carol")}
alive_tanks = len(tanks)
while alive_tanks > 1:
print
for tank_name in sorted( tanks.keys() ):
print (tank_name, tanks[tank_name])
first = raw_input("Who fires? ").lower()
second = raw_input("Who at? ").lower()
try:
first_tank = tanks[first]
second_tank = tanks[second]
except KeyError, name:
print ("No such tank exists!", name)
continue
Answer: Instead of
except KeyError, name:
try
except KeyError as name:
It's a difference between Python 2.x and Python 3.x. The first form is no
longer supported.
|
Converting a parent child relationship into Json in python
Question: I have a list of lists like the one below. The first column is the parent, the
second is the child, and the remaining columns are node attributes. I need to
convert it to a JSON format like the following.
0 0 "flair" 1000
0 1 "analytics" 1000
1 2 "cluster" 1000
2 3 "AgglomerativeCluster" 1000
2 4 "CommunityStructure" 1000
1 5 "Graph" 1000
5 6 "BetweennessCentrality" 1000
5 7 "LinkDistance"
* * *
pc = []
pc.append([0, 0 ,"flair", 1000])
pc.append([0,1, "analytics", 1000])
pc.append([1, 2, "cluster", 1000])
pc.append([2 ,3, "AgglomerativeCluster", 1000])
pc.append([2 ,4, "CommunityStructure" ,1000])
pc.append([1 ,5, "Graph", 1000])
pc.append([5, 6, "BetweennessCentrality", 1000])
pc.append([5, 7, "LinkDistance",1000])
{
"name": "flare",
"children": [
{
"name": "analytics",
"children": [
{
"name": "cluster",
"children": [
{"name": "AgglomerativeCluster", "size": 3938},
{"name": "CommunityStructure", "size": 3812},
]
},
{
"name": "graph",
"children": [
{"name": "BetweennessCentrality", "size": 3534},
{"name": "LinkDistance", "size": 5731}
]
}
]
}
]
}
Answer: A little change to your input: for the root node "flair", I use -1 as its
parent id instead of 0.
import json
pc = []
pc.append([-1, 0 ,"flair", 1000])
pc.append([0,1, "analytics", 1000])
pc.append([1, 2, "cluster", 1000])
pc.append([2 ,3, "AgglomerativeCluster", 1000])
pc.append([2 ,4, "CommunityStructure" ,1000])
pc.append([1 ,5, "Graph", 1000])
pc.append([5, 6, "BetweennessCentrality", 1000])
pc.append([5, 7, "LinkDistance",1000])
def listToDict(input):
root = {}
lookup = {}
for parent_id, id, name, attr in input:
if parent_id == -1:
root['name'] = name;
lookup[id] = root
else:
node = {'name': name}
lookup[parent_id].setdefault('children', []).append(node)
lookup[id] = node
return root
result = listToDict(pc)
print result
print json.dumps(result)
|
random string in python
Question: I am trying to make a script that will generate a random string of text when I
run it. I have gotten far, but I'm having a problem with formatting.
Here is the code I'm using:
import random
alphabet = 'abcdefghijklmnopqrstuvwxyz'
min = 5
max = 15
name = random.sample(alphabet,random.randint(min,max))
print name
And whenever I run it I end up with this:
['i', 'c', 'x', 'n', 'y', 'b', 'g', 'r', 'h', 'p', 'w', 'o']
I am trying to format it so that it is a single string, for example:
['i', 'c', 'x', 'n', 'y', 'b', 'g', 'r', 'h', 'p', 'w', 'o'] = icxnybgrhpwo
Answer: [`join()`](http://docs.python.org/library/string.html#string.join) it:
>>> name = ['i', 'c', 'x', 'n', 'y', 'b', 'g', 'r', 'h', 'p', 'w', 'o']
>>> ''.join(name)
'icxnybgrhpwo'
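As an aside, `random.sample` never repeats a letter, so `name` can never be
longer than the alphabet. If repeated letters are acceptable, a sketch using
`random.choice` instead:

import random

alphabet = 'abcdefghijklmnopqrstuvwxyz'
name = ''.join(random.choice(alphabet) for _ in xrange(random.randint(5, 15)))
print name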
|
error in matplotlib library in python while using csv2rec
Question: I am working in Ipython, trying to load a csv file.
from matplotlib import *
data=matplotlib.mlab.csv2rec('helix.csv',delimiter='\t')
Here is the error message
IOError Traceback (most recent call last)
/mnt/hgfs/docs/python/<ipython console> in <module>()
/usr/lib/pymodules/python2.7/matplotlib/mlab.pyc in csv2rec(fname, comments, skiprows, checkrows, delimiter, converterd, names, missing, missingd, use_mrecords)
2125
2126 # reset the reader and start over
-> 2127 fh.seek(0)
2128 reader = csv.reader(fh, delimiter=delimiter)
2129 process_skiprows(reader)
IOError: [Errno 29] Illegal seek
Has someone already run into this error? I tried to re-install everything. I
am working with Python 2.7 and I have Matplotlib v0.99.3, Numpy v1.5.1,
IPython 0.10.1.
Answer: I tried with this file:
snp1,snp2,snp3
A,A,A
A,B,A
B,B,B
and here is the result:
In [3]: csv2rec('helix.csv')
Out[3]:
rec.array([('A', 'A', 'A'), ('A', 'B', 'A'), ('B', 'B', 'B')],
dtype=[('snp1', '|S1'), ('snp2', '|S1'), ('snp3', '|S1')])
I have matplotlib 1.0.1, so you might try updating it; I do not have access
to an older matplotlib for testing.
|
Python style: should I avoid commenting my import statements?
Question: I'll try to make this as closed-ended of a question as possible:
I often find that I have a group of imports that go together, like a bunch of
mathy imports; but later on I might delete or move the section of code which
uses those imported items to another file. The problem is that I often forget
why I was using a particular import (for example, I use the Counter class very
often, or random stuff from itertools.) For this reason I might like to have
comments that specify what my imports are for; that way if I no longer need
them, I can just delete the whole chunk.
Is it considered bad style to have comments in with my import statements?
Answer: Well, the beautiful thing about python is that it is (or should be) explicit.
In this case, so long as you're not doing * imports, (which is considered bad
practice), you'll KNOW whether an import is referenced simply by doing a grep
for the namespace. And then you'll know whether you can delete it.
Allow me to also add a counterpoint to the other answers. 'Too much'
commenting can _indeed_ be bad practice. You shouldn't be adding redundant
comments to code which is patently obvious as to its function. Additionally,
comments, just like code, must be maintained. If you're creating excessive
comments, you're creating a ton more work for yourself.
|
Why is __init__.py not being called?
Question: I'm using Python 2.7 and have the following files:
./__init__.py
./aoeu.py
`__init__.py` has the following contents:
aoeu aoeuaoeu aoeuaoeuaoeu
so I would expect running aoeu.py to error when Python tries to load
`__init__.py`, but it doesn't. The behavior is the same whether PYTHONPATH is
set to '.' or unset.
What's going on?
Answer: `__init__.py` makes the enclosing directory a package. It won't be executed
unless you actually try to import the package directly.
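For example, assuming the directory is named `mypkg` (a hypothetical name)
and its parent directory is on `sys.path`:

import mypkg            # executes mypkg/__init__.py -> SyntaxError here
from mypkg import aoeu  # same: the package's __init__.py runs first

Running `python aoeu.py` executes the file as a top-level script, so the
package machinery (and `__init__.py`) is never involved.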
|
Last element in xml not getting picked up
Question: I have a python 3 script below that is supposed to download an xml file and
split it into smaller files with only 500 items each. I am having two
problems:
1. the last item in the original xml is not present in the split files
2. if the original xml was 1000 items long it will create a 3rd empty xml file.
Can anyone tell me where there could be such an error in my code to cause
these symptoms?
import urllib.request as urllib2
from lxml import etree
def _yield_str_from_net(url, car_tag):
xml_file = urllib2.urlopen(url)
for _, element in etree.iterparse(xml_file, tag=car_tag):
yield etree.tostring(element, pretty_print=True).decode('utf-8')
element.clear()
def split_xml(url, car_tag, save_as):
output_file_num = 1
net_file_iter = _yield_str_from_net(url, car_tag)
while True:
file_name = "%s%s.xml" % (save_as, output_file_num)
print("Making %s" % file_name)
with open(file_name, mode='w', encoding='utf-8') as the_file:
for elem_count in range(500): # want only 500 items
try:
elem = next(net_file_iter)
except StopIteration:
return
the_file.write(elem)
print("processing element #%s" % elem_count)
output_file_num += 1
if __name__ == '__main__':
split_xml("http://www.my_xml_url.com/",
'my_tag',
'my_file')
Answer: The second one is not an error but by design. After reading 1000 elements the
iterator does not yet know that there is no further item and thus continues
with the `while True` loop.
It would be great if iterators would have a
[`hasNext`](http://stackoverflow.com/questions/1966591/hasnext-in-python-
iterators) then you could replace it by `while hasNext` in order to overcome
this issue. Unfortunately there is no such thing in python.
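The usual workaround is to fetch the next element before opening a file,
using `next()`'s default argument as a stand-in for `hasNext`. A sketch of the
question's loop restructured that way (it also stops the empty trailing file
from being created):

def split_xml(url, car_tag, save_as):
    output_file_num = 1
    net_file_iter = _yield_str_from_net(url, car_tag)
    elem = next(net_file_iter, None)  # peek before creating any file
    while elem is not None:
        file_name = "%s%s.xml" % (save_as, output_file_num)
        with open(file_name, mode='w', encoding='utf-8') as the_file:
            for _ in range(500):  # want only 500 items
                the_file.write(elem)
                elem = next(net_file_iter, None)
                if elem is None:
                    break
        output_file_num += 1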
For the first question: currently I can't see anything in your code explaining
this issue.
|
Retrieve a cookie set in Python serverside
Question: G'day, I'm following a guide found here:
<http://www.doughellmann.com/PyMOTW/Cookie/>
which has the code:
c = Cookie.SimpleCookie()
c.load(HTTP_COOKIE)
to retrieve a cookie previously set (by the server), but my server does not
have the HTTP_COOKIE variable, so how else can I do it?
I would prefer to continue using the above guide's method, but if there is
something far better I am willing to consider it.
Otherwise, I'm not using any frameworks (just raw .py files) and would like to
keep it that way.
Cheers
Answer: The way discussed in the comments is:
import os
def getcookies():
cookiesDict = {}
if 'HTTP_COOKIE' in os.environ:
cookies = os.environ['HTTP_COOKIE']
cookies = cookies.split('; ')
for cookie in cookies:
            cookie = cookie.split('=', 1)  # split only on the first '=': values may contain one
cookiesDict[cookie[0]] = cookie[1]
return cookiesDict
which would then return a dictionary of cookies as `key -> value`
cookies = getcookies()
userID = cookies['userID']
and obviously you would then add error handling
However there are also other methods, e.g., using the [`Cookie`
module](http://docs.python.org/2/library/cookie.html).
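With the `Cookie` module the same lookup looks like this (a sketch in the
spirit of the guide from the question; `userID` is a hypothetical cookie
name):

import os
import Cookie

c = Cookie.SimpleCookie()
c.load(os.environ.get('HTTP_COOKIE', ''))
if 'userID' in c:
    print c['userID'].value

`SimpleCookie` also takes care of quoting, so values containing '=' or ';'
are handled for you.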
|
Django recursive imports
Question: I have two apps: pt and tasks.
pt.models has a Member model. tasks.models has a Filters model.
Member model has a foreign key to Filters model (one for a member). Filters
has M2M field to Member as it holds some kind of filtering settings.
So, I must recursively import both models to get everything synced what is
impossible in Python.
Any ideas?
Answer: Again, circular imports are not an error in Python, only using names that
don't yet exist when doing so.
From [the
docs](https://docs.djangoproject.com/en/dev/ref/models/fields/#module-
django.db.models.fields.related):
> If you need to create a relationship on a model that has not yet been
> defined, you can use the name of the model, rather than the model object
> itself...
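A sketch of what that looks like here, assuming the app labels are `pt` and
`tasks`:

# pt/models.py
from django.db import models

class Member(models.Model):
    filters = models.ForeignKey('tasks.Filters', null=True, blank=True)

# tasks/models.py
from django.db import models

class Filters(models.Model):
    members = models.ManyToManyField('pt.Member')

Neither file imports the other, so the circularity disappears.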
|
Python 3 concurrent.futures socket server works with ThreadPoolExecutor but not ProcessPoolExecutor
Question: I am trying to create a simple socket server using the new concurrent.futures
classes. I can get it to work fine with ThreadPoolExecutor but it just hangs
when I use ProcessPoolExecutor and I can't understand why. Given the
circumstance, I thought it might have something to do with trying to pass
something to the child process that wasn't pickle-able but I don't that is the
case. A simplified version of my code is below. I would appreciate any advice.
import concurrent.futures
import socket, os
HOST = ''
PORT = 9001
def request_handler(conn, addr):
pid = os.getpid()
print('PID', pid, 'handling connection from', addr)
while True:
data = conn.recv(1024)
if not data:
print('PID', pid, 'end connection')
break
print('PID', pid, 'received:', bytes.decode(data))
conn.send(data)
conn.close()
def main():
with concurrent.futures.ProcessPoolExecutor(max_workers=4) as executor:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.bind((HOST, PORT))
sock.listen(10)
print("Server listening on port", PORT)
while True:
conn, addr = sock.accept()
executor.submit(request_handler, conn, addr)
conn.close()
print("Server shutting down")
if __name__ == '__main__':
main()
Answer: Good call Donkopotamus, you should have posted that as the answer.
I wrapped the handler up in a try-block and when it attempts to call
conn.recv() I got the exception: [Errno 10038] An operation was attempted on
something that is not a socket. Since the worker process was dying badly, it
deadlocked. With the try-block in place catching the error, I can continue to
connect as much as I want without any deadlock.
I do not get this error if I change the code to use a ThreadPoolExecutor. I
have tried several options and it looks like there is no way to properly pass
a socket to the worker process in a ProcessPoolExecutor. I have found other
references to this around the internet but they say it can be done on some
platforms but not others. I have tried it on Windows, Linux and MacOS X and it
doesn't work on any of them. This seems like a significant limitation.
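For reference, the wrapper that exposed the error is just (a sketch of the
diagnostic, not a fix for the underlying limitation):

import traceback

def request_handler(conn, addr):
    try:
        while True:
            data = conn.recv(1024)  # raises socket.error in the worker process
            if not data:
                break
            conn.send(data)
        conn.close()
    except Exception:
        traceback.print_exc()  # without this the worker dies and the pool deadlocks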
|
Exception handling in Python Tornado
Question: I am trying to handle exception occurred in `AsyncClient.fetch` in this way:
from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest
from tornado.stack_context import ExceptionStackContext
from tornado import ioloop
def handle_exc(*args):
print('Exception occured')
return True
def handle_request(response):
print('Handle request')
http_client = AsyncHTTPClient()
with ExceptionStackContext(handle_exc):
http_client.fetch('http://some123site.com', handle_request)
ioloop.IOLoop.instance().start()
and see next output:
WARNING:root:uncaught exception
Traceback (most recent call last):
File "/home/crchemist/python-3.2/lib/python3.2/site-packages/tornado-2.0-py3.2.egg/tornado/simple_httpclient.py", line 259, in cleanup
yield
File "/home/crchemist/python-3.2/lib/python3.2/site-packages/tornado-2.0-py3.2.egg/tornado/simple_httpclient.py", line 162, in __init__
0, 0)
socket.gaierror: [Errno -5] No address associated with hostname
Handle request
What am I doing wrong?
Answer: According to the [Tornado
documentation](http://www.tornadoweb.org/documentation/httpclient.html#tornado.httpclient.AsyncHTTPClient.fetch):
> If an error occurs during the fetch, the HTTPResponse given to the callback
> has a non-None error attribute that contains the exception encountered during
> the request. You can call `response.rethrow()` to throw the exception (if any)
> in the callback.
from tornado.httpclient import AsyncHTTPClient
from tornado.httpclient import HTTPRequest
from tornado.stack_context import ExceptionStackContext
from tornado import ioloop
import traceback
def handle_exc(*args):
print('Exception occured')
return True
def handle_request(response):
if response.error is not None:
with ExceptionStackContext(handle_exc):
response.rethrow()
else:
print('Handle request')
http_client = AsyncHTTPClient()
http_client.fetch('http://some123site.com', handle_request)
http_client.fetch('http://google.com', handle_request)
ioloop.IOLoop.instance().start()
* * *
The message you're seeing on the console is only a warning (sent through
`logging.warning`). It's harmless, but if it really bothers you, see the
[logging](http://docs.python.org/library/logging.html) module for how to
filter it.
|
Python binding to ImageMagick
Question: I am looking for a good Python binding to
[ImageMagick](http://www.imagemagick.org/), but there seem a lot of bindings
already. I am not sure that which of these is the right tool for my job. Can
you guys recommend me one?
Here is the list of my requirements and preferences (in order of importance):
1. Must be available on PyPI (to simplify our deployment)
2. Prefer `ctypes` over a C API extension, since we will move to PyPy soon
3. Pythonic API design and naming conventions
4. Good documentation (especially API references)
Answer: ~~I found the package
myself: [magickwand](http://pypi.python.org/pypi/magickwand) is a
`ctypes`-based ImageMagick binding for Python. Although it has no documentation
at all, it still satisfies most of my requirements.~~
Plus: I finally started my own project: [Wand](http://wand-py.org/).
|
celery get tasks count
Question: I am using Python celery+rabbitmq. I can't find a way to get the task count
for a given queue. Something like this:
celery.queue('myqueue').count()
Is it possible to get the task count from a certain queue?
One solution is to run an external command from my Python script:
"rabbitmqctl list_queues -p my_vhost"
and parse the results. Is that a good way to do this?
Answer: I suppose that using the rabbitmqctl command is not a good solution, especially
on my Ubuntu server, where rabbitmqctl can be executed only with root privileges.
By playing with pika objects I found working solution:
import pika
from django.conf import settings
def tasks_count(queue_name):
''' Connects to message queue using django settings and returns count of messages in queue with name queue_name. '''
credentials = pika.PlainCredentials(settings.BROKER_USER, settings.BROKER_PASSWORD)
parameters = pika.ConnectionParameters( credentials=credentials,
host=settings.BROKER_HOST,
port=settings.BROKER_PORT,
virtual_host=settings.BROKER_VHOST)
connection = pika.BlockingConnection(parameters=parameters)
channel = connection.channel()
queue = channel.queue_declare(queue=queue_name, durable=True)
message_count = queue.method.message_count
return message_count
I did not find documentation about inspecting an AMQP queue with pika, so I
cannot vouch for the solution's correctness.
|
problem with deserialization xml to objects - unwanted split by special chars
Question: I am trying to deserialize XML to objects, and I ran into a problem with the
encoding of various items in the XML tree.
**XML example:**
<?xml version="1.0" encoding="utf-8"?>
<results>
<FlightTravel>
<QuantityOfPassengers>6</QuantityOfPassengers>
<Id>N5GWXM</Id>
<InsuranceId>330992</InsuranceId>
<TotalTime>3h 00m</TotalTime>
<TransactionPrice>540.00</TransactionPrice>
<AdditionalPrice>0</AdditionalPrice>
<InsurancePrice>226.56</InsurancePrice>
<TotalPrice>9561.31</TotalPrice>
<CompanyName>XXXXX</CompanyName>
<TaxID>111-11-11-111</TaxID>
<InvoiceStreet>Jagiellońska</InvoiceStreet>
<InvoiceHouseNo>8</InvoiceHouseNo>
<InvoiceZipCode>Jagiellońska</InvoiceZipCode>
<InvoiceCityName>Warszawa</InvoiceCityName>
<PayerStreet>Jagiellońska</PayerStreet>
<PayerHouseNo>8</PayerHouseNo>
<PayerZipCode>11-111</PayerZipCode>
<PayerCityName>Warszawa</PayerCityName>
<PayerEmail>[email protected]</PayerEmail>
<PayerPhone>123123123</PayerPhone>
<Segments>
<Segment0>
<DepartureAirport>WAW</DepartureAirport>
<DepartureDate>śr. 06 lip</DepartureDate>
<DepartureTime>07:50</DepartureTime>
<ArrivalAirport>VIE</ArrivalAirport>
<ArrivalDate>śr. 06 lip</ArrivalDate>
<ArrivalTime>09:15</ArrivalTime>
</Segment0>
<Segment1>
<DepartureAirport>VIE</DepartureAirport>
<DepartureDate>śr. 06 lip</DepartureDate>
<DepartureTime>10:00</DepartureTime>
<ArrivalAirport>SZG</ArrivalAirport>
<ArrivalDate>śr. 06 lip</ArrivalDate>
<ArrivalTime>10:50</ArrivalTime>
</Segment1>
</Segments>
</FlightTravel>
</results>
**XML Deserialization function in python:**
# -*- coding: utf-8 -*-
from lxml import etree
import codecs
class TitleTarget(object):
def __init__(self):
self.text = []
def start(self, tag, attrib):
self.is_title = True #if tag == 'Title' else False
def end(self, tag):
pass
def data(self, data):
if self.is_title:
self.text.append(data)
def close(self):
return self.text
parser = etree.XMLParser(target = TitleTarget())
infile = 'Flights.xml'
results = etree.parse(infile, parser)
out = open('wynik.txt', 'w')
out.write('\n'.join(results))
out.close()
**Output:**
['6', 'N5GWXM', '330992', '3h 00m', '540.00 ', '0', '226.56', '9561.31',
'XXXXX', '111-11-11-111', 'Jagiello', 'ń', 'ska', '8', 'Jagiello', 'ń', 'ska',
'Warszawa', 'Jagiello', 'ń', 'ska', '8', '11-111', 'Warszawa', 'no-
[email protected]', '123123123', 'WAW', 'ś', 'r. 06 lip', '07:50', 'VIE', 'ś', 'r.
06 lip', '09:15', 'VIE', 'ś', 'r. 06 lip', '10:00', 'SZG', 'ś', 'r. 06 lip',
'10:50']
The item '**Jagiellońska**' contains the special char '**ń**'. When the parser
appends data to the array, the char 'ń' acts as some kind of split character,
and my question is why this is happening. The rest of the items are appended to
the array correctly. The item 'śr. 06 lip' shows exactly the same behaviour.
Answer: The problem is that the `data` method of your target class may be called more
than once per element. This may happen if the feeder crosses a block boundary,
for example. It also seems to happen when it hits a non-ASCII character. This
behaviour is long-standing, but I can't find where it is documented. However,
if you change your target class to something like the following, it will work.
I have tested it on your data.
class TitleTarget(object):
def __init__(self):
self.text = []
def start(self, tag, attrib):
self.is_title = True #if tag == 'Title' else False
if self.is_title:
self.text.append(u'')
def end(self, tag):
pass
def data(self, data):
if self.is_title:
self.text[-1] += data
def close(self):
return self.text
To get a better grasp of what your output is like, do `print repr(results)`
after the parse call. You should now see such pieces of unsplit text as
u'Jagiello\u0144ska\n '
u'\u015br. 06 lip\n '
|
ImportError: No module named paramiko
Question: I have installed "python-paramiko" and "python-pycrypto" on Red Hat Linux. But
still, when I run the sample program I get "ImportError: No module named
paramiko".
I checked the installed packages using the command below and confirmed they are present.
ncmdvstk:~/pdem $ rpm -qa | grep python-p
python-paramiko-1.7.6-1.el3.rf
python-pycrypto-2.3-1.el3.pp
My sample program, which gives the import error:
import paramiko
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(
paramiko.AutoAddPolicy())
ssh.connect('127.0.0.1', username='admin',
password='admin')
Answer: It turned out all these packages were installed outside the Python folder. All
I did was link the packages from the Python folder to the packages folder, and
it worked perfectly.
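If you hit the same problem, a quick way to see where the interpreter actually
looks for modules (and therefore where the link needs to point) is:

import sys
for path in sys.path:
    print path

If the directory containing the paramiko package is not in that list, either
link it into one of the listed directories or append its location to sys.path.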
|
Module import Error Python
Question: I just installed **lxml** for parsing XML files in Python. I am using
**TextMate** as an IDE. The problem is that when I try to import lxml (`from
lxml import etree`) I get
**ImportError**: 'No module named lxml'
But when I use **Terminal** then everything is fine
Python 2.7.2 (v2.7.2:8527427914a2, Jun 11 2011, 15:22:34)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from lxml import etree
>>> root=etree.element("root")
>>> root=etree.Element("root")
>>> print (root.tag)
root
>>> root.append(etree.Element("child1"))
>>> child2 = etree.SubElement(root, "child2")
>>> child3 = etree.SubElement(root, "child3")
>>> print (etree.tostring(root,pretty_print=True))
<root>
<child1/>
<child2/>
<child3/>
</root>
It's pretty weird. Does it have something to do with TextMate?
Suggestions, please!
Answer: This most probably means that you have more than one python installation on
your system, and that TextMate and the Terminal are using different ones by
default.
One workaround: In your python file, you can specify an interpreter directive
to point to the python installation (and executable) of your choice:
#!/usr/local/bin/python
# Even though the standard python is in /usr/bin/python, here we want another ...
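To confirm which interpreter each environment is using, run this snippet once
from TextMate and once from the Terminal and compare the output:

import sys
print sys.executable  # path of the interpreter actually running
print sys.version

If the two paths differ, install lxml into the interpreter that TextMate uses,
or point TextMate at the one that already has it.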
|
iotop script does not work via custom script execution
Question: I have CSF installed (configure safe firewall), it has a function to allow you
to have custom scripts executed on load average events.
My script:
#!/usr/bin/env bash
iotop -bto --iter=1 2>&1 | mail -s "$HOSTNAME iotop output" incidents@
It works fine via bash shell but when executed by lfd (the monitoring process
of CSF), I get the following output:
Traceback (most recent call last):
File "/usr/bin/iotop", line 9, in <module>
from iotop.ui import main
File "/usr/lib/python2.6/site-packages/iotop/ui.py", line 13, in
<module>
from iotop.data import find_uids, TaskStatsNetlink, ProcessList, Stats
File "/usr/lib/python2.6/site-packages/iotop/data.py", line 36, in
<module>
from iotop import ioprio, vmstat
File "/usr/lib/python2.6/site-packages/iotop/ioprio.py", line 52, in
<module>
__NR_ioprio_get = find_ioprio_syscall_number(IOPRIO_GET_ARCH_SYSCALL)
File "/usr/lib/python2.6/site-packages/iotop/ioprio.py", line 38, in
find_ioprio_syscall_number
bits = platform.architecture()[0]
File "/usr/lib64/python2.6/platform.py", line 1073, in architecture
output = _syscmd_file(executable, '')
File "/usr/lib64/python2.6/platform.py", line 1021, in _syscmd_file
rc = f.close()
IOError: [Errno 10] No child processes
Can anyone shed some light on this?
Answer: Internally it calls an equivalent of:
import os
import sys
f = os.popen('file -b "%s" 2> %s' % (sys.executable, os.devnull))
f.read()
f.close()
For `popen()` to work it has to be able to get the `SIGCHLD` signal telling it
that a child process exited. It seems that the environment that executes
`iotop` has a custom reaper process that intercepts `SIGCHLD` and prevents
python from getting notified about the process exiting. Thus when the function
calls `.close()`, python tries to kill the process that is already dead and
gets an error from the operating system.
If you cannot reconfigure the environment to let `SIGCHLD` pass, I think
you'll have to resort to ugly hacking.
Wrapping `iotop` in a script that monkey-patches `platform.architecture()`
with a function that always returns the same tuple (something like `('64bit',
'ELF')`; consult the output of the real `architecture()`) should let you progress.
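A minimal sketch of such a wrapper (the iotop entry-script path and the
returned tuple are assumptions; check them against your system):

#!/usr/bin/env python
# hypothetical wrapper: pin platform.architecture() to a fixed value so
# iotop never reaches the popen()-based detection that raises IOError
import platform
platform.architecture = lambda executable=None, bits='', linkage='': ('64bit', 'ELF')

# run iotop's real entry script in this (patched) interpreter
execfile('/usr/bin/iotop')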
Alternatively, you can just make a local copy of the `platform.py` file and
edit that directly, setting `PYTHONPATH` for the cron job to point to that new
file.
|
Decrypt in Python an encrypted message in Java
Question: I'm trying to decrypt in Python (with M2Crypto) an encrypted message generated
in Java with this [library](http://www.androidsnippets.com/encryptdecrypt-
strings)
My code (which I actually found here) works when decrypting messages it
encrypted itself, but not messages from the Java library; I get the following error:
EVPError: 'wrong final block length'
I have tried both *aes_128_cbc* and *aes_128_ecb* and I get the same error.
I guess the failure is that Java's result is ASCII-encoded and the Python code
is expecting some other encoding (since it works with base64), but I don't know
where to make the change (in my Python code). I'm open to using any other
Python encryption library.
Thanks
import M2Crypto
from base64 import b64encode, b64decode
ENC=1
DEC=0
def AES_build_cipher(key, iv, op=ENC):
""""""""
return M2Crypto.EVP.Cipher(alg='aes_128_cbc', key=key, iv=iv, op=op)
def AES_encryptor(key,msg, iv=None):
""""""
#Decode the key and iv
key = b64decode(key)
if iv is None:
iv = '\0' * 16
else:
iv = b64decode(iv)
# Return the encryption function
def encrypt(data):
cipher = AES_build_cipher(key, iv, ENC)
v = cipher.update(data)
v = v + cipher.final()
del cipher
v = b64encode(v)
return v
print "AES encryption successful\n"
return encrypt(msg)
def AES_decryptor(key,msg, iv=None):
""""""
#Decode the key and iv
key = b64decode(key)
print key
print
if iv is None:
iv = '\0' * 16
else:
iv = b64decode(iv)
# Return the decryption function
def decrypt(data):
data = b64decode(data)
cipher = AES_build_cipher(key, iv, DEC)
v = cipher.update(data)
v = v + cipher.final()
del cipher
return v
print "AES dencryption successful\n"
return decrypt(msg)
if __name__ == "__main__":
result = AES_decryptor(b64encode(SECRET_KEY), msg=encrypted_message)
Answer: What does "ascii encoded" mean? As you know, my code expected base64 input and
produced base64 output. Removing the calls to `b64decode` and `b64encode` in
the `encrypt` and `decrypt` functions will let you pass in raw data, then
it'll be up to you to decode the input from Java into raw bytes.
|
reading content from a cmd window via python
Question: I'm trying to connect to an existing cmd window and read its content.
It is an arbitrary cmd window and not a child process.
Any ideas how this can be done with python?
Thanks in advance, Omer.
Answer: **Note: the [original version of the
question](http://stackoverflow.com/revisions/7567680/1) asked how to read AND
write to a cmd window**
**Writing**
You can write to an existing command window with code such as:
from pywinauto import application
app = application.Application()
app.connect_(path= r"C:\WINDOWS\system32\cmd.exe")
dlg = app.top_window_()
dlg.TypeKeys('hello world')
Notes:
1. I installed the latest version of pywinauto into a Python 2.6 installation direct from the Mercurial repository with the command:
`pip install -e hg+https://code.google.com/p/pywinauto/#egg=pywinauto`
2. I would make this rather more robust than assuming the path to cmd.exe! Documentation on selecting the application is at <http://pywinauto.googlecode.com/hg/pywinauto/docs/HowTo.html>
**Reading**
Reading from an existing command window appears to be somewhat more difficult!
Someone on the [pywinauto-users mailing
list](https://lists.sourceforge.net/lists/listinfo/pywinauto-users) has got it
working & is offering to post a working example:
<http://thread.gmane.org/gmane.comp.python.pywinauto.user/249/focus=252> I
suggest you get in touch with him.
|
Does Python have C#/Java-style interfaces?
Question: I worked for a few months as a C# programmer, and got used to the idea of
~~generics/templates~~ interfaces, which I could pass to a library without
caring how the object was created.
I'm about to start on a relatively large project, probably in python (I've
written a lot of python before, but mostly my own code for data analysis etc),
and was wondering if there exists a similar concept in this language? I've
tried googling for it and not come up with much.
If the answer's no, that's fine, but in that case what do people generally do
instead?
Answer: > If the answer's no, that's fine, but in that case what do people generally
> do instead?
[Duck Typing](http://c2.com/cgi/wiki?DuckTyping).
What's important is to approach Python by dropping the technical baggage of
C#.
Learn Python as a **new** language. Don't try to map concepts between Python
and C#. That way lies madness.
* * *
> "INTERFACES, not generics, or templates"
Doesn't matter. All that static type declaration technology isn't necessary.
For that matter type casting in order to break the rules isn't necessary
either.
|
Python datetime randomly breaking
Question: This isn't the first time this has happened to me so now I'm looking for an
answer because I'm completely stumped.
I have code running in a production environment for over 3 months now and it
worked absolutely fine, then out of no where I started to get errors in
python.
'method_descriptor' object has no attribute 'today'
Exception Value:
'method_descriptor' object has no attribute 'today'
Exception Location: /admin/views/create.py in process, line 114
/admin/views/create.py in process
order = Orders(uid=0, accepted=0, canview='', files=0, date=datetime.date.today(), due=dueDate,
As you can see, I'm using the following which works absolutely fine from the
python shell:
>>> import datetime
>>> datetime.date.today()
>>> datetime.date(2011, 9, 27)
Answer: Your code is importing `datetime.datetime` somewhere, instead of just
`datetime`, e.g. `from datetime import datetime`.
>>> import datetime
>>> datetime.datetime.date.today()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'method_descriptor' object has no attribute 'today'
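Side by side, the two import styles behave like this:

>>> import datetime
>>> datetime.date.today()        # module access: works
datetime.date(2011, 9, 27)
>>> from datetime import datetime
>>> datetime.date.today()        # the name is now the class: fails
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'method_descriptor' object has no attribute 'today'
>>> datetime.now().date()        # the class spells it this way
datetime.date(2011, 9, 27)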
|
Updating a tk ProgressBar from a multiprocess.proccess in python3
Question: I have successfully created a threading example of a thread which can update a
Progressbar as it goes. However doing the same thing with multiprocessing has
so far eluded me. I'm beginning to wonder if it is possible to use tkinter in
this way. Has anyone done this?
I am running on OS X 10.7. I know from looking around that different OS's may
behave very differently, especially with multiprocessing and tkinter.
I have tried a producer which talks directly to the widget, through both
namespaces and event.wait, and event.set. I have done the same thing with a
producer talking to a consumer which is either a method or function which
talks to the widget. All of these things successfully run, but do not update
the widget visually. Although I have done a get() on the IntVar the widget is
bound to and seen it change, both when using widget.step() and/or
widget.set(). I have even tried running a separate tk() instance inside the
sub process. Nothing updates the Progressbar.
Here is one of the simpler versions. The sub process is a method on an object
that is a wrapper for the Progressbar widget. The tk GUI runs as the main
process. I also find it a little odd that the widget does not get destroyed at
the end of the loop, which is probably a clue I'm not understanding the
implications of.
import multiprocessing
from tkinter import *
from tkinter import ttk
import time
root = Tk()
class main_window:
def __init__(self):
self.dialog_count = 0
self.parent = root
self.parent.title('multiprocessing progress bar')
frame = ttk.Labelframe(self.parent)
frame.pack(pady=10, padx=10)
btn = ttk.Button(frame, text="Cancel")
btn.bind("<Button-1>", self.cancel)
btn.grid(row=0, column=1, pady=10)
btn = ttk.Button(frame, text="progress_bar")
btn.bind("<Button-1>", self.pbar)
btn.grid(row=0, column=2, pady=10)
self.parent.mainloop()
def pbar(self, event):
name="producer %d" % self.dialog_count
self.dialog_count += 1
pbar = pbar_dialog(self.parent, title=name)
event = multiprocessing.Event()
p = multiprocessing.Process(target=pbar.consumer, args=(None, event))
p.start()
def cancel(self, event):
self.parent.destroy()
class pbar_dialog:
toplevel=None
pbar_count = 0
def __init__(self, parent, ns=None, event=None, title=None, max=100):
self.ns = ns
self.pbar_value = IntVar()
self.max = max
pbar_dialog.pbar_count += 1
self.pbar_value.set(0)
if not pbar_dialog.toplevel:
pbar_dialog.toplevel= Toplevel(parent)
self.frame = ttk.Labelframe(pbar_dialog.toplevel, text=title)
#self.frame.pack()
self.pbar = ttk.Progressbar(self.frame, length=300, variable=self.pbar_value)
self.pbar.grid(row=0, column=1, columnspan=2, padx=5, pady=5)
btn = ttk.Button(self.frame, text="Cancel")
btn.bind("<Button-1>", self.cancel)
btn.grid(row=0, column=3, pady=10)
self.frame.pack()
def set(self,value):
self.pbar_value.set(value)
def step(self,increment=1):
self.pbar.step(increment)
print ("Current", self.pbar_value.get())
def cancel(self, event):
self.destroy()
def destroy(self):
self.frame.destroy()
pbar_dialog.pbar_count -= 1
if pbar_dialog.pbar_count == 0:
pbar_dialog.toplevel.destroy()
def consumer(self, ns, event):
for i in range(21):
#event.wait(2)
self.step(5)
#self.set(i)
print("Consumer", i)
self.destroy()
if __name__ == '__main__':
main_window()
**For contrast, here is the threading version which works perfectly.**
import threading
from tkinter import *
from tkinter import ttk
import time
root = Tk()
class main_window:
def __init__(self):
self.dialog_count = 0
self.parent = root
self.parent.title('multiprocessing progress bar')
frame = ttk.Labelframe(self.parent)
frame.pack(pady=10, padx=10)
btn = ttk.Button(frame, text="Cancel")
btn.bind("<Button-1>", self.cancel)
btn.grid(row=0, column=1, pady=10)
btn = ttk.Button(frame, text="progress_bar")
btn.bind("<Button-1>", self.pbar)
btn.grid(row=0, column=2, pady=10)
self.parent.mainloop()
def producer(self, pbar):
i=0
while i < 101:
time.sleep(1)
pbar.step(1)
i += 1
pbar.destroy()
def pbar(self, event):
name="producer %d" % self.dialog_count
self.dialog_count += 1
pbar = pbar_dialog(self.parent, title=name)
p = threading.Thread(name=name, target=self.producer, args=(pbar,))
p.start()
#p.join()
def cancel(self, event):
self.parent.destroy()
class pbar_dialog:
toplevel=None
pbar_count = 0
def __init__(self, parent, ns=None, event=None, title=None, max=100):
self.ns = ns
self.pbar_value = IntVar()
self.title = title
self.max = max
pbar_dialog.pbar_count += 1
if not pbar_dialog.toplevel:
pbar_dialog.toplevel= Toplevel(parent)
self.frame = ttk.Labelframe(pbar_dialog.toplevel, text=title)
#self.frame.pack()
self.pbar = ttk.Progressbar(self.frame, length=300, variable=self.pbar_value)
self.pbar.grid(row=0, column=1, columnspan=2, padx=5, pady=5)
btn = ttk.Button(self.frame, text="Cancel")
btn.bind("<Button-1>", self.cancel)
btn.grid(row=0, column=3, pady=10)
self.frame.pack()
self.set(0)
def set(self,value):
self.pbar_value.set(value)
def step(self,increment=1):
self.pbar.step(increment)
def cancel(self, event):
self.destroy()
def destroy(self):
self.frame.destroy()
pbar_dialog.pbar_count -= 1
if pbar_dialog.pbar_count == 0:
pbar_dialog.toplevel.destroy()
pbar_dialog.toplevel = None
def automatic(self, ns, event):
for i in range(1,100):
self.step()
if __name__ == '__main__':
main_window()
Answer: Doing something similar, I ended up having to use a combination of threads and
processes. The GUI front end had two threads: one for tkinter, and one reading
from a multiprocessing.Queue and calling gui.update(). The back-end processes
would then write their updates into that Queue.
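A minimal sketch of that shape, with one simplification: instead of a reader
thread, the GUI process polls the Queue with after(), which is also safe
because only the tkinter process touches widgets (the widget names here are
assumptions, not the question's code):

import multiprocessing
import queue
import time
import tkinter as tk
from tkinter import ttk

def worker(q):
    # runs in a separate process; reports progress through the Queue
    for i in range(101):
        time.sleep(0.05)
        q.put(i)

def poll(root, bar, q):
    # runs in the GUI process: drain pending updates, then re-arm the poll
    try:
        while True:
            bar['value'] = q.get_nowait()
    except queue.Empty:
        pass
    root.after(100, poll, root, bar, q)

if __name__ == '__main__':
    q = multiprocessing.Queue()
    root = tk.Tk()
    bar = ttk.Progressbar(root, length=300, maximum=100)
    bar.pack(padx=10, pady=10)
    multiprocessing.Process(target=worker, args=(q,)).start()
    poll(root, bar, q)
    root.mainloop()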
|
Python expand OSX path with spaces in it
Question: I'm trying to modify the plist file at:
`/Volumes/MacintoshHD/Users/christian/Library/Application Support/iPhone
Simulator/4.3.2/Library/Preferences/com.apple.Accessibility.plist`
Here's my noob python script:
import plistlib
import os.path
#set path
prefs_path = os.path.expanduser("~/Library/Application\ Support/iPhone\ Simulator/5.0/Library/Preferences/com.apple.Accessibility.plist")
#parse
prefs = plistlib.readPlist(prefs_path)
I get `IOError: 2, 'No such file or directory'`
If I remove the backslashes from the path I get `ExpatError: 'not well-formed
(invalid token): line 1, column 8'`
## Update
[Ignacio Vazquez-Abram's
answer](http://stackoverflow.com/questions/7578850/python-expand-osx-path-
with-spaces-in-it/7578921#7578921) suggests that the file is corrupted. It's
still editable with Xcode and viewable with Quicklook. Also the simulator
works fine.
When I open it in textmate all I see is this:
bplist00ÿ
D _ApplicationAccessibilityEnabled_VOTQuickNavEnabled_AccessibilityEnabled]ScreenCurtain_"VoiceOverTouchRotorItemsPreference_AXInspector.enabled_AXInspector.frame_AXInspectorEnabled Ø"%(+.147:=@“ YRotorItemWEnabledYCharacter “ TWord “ TLine “ VHeader “ TLink “ [FormElement “# ZTableStart “&YListStart“)YLandmarks“,[VisitedLink“/^NonVisitedLink“2VButton“5YTextField“8UImage“;ZStaticText“>\InternalLink“ATZoom_{{0, 0}, {276, 166}} ;Pguö∞ƒŸ⁄€‹›Òˆ#().56;@AFRSXcdistyÉÑâïñõ™´∞∑∏Ω«»Õ”‘Ÿ‰Â͘¯˝F
Are the iPhone simulator plists not editable by python or is this a corrupted
file? Is there a way around this?
Answer: The expat error means that you've gotten to the file, but there's something
wrong with it. Open the file and verify that it's not been corrupted.
**EDIT:**
Aaah, it's a binary plist. Yeah, `plistlib` can't handle those. You want
[`libplist`](https://github.com/JonathanBeck/libplist) instead.
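If you want to stay with the standard library, one workaround on OS X is to
convert the file to XML with plutil first (note that no backslash escaping is
needed in the path; that only applies to shells):

import os.path
import plistlib
import subprocess

prefs_path = os.path.expanduser(
    "~/Library/Application Support/iPhone Simulator/5.0/"
    "Library/Preferences/com.apple.Accessibility.plist")

# rewrites the file in place as XML, which plistlib can read
subprocess.check_call(['plutil', '-convert', 'xml1', prefs_path])
prefs = plistlib.readPlist(prefs_path)

Keep in mind this changes the file on disk; convert it back with `plutil
-convert binary1` if the original format matters.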
|
python subprocess dd and stdout
Question: I am using subprocess to create a random file from /dev/random using unix dd.
Now, I want the data output of dd to be written to a file instead of stdout, so
here's the code I am using:
import subprocess
out_fd = open('test_file','w')
def os_system_dd():
global out_fd
out_fd.write("executing the time dd command\n")
cmd_list = ['time','dd','if=/dev/random', 'of=/home/anand/sys_entropy_random', 'bs=1M' ,'count=5']
a = subprocess.Popen(cmd_list,stdout=out_fd)
a.wait()
if __name__ == '__main__':
os_system_dd()
This doesn't print the dd output to the file and instead prints it to stdout.
Is this specific behaviour of the dd command, or am I missing something about
how subprocess works?
Answer: `dd` outputs its debugging information on stderr, not stdout:
import subprocess
out_fd = open('test_file','w')
def os_system_dd():
out_fd.write("executing the time dd command\n")
cmd_list = ['time','dd','if=/dev/random', 'of=/home/anand/sys_entropy_random',
'bs=1M' ,'count=5']
a = subprocess.Popen(cmd_list,stderr=out_fd) # notice stderr
a.communicate()
if __name__ == '__main__':
os_system_dd()
|
Converting Numpy Array to OpenCV Array
Question: I'm trying to convert a 2D Numpy array, representing a black-and-white image,
into a 3-channel OpenCV array (i.e. an RGB image).
Based on [code
samples](https://code.ros.org/trac/opencv/browser/trunk/opencv/samples/python2/find_obj.py)
and [the
docs](http://opencv.willowgarage.com/documentation/cpp/miscellaneous_image_transformations.html#cv-
cvtcolor) I'm attempting to do this via Python like:
import numpy as np, cv
vis = np.zeros((384, 836), np.uint32)
h,w = vis.shape
vis2 = cv.CreateMat(h, w, cv.CV_32FC3)
cv.CvtColor(vis, vis2, cv.CV_GRAY2BGR)
However, the call to CvtColor() is throwing the following cpp-level Exception:
OpenCV Error: Image step is wrong () in cvSetData, file /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp, line 902
terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.1.0/src/cxcore/cxarray.cpp:902: error: (-13) in function cvSetData
Aborted
What am I doing wrong?
Answer: Your code can be fixed as follows:
import numpy as np, cv
vis = np.zeros((384, 836), np.float32)
h,w = vis.shape
vis2 = cv.CreateMat(h, w, cv.CV_32FC3)
vis0 = cv.fromarray(vis)
cv.CvtColor(vis0, vis2, cv.CV_GRAY2BGR)
Short explanation:
1. `np.uint32` data type is not supported by OpenCV (it supports `uint8`, `int8`, `uint16`, `int16`, `int32`, `float32`, `float64`)
2. `cv.CvtColor` can't handle numpy arrays, so both arguments have to be converted to an OpenCV type. `cv.fromarray` does this conversion.
3. Both arguments of `cv.CvtColor` must have the same depth. So I've changed the source type to 32-bit float to match the destination.
I also recommend using the newer version of the OpenCV Python API, because it
uses numpy arrays as its primary data type:
import numpy as np, cv2
vis = np.zeros((384, 836), np.float32)
vis2 = cv2.cvtColor(vis, cv2.COLOR_GRAY2BGR)
|
os.walk multiple directories at once
Question: > **Possible Duplicate:**
> [How to join two generators in
> Python?](http://stackoverflow.com/questions/3211041/how-to-join-two-
> generators-in-python)
Is there a way in python to use os.walk to traverse multiple directories at
once?
my_paths = []
path1 = '/path/to/directory/one/'
path2 = '/path/to/directory/two/'
for path, dirs, files in os.walk(path1, path2):
my_paths.append(dirs)
The above example doesn't work (as os.walk only accepts one directory), but I
was hoping for a more elegant solution rather than calling os.walk twice (plus
then I can sort it all at once). Thanks.
Answer: To treat multiples iterables as one, use
[`itertools.chain`](http://docs.python.org/library/itertools.html#itertools.chain):
from itertools import chain
import os

paths = ('/path/to/directory/one/', '/path/to/directory/two/', 'etc.', 'etc.')
for path, dirs, files in chain.from_iterable(os.walk(p) for p in paths):
    my_paths.append(dirs)  # collect results exactly as in the single-path loop
|
Python non-ascii characters
Question: I have a python file that creates and populates a table in ms sql. The only
sticking point is that the code breaks if there are any non-ascii characters
or single apostrophes (and there are quite a few of each). Although I can run
the replace function to rid the strings of apostrophes, I would prefer to keep
them intact. I have also tried converting the data into utf-8, but no luck
there either.
Below are th error messages I get:
"'ascii' codec can't encode character u'\2013' in position..." (for non-ascii characters)
and for the single quotes
class 'pyodbc.ProgrammingError'>: ('42000', "[42000] [Microsoft][ODBC SQL Server Driver][SQL Server] Incorrect syntax near 'S, 230 X 90M.; Eligibilty....
When I try to encode string in utf-8, I instead get the following error
message:
<type 'exceptions.UnicodeDecodeError'>: ascii' codec can't decode byte 0xe2 in position 219: ordinal not in range(128)
The python code is included below. I believe the point in the code where this
break occurs is after the following line: InsertValue =
str(row.GetValue(CurrentField['Name'])).
# -*- coding: utf-8 -*-
import pyodbc
import sys
import arcpy
import arcgisscripting
gp = arcgisscripting.create(9.3)
SQL_KEYWORDS = ['PERCENT', 'SELECT', 'INSERT', 'DROP', 'TABLE']
#SourceFGDB = '###'
#SourceTable = '###'
SourceTable = sys.argv[1]
TempInputName = sys.argv[2]
SourceTable2 = sys.argv[3]
#---------------------------------------------------------------------------------------------------------------------
# Target Database Settings
#---------------------------------------------------------------------------------------------------------------------
TargetDatabaseDriver = "{SQL Server}"
TargetDatabaseServer = "###"
TargetDatabaseName = "###"
TargetDatabaseUser = "###"
TargetDatabasePassword = "###"
# Get schema from FGDB table.
# This should be an ordered list of dictionary elements [{'FGDB_Name', 'FGDB_Alias', 'FGDB_Type', FGDB_Width, FGDB_Precision, FGDB_Scale}, {}]
if not gp.Exists(SourceTable):
print ('- The source does not exist.')
sys.exit(102)
#### Should see if it is actually a table type. Could be a Feature Data Set or something...
print(' - Processing Items From : ' + SourceTable)
FieldList = []
Field_List = gp.ListFields(SourceTable)
print(' - Getting number of rows.')
result = gp.GetCount_management(SourceTable)
Number_of_Features = gp.GetCount_management(SourceTable)
print(' - Number of Rows: ' + str(Number_of_Features))
print(' - Getting fields.')
Field_List1 = gp.ListFields(SourceTable, 'Layer')
Field_List2 = gp.ListFields(SourceTable, 'Comments')
Field_List3 = gp.ListFields(SourceTable, 'Category')
Field_List4 = gp.ListFields(SourceTable, 'State')
Field_List5 = gp.ListFields(SourceTable, 'Label')
Field_List6 = gp.ListFields(SourceTable, 'DateUpdate')
Field_List7 = gp.ListFields(SourceTable, 'OBJECTID')
for Current_Field in Field_List1 + Field_List2 + Field_List3 + Field_List4 + Field_List5 + Field_List6 + Field_List7:
print(' - Field Found: ' + Current_Field.Name)
if Current_Field.AliasName in SQL_KEYWORDS:
Target_Name = Current_Field.Name + '_'
else:
Target_Name = Current_Field.Name
print(' - Alias : ' + Current_Field.AliasName)
print(' - Type : ' + Current_Field.Type)
print(' - Length : ' + str(Current_Field.Length))
print(' - Scale : ' + str(Current_Field.Scale))
print(' - Precision: ' + str(Current_Field.Precision))
FieldList.append({'Name': Current_Field.Name, 'AliasName': Current_Field.AliasName, 'Type': Current_Field.Type, 'Length': Current_Field.Length, 'Scale': Current_Field.Scale, 'Precision': Current_Field.Precision, 'Unique': 'UNIQUE', 'Target_Name': Target_Name})
# Create table in SQL Server based on FGDB table schema.
cnxn = pyodbc.connect(r'DRIVER={SQL Server};SERVER=###;DATABASE=###;UID=sql_webenvas;PWD=###')
cursor = cnxn .cursor()
#### DROP the table first?
try:
DropTableSQL = 'DROP TABLE dbo.' + TempInputName + '_Test;'
print DropTableSQL
cursor.execute(DropTableSQL)
cnxn.commit()
except:
print('WARNING: Can not drop table - may not exist: ' + TempInputName + '_Test')
CreateTableSQL = ('CREATE TABLE ' + TempInputName + '_Test '
' (Layer varchar(500), Comments varchar(5000), State int, Label varchar(500), DateUpdate DATETIME, Category varchar(50), OBJECTID int)')
cursor.execute(CreateTableSQL)
cnxn.commit()
# Cursor through each row in the FGDB table, get values, and insert into the SQL Server Table.
# We got Number_of_Features earlier, just use that.
Number_Processed = 0
print(' - Processing ' + str(Number_of_Features) + ' features.')
rows = gp.SearchCursor(SourceTable)
row = rows.Next()
while row:
if Number_Processed % 10000 == 0:
print(' - Processed ' + str(Number_Processed) + ' of ' + str(Number_of_Features))
InsertSQLFields = 'INSERT INTO ' + TempInputName + '_Test ('
InsertSQLValues = 'VALUES ('
for CurrentField in FieldList:
InsertSQLFields = InsertSQLFields + CurrentField['Target_Name'] + ', '
InsertValue = str(row.GetValue(CurrentField['Name']))
if InsertValue in ['None']:
InsertValue = 'NULL'
# Use an escape quote for the SQL.
InsertValue = InsertValue.replace("'","' '")
if CurrentField['Type'].upper() in ['STRING', 'CHAR', 'TEXT']:
if InsertValue == 'NULL':
InsertSQLValues = InsertSQLValues + "NULL, "
else:
InsertSQLValues = InsertSQLValues + "'" + InsertValue + "', "
elif CurrentField['Type'].upper() in ['GEOMETRY']:
## We're not handling geometry transfers at this time.
if InsertValue == 'NULL':
InsertSQLValues = InsertSQLValues + '0' + ', '
else:
InsertSQLValues = InsertSQLValues + '1' + ', '
else:
InsertSQLValues = InsertSQLValues + InsertValue + ', '
InsertSQLFields = InsertSQLFields[:-2] + ')'
InsertSQLValues = InsertSQLValues[:-2] + ')'
InsertSQL = InsertSQLFields + ' ' + InsertSQLValues
## print InsertSQL
cursor.execute(InsertSQL)
cnxn.commit()
Number_Processed = Number_Processed + 1
row = rows.Next()
print(' - Processed all ' + str(Number_Processed))
del row
del rows
Answer: James, I believe the real issue is that you are not using Unicode across the
board. Try to do the following:
* Make sure that your input file that you are using to populate the DB is in UTF-8 and that you are reading it with the UTF-8 encoder.
* Make sure your DB is actually storing the data as Unicode
* When you retrieve data from the file or from the DB or want to manipulate strings (with the + operator for instance) you need to make sure that all parts are proper Unicode. You can NOT use the str() method. You need to use unicode() as Dave pointed out. If you define strings in your code use u'my string' instead of 'my string' (otherwise it is not considered unicode).
Also, please provide us the full stack trace and the exception name.
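As a sketch of the third point applied to the question's insert loop (names
taken from that code; if GetValue returns byte strings you must pass the
encoding explicitly):

value = row.GetValue(CurrentField['Name'])
if value is None:
    InsertValue = u'NULL'
else:
    InsertValue = unicode(value)          # not str(), which forces ASCII
    # for byte strings: InsertValue = unicode(value, 'utf-8')

Better still, a parameterized query lets pyodbc handle the quoting and encoding
itself, which also removes the apostrophe problem (table and column names here
are placeholders):

cursor.execute(u"INSERT INTO SomeTable (SomeColumn) VALUES (?)", InsertValue)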
|
Configuring and installing python2.6.7 and mod_wsgi3.3 on RHEL for production
Question: This is a long question detailing all that I did from the start. Hope it
helps. I am working on a Django application and need to deploy it to the
production server. The production server is a virtual server managed by IT,
and I do not have root access. They have given me rights to manage the
installations of my modules in /swadm and /home/swadm. So I have planned to
create the following arrangement:
* `/swadm/etc/httpd/conf` where I maintain httpd.conf
* `/swadm/etc/httpd/user-modules` where I maintain my apache modules (mod_wsgi)
* `/swadm/var/www/django/app` where I maintain my django code
* `/swadm/usr/local/python/2.6` where I will maintain my python 2.6.7 installation with modules like django, south etc.
* `/home/swadm/setup` where I will be storing the required source tarballs and doing all the building and installing out of.
* `/home/swadm/workspace` where I will be maintaining application code that is in development.
The system has python2.4.3 and python2.6.5 installed but IT recommended that I
maintain my own python installation if I required a lot of custom modules to be
installed (which I would).
So I downloaded python2.6.7 source. I needed to ensure python is installed
such that its shared library is available. When I ran the configure script
with only the option `--enable-shared` and
`--prefix=/swadm/usr/local/python/2.6`, it would get installed but
surprisingly point to the system's installation of python2.6.5.
$ /swadm/usr/local/python/2.6/bin/python
Python 2.6.5 (r265:79063, Feb 28 2011, 21:55:45)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
So I ran the configure script following instructions from [Building Python
with --enable-shared in non-standard
location](http://koansys.com/tech/building-python-with-enable-shared-in-non-
standard-location) as
./configure --enable-shared --prefix=/swadm/usr/local/python/2.6 LDFLAGS="-Wl,-rpath /swadm/usr/local/python/2.6/lib"
Also, I made sure I had created the directories beforehand (as the link
suggests) to avoid the expected errors. Now typing
`/swadm/usr/local/python/2.6/bin/python` would start the correct python
version 2.6.7. So I moved on to configuring and installing mod_wsgi. I
configured it as
./configure --with-python=/swadm/usr/local/python/2.6/bin/python
the Makefile that was created tries to install the module into
`/usr/lib64/httpd/modules` and I have no write permissions there, so I
modified the makefile to install into `/swadm/etc/httpd/user-modules`. (There
might be a command argument but I could not figure it out). The module got
created fine. A test wsgi script which I used was
import sys
def application(environ, start_response):
status = '200 OK'
output = 'Hello World!'
output = output + str(sys.version_info)
output = output + '\nsys.prefix = %s' % repr(sys.prefix)
output = output + '\nsys.path = %s' % repr(sys.path)
response_headers = [('Content-type', 'text/plain'),
('Content-Length', str(len(output)))]
start_response(status, response_headers)
return [output]
And the output shown was, surprisingly
Hello World!(2, 6, 5, 'final', 0)
sys.prefix = '/swadm/usr/local/python/2.6'
sys.path = ['/swadm/usr/local/python/2.6/lib64/python26.zip', '/swadm/usr/local/python/2.6/lib64/python2.6/', '/swadm/usr/local/python/2.6/lib64/python2.6/plat-linux2', '/swadm/usr/local/python/2.6/lib64/python2.6/lib-tk', '/swadm/usr/local/python/2.6/lib64/python2.6/lib-old', '/swadm/usr/local/python/2.6/lib64/python2.6/lib-dynload']`
So you see somehow the mod_wsgi module still got configured with the system's
python 2.6.5 installation and not my custom one. I tried various things
detailed in the [mod_wsgi
documentation](http://code.google.com/p/modwsgi/wiki/InstallationIssues)
* Set `WSGIPythonHome` in httpd.conf to `/swadm/usr/local/python/2.6` and `WSGIPythonPath` to `/swadm/usr/local/python/2.6/lib/python2.6`
* Created a symlink in the python config directory to point to the libpython2.6.so file
$ ln -s ../../libpython2.6.so
When I do `ldd libpython2.6.so` this is what I see:
$ ldd libpython2.6.so
linux-vdso.so.1 => (0x00007fffc47fc000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b666ed62000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002b666ef7e000)
libutil.so.1 => /lib64/libutil.so.1 (0x00002b666f182000)
libm.so.6 => /lib64/libm.so.6 (0x00002b666f385000)
libc.so.6 => /lib64/libc.so.6 (0x00002b666f609000)
/lib64/ld-linux-x86-64.so.2 (0x00000031aba00000)
And `ldd mod_wsgi.so` gives
$ ldd /swadm/etc/httpd/user-modules/mod_wsgi.so
linux-vdso.so.1 => (0x00007fff1ad6e000)
libpython2.6.so.1.0 => /usr/lib64/libpython2.6.so.1.0 (0x00002af03aec7000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00002af03b270000)
libdl.so.2 => /lib64/libdl.so.2 (0x00002af03b48c000)
libutil.so.1 => /lib64/libutil.so.1 (0x00002af03b690000)
libm.so.6 => /lib64/libm.so.6 (0x00002af03b893000)
libc.so.6 => /lib64/libc.so.6 (0x00002af03bb17000)
/lib64/ld-linux-x86-64.so.2 (0x00000031aba00000)
I have been trying re-installing and re-configuring python and mod_wsgi but to
no avail. Please let me know where I am going wrong. (Sorry for the very long
post)
**TLDR** ; System with non-root access has default python installation. I am
maintaining my own python and python modules. mod_wsgi configured and built
with the custom python, still points to the system's python when I run a test
script that prints out the sys version_info and path.
**UPDATE** : On Browsing through the stackoverflow (should have done it
earlier) I found this answer by Graham Dumpleton on [mod_wsgi python2.5 ubuntu
11.04 problem](http://stackoverflow.com/questions/6438260/mod-wsgi-
python2-5-ubuntu-11-04-problem) which solved the error for me. Now when I do
`ldd mod_wsgi.so` I see that it is linked to the correct shared library of
python. I now installed Django and MySQLdb using my custom python install. And
Now I am facing this error:
The following error occurred while trying to extract file(s) to the Python egg
cache:
[Errno 13] Permission denied: '/var/www/.python-eggs'
The Python egg cache directory is currently set to:
/var/www/.python-eggs
Perhaps your account does not have write access to this directory? You can
change the cache directory by setting the PYTHON_EGG_CACHE environment
variable to point to an accessible directory.
So I did change the value of `PYTHON_EGG_CACHE` by doing `export
PYTHON_EGG_CACHE=/swadm/var/www/.python-eggs`, but I am still getting the same
error. I am investigating further and will update when I solve this.
Answer: Egg cache issue solved by setting environment variable in WSGI script:
<http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Access_Rights_Of_Apache_User>
or in Apache configuration:
<http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIPythonEggs>
<http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess>
Which of the latter two is used depends on whether you are using embedded mode
or daemon mode.
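For the record, the two fixes look roughly like this. In the WSGI script, set
the variable before anything imports from eggs (the path comes from the
question's update; the directory must be writable by the Apache user):

import os
os.environ['PYTHON_EGG_CACHE'] = '/swadm/var/www/.python-eggs'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

Or in the Apache configuration: `WSGIPythonEggs /swadm/var/www/.python-eggs`
for embedded mode, or the `python-eggs` option of `WSGIDaemonProcess` for
daemon mode.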
|
Mental block with os.walk - want to process a file not the filename string
Question: Trying to implement a little script to move the older log files out of apache
(actually using a simple bash script to do this in 'real life' - this is just
an exercise to practice using Python). I'm getting the filename as a string as
the variable f, but I want this to actually be a file object when I pass it to
self.processFile(root, f, age, inString).
I tried opening the actual file a few different ways, but I'm missing the
target, and end up getting an error, a path that doesn't always seem to be
correct, or just a string. I'll blame it on the late night, but I'm blanking
on the best way to open f as a file right before passing to self.processFile
(where it will be gzipped). Usually it's something very simple that I'm
missing, so I have to assume that's the case here. I'd appreciate any
constructive advice/direction.
"""recursive walk through /usr/local/apache2.2/logs"""
for root, dirs, files in os.walk(basedir):
for f in files:
m=self.fileFormatRegex.match(f)
if m:
if (('access_log.' in f) or
('error.' in f) or
('access.' in f) or
('error_log.' in f) or
('mod_jk.log.' in f)):
#This is where i'd like to open the file using the filename f
self.processFile(root, f, age, inString)
Answer: Use `os.path.join` to build the full path from the `root` directory that
`os.walk` hands you:

import os

for root, dirs, files in os.walk(basedir):
    for f in files:
        m = self.fileFormatRegex.match(f)
        if m:
            if any(prefix in f for prefix in
                   ('access_log.', 'error.', 'access.', 'error_log.', 'mod_jk.log.')):
                # root is the directory currently being walked, so joining
                # it with the bare filename yields the full path to open
                self.processFile(root, open(os.path.join(root, f)), age, inString)

`os.path.abspath(f)` can also work:

self.processFile(root, open(os.path.abspath(f)), age, inString)

but only when the current working directory happens to be the directory being
walked, so `os.path.join(root, f)` is the reliable choice.

More about [`os.path`](http://docs.python.org/library/os.path.html)

* * *

Yet another way is `file()` instead of `open()` (it does almost the same thing
as open):

self.processFile(root, file(os.path.join(root, f), "r"), age, inString)
|
Generating second graph in different window - VPython
Question: If I have a graph like this in VPython: graphX = gcurve(color = color.cyan),
how can I make another graph (graphY = gcurve(color = color.red)) in a
different window (different set of axes)?
Answer: Use gdisplay() to create a new graph window:
from visual.graph import *
graphX = gcurve(color = color.cyan)
gdisplay()
graphY = gcurve(color = color.red)
|
Python - SqlAlchemy: Filter query by great circle distance?
Question: I am using Python and Sqlalchemy to store latitude and longitude values in a
Sqlite database. I have created a [hybrid
method](http://www.sqlalchemy.org/docs/orm/extensions/hybrid.html) for my
Location object,
@hybrid_method
def great_circle_distance(self, other):
"""
Tries to calculate the great circle distance between the two locations.
If it succeeds, it will return the great-circle distance
multiplied by 3959, which calculates the distance in miles.
If it cannot, it will return None.
"""
return math.acos( self.cos_rad_lat
* other.cos_rad_lat
* math.cos(self.rad_lng - other.rad_lng)
+ self.sin_rad_lat
* other.sin_rad_lat
) * 3959
All the values like `cos_rad_lat` and `sin_rad_lat` are values I pre-
calculated to optimize the calculation. Anyhow, when I run the following
query,
pq = Session.query(model.Location).filter(model.Location.great_circle_distance(loc) < 10)
I get the following error,
line 809, in great_circle_distance
* math.cos(self.rad_lng - other.rad_lng)
TypeError: a float is required
When I print the values for `self.rad_lng` and `other.rad_lng` I get, for
example,
self.rad_lng: Location.rad_lng
other.rad_lng: -1.29154947064
What am I doing wrong?
Answer: You can't really use the `math` module that way:
>>> c = toyschema.Contact()
>>> c.lat = 10
>>> c.lat
10
>>> import math
>>> math.cos(c.lat)
-0.83907152907645244
>>> math.cos(toyschema.Contact.lat)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: a float is required
You'll have to combine `sqlalchemy.func.*` in place of `math.*` in a
[`@great_circle_distance.expression`
method](http://www.sqlalchemy.org/docs/orm/extensions/hybrid.html#defining-
expression-behavior-distinct-from-attribute-behavior) for all of that kind of
cleverness. Unfortunately, you can't do that with sqlite, either; it [doesn't
provide trig functions](http://www.sqlite.org/lang_corefunc.html) You could
use PostgreSQL, which does, or you can try to [add these functions to sqlite
yourself:](http://www.sqlite.org/c3ref/create_function.html)
**EDIT** It's actually not too hard to add functions to sqlite. This is _NOT_
tested.
Have to add the math functions to sqlite:
engine = sqlalchemy.create_engine("sqlite:///:memory:")
raw_con = engine.raw_connection()
raw_con.create_function("cos", 1, math.cos)
raw_con.create_function("acos", 1, math.acos)
class Location(...):
...
@hybrid_method
def great_circle_distance(self, other):
"""
Tries to calculate the great circle distance between the two
locations. If it succeeds, it will return the great-circle
distance multiplied by 3959, which gives the distance in miles.
If it cannot, it will return None.
"""
return math.acos( self.cos_rad_lat
* other.cos_rad_lat
* math.cos(self.rad_lng - other.rad_lng)
+ self.sin_rad_lat
* other.sin_rad_lat
) * 3959
@great_circle_distance.expression
def great_circle_distance(cls, other):
return sqlalchemy.func.acos( cls.cos_rad_lat
* other.cos_rad_lat
* sqlalchemy.func.cos(cls.rad_lng - other.rad_lng)
+ cls.sin_rad_lat
* other.sin_rad_lat
) * 3959
|
Python equivalent to Bash $()
Question: I am looking for the Python equivalent of the following Bash code:
VAR=$(echo $VAR)
Pseudo Python code could be:
var = print var
Can you help? :-)
Regards
Edit:
I am looking for a way to do this:
for dhIP in open('dh-ips.txt', 'r'):
gi = GeoIP.new(GeoIP.GEOIP_MEMORY_CACHE)
print gi.country_code_by_addr(print dhIP) # <-- this line is my problem
In Bash I would do it like this:
print gi.country_code_by_addr($(dhIP)) # only pseudo code...
Hope it's more clear now.
Edit2:
Thank you all! Here's my solution, which works. Thanks to Liquid_Fire for the
remark about the newline char and thanks to hop for his code!
import GeoIP
fp = open('dh-ips.txt', 'r')
gi = GeoIP.new(GeoIP.GEOIP_MEMORY_CACHE)
try:
for dhIP in fp:
print gi.country_code_by_addr(dhIP.rstrip("\n"))
finally:
fp.close()
Answer: You don't need a `print` in there, just use the name of the variable:
for dhIP in open('dh-ips.txt', 'r'):
gi = GeoIP.new(GeoIP.GEOIP_MEMORY_CACHE)
print gi.country_code_by_addr(dhIP)
Also note that iterating through a file object gives you lines with the
newline characters at the end. You may want to use something like
`dhIP.rstrip("\n")` to remove them before passing it on to
`country_code_by_addr`.
|
What is the correct import statement to use for a Select object in webdriver 2.4 under python?
Question: I am writing tests with selenium webdriver 2.4 on python 2.7.
The documentation (http://seleniumhq.org/docs/03_webdriver.html) demonstrates
the ability to manipulate select form elements as follows:
Select select = new Select(driver.findElement(By.xpath("//select")));
select.deselectAll();
select.selectByVisibleText("Edam");
I need to manipulate select form elements like this but in python. However I
cannot figure out what to import to successfully instantiate the Select
object.
What should my import statement be?
Thanks.
Answer: > However I cannot figure out what to import to successfully instantiate the
> Select object.
It is available as: `from selenium.webdriver.support.ui import Select`
See also:
<http://selenium.googlecode.com/svn/trunk/docs/api/py/webdriver_support/selenium.webdriver.support.select.html#module-
selenium.webdriver.support.select>
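The Java snippet from the question then translates directly (assuming `driver`
is an existing webdriver instance):

from selenium.webdriver.support.ui import Select

select = Select(driver.find_element_by_xpath("//select"))
select.deselect_all()          # only valid on multi-select elements
select.select_by_visible_text("Edam")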
|
Search and Replace Strings in a Text File Using Wildcards
Question: Trying to do a search/replace in python using wildcards on the contents of a
text file:
If the contents of the text file looks like:
"all_bcar_v0038.ma";
"all_bcar_v0002.ma";
"all_bcar_v0011.ma";
"all_bcar_v0011.ma";
Looking to replace all the version numbers with v1000 to get this:
"all_bcar_v1000.ma";
"all_bcar_v1000.ma";
"all_bcar_v1000.ma";
"all_bcar_v1000.ma";
And have the file written out.
I've tried the below but what happens is the script only catches the first
version number, leaving the others untouched:
def replaceAll(file,searchExp,replaceExp):
for line in fileinput.input(file, inplace=1):
if searchExp in line:
line = line.replace(searchExp,replaceExp)
sys.stdout.write(line)
rigs = ['all_bcar']
rigs_latest = ['all_bcar_v1000']
old_pattern = []
old_compiled = []
old = []
old_version = []
for rig in range(len(rigs)):
old_pattern.append("/" + rigs[rig] + "_(.*).ma")
fin = open(txt_file, "r")
old_compiled.append(re.compile(old_pattern[rig]))
old.append(old_compiled[rig].search(fin.read()))
old_version.append(old[rig].group(1).strip())
old_rig = (rigs[rig] + "_" + old_version[rig])
replaceAll(txt_file,old_rig,rigs_latest[rig])
fin.close()
I'm not sure how to keep the search looping to find the other versions, while
skipping any versions that have already been replaced (i.e. any that equal
"v1000").
Answer: Your life would be vastly improved with the use of
[regexes](http://www.regular-expressions.info/python.html). You should be able
to do this in just a few lines (assuming I understand the problem correctly):
import fileinput
import re
import sys

for line in fileinput.input(file, inplace=1):
    # with inplace=1, stdout is redirected into the file, so the
    # substituted line must be written back out
    sys.stdout.write(re.sub(r"_v\d{4}", "_v1000", line))
|
improving my python script
Question: I have an interesting Python script (not sure if it has been done before). It
uses
import os
os.system("say %s" % say)
#and I have added;
os.system("say -v whisper %s" % say)
but now there are new voices in Lion, and I want to know how to get those
voices and whether there is a centralized list.

Answer: The manual doesn't document which voices are available. But I believe the
syntax is just to use the voice's name, like so:
> say -v Karen Hello
I don't have access to my Mac right now but I found this list from
[here](http://www.tuaw.com/2011/03/02/mac-os-x-lion-offers-high-quality-
multilingual-voices/):
* American English: Jill, Samantha and Tom
* Australian English: Karen and Lee
* British English: Daniel, Emily and Serena
* South African English: Tessa
There is also other languages as well.
**UPDATE:**
`say -v ?` spits out:
MacBook-Austin:~ Austin$ say -v ?
Agnes en_US # Isn't it nice to have a computer that will talk to you?
Albert en_US # I have a frog in my throat. No, I mean a real frog!
Alex en_US # Most people recognize me by my voice.
Bad News en_US # The light you see at the end of the tunnel is the headlamp of a fast approaching train.
Bahh en_US # Do not pull the wool over my eyes.
Bells en_US # Time flies when you are having fun.
Boing en_US # Spring has sprung, fall has fell, winter's here and it's colder than usual.
Bruce en_US # I sure like being inside this fancy computer
Bubbles en_US # Pull the plug! I'm drowning!
...
If you do not see the voice you are looking for when you do a `say -v ?` you
can [install more](http://osxdaily.com/2011/07/25/how-to-add-new-voices-to-
mac-os-x-lion/).
|
How to set up multiple django versions on single apache service?
Question: I'm using Windows XP and want to know how I can run multiple Django
versions on a single Apache service through virtual hosts (of course).
I'm trying to do that with one instance of Python too. Should I create one
instance of Python for each Django version, or does Django only need its eggs
to work, so that I can have several eggs in just one Python installation?
Answer: You can do something like this in your httpd.conf
NameVirtualHost 0.0.0.0:80
<VirtualHost 0.0.0.0:80>
ServerName myserver.com
ServerAdmin [email protected]
DocumentRoot "/path/to/html/root"
ErrorLog "/path/to/apache-error.log"
CustomLog "/path/to/apache-access.log" common
Options ExecCGI FollowSymLinks MultiViews
AddHandler wsgi-script .wsgi
WSGIDaemonProcess djangoapp1
WSGIProcessGroup djangoapp1
WSGIScriptAlias / /path/to/djangoapp1.wsgi
Alias /static /path/to/static/files
DirectoryIndex index.html index.cgi
AddHandler cgi-script .cgi .pl
</VirtualHost>
NameVirtualHost 0.0.0.0:81
<VirtualHost 0.0.0.0:81>
ServerName myserver.com
ServerAdmin [email protected]
DocumentRoot "/path/to/html/root"
ErrorLog "/path/to/apache-error.log"
CustomLog "/path/to/apache-access.log" common
Options ExecCGI FollowSymLinks MultiViews
AddHandler wsgi-script .wsgi
WSGIDaemonProcess djangoapp2
WSGIProcessGroup djangoapp2
WSGIScriptAlias / /path/to/djangoapp2.wsgi
Alias /static /path/to/static/files
DirectoryIndex index.html index.cgi
AddHandler cgi-script .cgi .pl
</VirtualHost>
And then, in your djangoapp1.wsgi/djangoapp2.wsgi script you can define the
different django versions and applications:
#!/usr/bin/python
import os
import sys
sys.path.append('')
sys.path.append('/path/to/python2.7/site-packages')
sys.path.append('/path/to/python2.7/dist-packages/Django-1.3-py2.7.egg')
... etc ...
sys.path.append('/path/to/djangoapp1/src')
os.environ['DJANGO_SETTINGS_MODULE'] = 'djangoapp1.settings'
os.environ['PYTHON_EGG_CACHE'] = '/tmp'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
|
ElementTree's iter() equivalent in Python2.6
Question: I have this code with ElementTree that works well with Python 2.7. I needed to
get all the nodes with the name "A" under the "X/Y" node.
from xml.etree.ElementTree import ElementTree
verboseNode = topNode.find("X/Y")
nodes = list(verboseNode.iter("A"))
However, when I tried to run it with Python 2.6, I got this error.
ionCalculateSkewConstraint.py", line 303, in getNodesWithAttribute
nodes = list(startNode.iter(nodeName))
AttributeError: _ElementInterface instance has no attribute 'iter'
It looks like Python 2.6 ElementTree nodes don't have iter(). How can I get the
equivalent of iter() in Python 2.6?
Answer: Not sure if this is what you are looking for, as `iter()` appears to be around
in 2.6, but there's `getiterator()`
<http://docs.python.org/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element.getiterator>
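For example, a minimal adaptation of the code from the question (assuming
`topNode` is parsed as before):
verboseNode = topNode.find("X/Y")
# getiterator() is the pre-2.7 spelling of iter()
nodes = list(verboseNode.getiterator("A"))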
|
403 Forbidden Error for Python-Suds contacting Sharepoint
Question: I'm using Python's SUDs lib to access Sharepoint web services. I followed the
standard doc from Suds's website. For the past 2 days, no matter which service
I access, the remote service always returns 403 Forbidden.
I'm using Suds 0.4, which has built-in support for NTLM authentication via python-ntlm.
Let me know if anyone has a clue about this.
from suds import transport
from suds import client
from suds.transport.https import WindowsHttpAuthenticated
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
ntlm = WindowsHttpAuthenticated(username='USER_ID', password='PASS')
c_lists = client.Client(url='https://SHAREPOINT_URL/_vti_bin/Lists.asmx?WSDL', transport=ntlm)
#c_lists = client.Client(url='https://SHAREPOINT_URL/_vti_bin/spsearch.asmx?WSDL')
#print c_lists
listsCollection = c_lists.service.GetListCollection()
Answer: Are you specifying the username as `DOMAIN\USER_ID` as indicated in [examples
for the python-ntlm](http://code.google.com/p/python-ntlm/) library? (Also see
[this answer](http://stackoverflow.com/questions/218987/how-can-i-use-
sharepoint-via-soap-from-python/5403203#5403203)).
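For example, a sketch with placeholder credentials (the domain prefix is the
part to check; note the escaped backslash in the string literal):
ntlm = WindowsHttpAuthenticated(username='MYDOMAIN\\USER_ID', password='PASS')
c_lists = client.Client(url='https://SHAREPOINT_URL/_vti_bin/Lists.asmx?WSDL', transport=ntlm)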
|
How should I resolve my appspot backup failure?
Question: I'm trying to make a backup but it fails:
2011-10-01 09:22:43.706 /remote_api 302 5ms 0cpu_ms 0kb
213.89.134.0 - - [01/Oct/2011:05:22:43 -0700] "GET /remote_api HTTP/1.1" 302 0 - - "montaoproject.appspot.com" ms=6 cpu_ms=0 api_cpu_ms=0 cpm_usd=0.000032
$ python ./appcfg.py download_data --application=montaoproject --url=http://montaoproject.appspot.com/remote_api --filename=montao.data
Downloading data records.
[INFO ] Logging to bulkloader-log-20111001.122234
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20111001.122234.sql3
[INFO ] Opening database: bulkloader-results-20111001.122234.sql3
[INFO ] Connecting to montaoproject.appspot.com/remote_api
Please enter login credentials for montaoproject.appspot.com
Email: niklasro
Password for niklasro:
[INFO ] Authentication Failed
app.yaml:
- url: /remote_api
script: remote_api.py
remote_api.py:
from google.appengine.ext.remote_api import handler
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
import re
MY_SECRET_KEY = 'thetopsecret'
cookie_re = re.compile('^"([^:]+):.*"$')
class ApiCallHandler(handler.ApiCallHandler):
def CheckIsAdmin(self):
login_cookie = self.request.cookies.get('dev_appserver_login', '')
match = cookie_re.search(login_cookie)
if (match and match.group(1) == MY_SECRET_KEY
and 'X-appcfg-api-version' in self.request.headers):
return True
else:
self.redirect('/_ah/login')
return False
application = webapp.WSGIApplication([('.*', ApiCallHandler)])
def main():
run_wsgi_app(application)
if __name__ == '__main__':
main()
**Update** The server status is 302 and the method in `remote_api.py` is not
reached:
2011-11-08 09:02:40.214 /remote_api?rtok=935015419683 302 12ms 0kb
213.89.134.0 - - [08/Nov/2011:03:02:40 -0800] "GET /remote_api?rtok=935015419683 HTTP/1.1" 302 0 - - "montaoproject.appspot.com" ms=13 cpu_ms=0 api_cpu_ms=0 cpm_usd=0.000026
Answer: When using this approach to get remote API to work with open ID (from
<http://blog.notdot.net/2010/06/Using-remote-api-with-OpenID-authentication>),
I think you need to specify the secret key (ie 'thetopsecret') as the email
when prompted (and then just hit enter when prompted for the password).
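In other words, the prompt session would look something like this (using
whatever value you set for `MY_SECRET_KEY` in `remote_api.py`, and just
pressing enter at the password prompt):
Email: thetopsecret
Password for thetopsecret: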
|
get formatted output from mysql via python
Question: Hi, I want the following output from my query:
OK|Abortedclients=119063 Aborted_connects=67591 Binlog_cache_disk_use=0
But I dont know how to generate it. this is my script:
#!/usr/bin/env python
import MySQLdb
conn = MySQLdb.connect (host = "...", user="...", passwd="...")
cursor = conn.cursor ()
cursor.execute ("SHOW GLOBAL STATUS")
rs = cursor.fetchall ()
#print rs
print "OK|"
for row in rs:
print "%s=%s" % (row[0], row[1])
cursor.close()
this is what I get now:
OK|
Aborted_clients=119063
Aborted_connects=67591
Binlog_cache_disk_use=0
Answer: Build the string using
[join](http://docs.python.org/library/stdtypes.html#str.join):
print('OK|'+' '.join(['{0}={1}'.format(*row) for row in rs]))
`' '.join(iterable)` creates a string out of the strings in iterable, joined
together with a space `' '` in between the strings.
* * *
To fix the code you posted with minimal changes, you could add a comma at the
end of the print statements:
print "OK|",
for row in rs:
print "%s=%s" % (row[0], row[1]),
This suppresses the automatic addition of a newline after each print
statement. It does, however, add a space (which is not what you said you
wanted):
OK| Aborted_clients=0 ...
|
Can Basemap draw a detailed coastline at the city level?
Question: I'm trying to draw a detailed coastline of the NYC area using Basemap in
Python. Using the full-resolution dataset, Manhattan looks like a rectangle,
and the Hudson doesn't show up at all above midtown.
Here's the code I'm using. Any suggestions?
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
m = Basemap(projection='merc',llcrnrlat=40.55,urcrnrlat=40.82,\
llcrnrlon=-74.1, urcrnrlon=-73.82, lat_ts=40.5,resolution='f')
m.drawcoastlines()
m.drawrivers()
m.fillcontinents(color='coral',lake_color='aqua')
m.drawmapboundary(fill_color='aqua')
plt.show()
_EDIT_
On further exploration, it looks like my issue is with the rivers,
specifically. I can get the detailed boundaries of the oceanic coastline, but
there is still no Hudson river and no Harlem river. Current code:
m = Basemap(projection='merc',llcrnrlat=40.55,urcrnrlat=40.82,\
llcrnrlon=-74.1, urcrnrlon=-73.82, lat_ts=40.5,resolution='f')
m.drawmapboundary(fill_color='#85A6D9')
m.drawcoastlines(color='#6D5F47', linewidth=.4)
m.drawrivers(color='#6D5F47', linewidth=.4)
m.fillcontinents(color='white',lake_color='#85A6D9')
plt.show()
Answer: The data shipped with basemap is quite low resolution (high resolution data
takes up quite a lot of space).
You can load shapefiles to display with basemap. I've borrowed code from [this
tutorial](http://www.geophysique.be/fr/2011/01/27/matplotlib-basemap-
tutorial-07-shapefiles-unleached/) to do that before. The tutorial also links
to [GADM](http://www.gadm.org/country), where you can download high resolution
shapefiles for any country.
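For example, once you have downloaded a suitable shapefile, something like the
following sketch should draw it (the basename `nyc_shoreline` is a
placeholder; `readshapefile` takes the path without the `.shp` extension):
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
m = Basemap(projection='merc', llcrnrlat=40.55, urcrnrlat=40.82,
            llcrnrlon=-74.1, urcrnrlon=-73.82, lat_ts=40.5, resolution='f')
m.drawmapboundary(fill_color='#85A6D9')
# Draw the high-resolution boundaries from the external shapefile
m.readshapefile('nyc_shoreline', 'shoreline', color='#6D5F47', linewidth=.4)
plt.show()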
|
Can't install module with python-pip properly
Question: I would like to install a module but `pip` is not installing it in the right
directory which I assume should be `/usr/local/lib/python2.7/site-packages/`.
After all, I just installed Python 2.7.2 today. Originally I had 2.6.5 and had
installed modules successfully there. So I think something is wrong with my
Python path.
**How can I make all my module installations go to the proper python2.7
directory?**
s3z@s3z-laptop:~$ pip install requests
Requirement already satisfied: requests in /usr/local/lib/python2.6/dist-packages/requests-0.6.1-py2.6.egg
Installing collected packages: requests
Successfully installed requests
s3z@s3z-laptop:~$ python
Python 2.7.2 (default, Oct 1 2011, 14:26:08)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named requests
>>> ^Z
[3]+ Stopped python
Also here is what my Python directories look like now
<http://pastie.org/2623543>
Answer: After you installed Python 2.7, did you install the Python 2.7 version of
easy_install and PIP? The existing installations are configured to use Python
2.6 by default which may be causing your issue.
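If not, the usual fix is to install versioned copies of the tools, e.g. an
`easy_install-2.7` script, and then run `easy_install-2.7 pip` followed by
`pip-2.7 install requests` (exact script names vary by distribution, so treat
these as assumptions); that way packages land under python2.7's site-packages
instead of the 2.6 dist-packages.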
|
How can I use boto to stream a file out of Amazon S3 to Rackspace Cloudfiles?
Question: I'm copying a file from S3 to Cloudfiles, and I would like to avoid writing
the file to disk. The Python-Cloudfiles library has an object.stream() call
that looks to be what I need, but I can't find an equivalent call in boto. I'm
hoping that I would be able to do something like:
shutil.copyfileobj(s3Object.stream(),rsObject.stream())
Is this possible with boto (or I suppose any other s3 library)?
Answer: The Key object in boto, which represents an object in S3, can be used like an
iterator, so you should be able to do something like this:
>>> import boto
>>> c = boto.connect_s3()
>>> bucket = c.lookup('garnaat_pub')
>>> key = bucket.lookup('Scan1.jpg')
>>> for bytes in key:
...     out_fp.write(bytes)  # out_fp: any writable file-like object (placeholder name)
Or, as in the case of your example, you could do:
>>> shutil.copyfileobj(key, rsObject.stream())
|
wxPython and windows 7 taskbar
Question: For brevity's sake: I'm trying to implement
[this](http://stackoverflow.com/questions/1736394/using-windows-7-taskbar-
features-in-pyqt/1744503#1744503) with wxPython, but I'm struggling to fit
that code into a script based on wxPython.
My simple PyQt test code works fine. Here it is:
from PyQt4 import QtGui
from threading import Thread
import time
import sys
import comtypes.client as cc
import comtypes.gen.TaskbarLib as tbl
TBPF_NOPROGRESS = 0
TBPF_INDETERMINATE = 0x1
TBPF_NORMAL = 0x2
TBPF_ERROR = 0x4
TBPF_PAUSED = 0x8
cc.GetModule("taskbar.tlb")
taskbar = cc.CreateObject("{56FDF344-FD6D-11d0-958A-006097C9A090}", interface=tbl.ITaskbarList3)
class MainWindow(QtGui.QMainWindow):
def __init__(self, parent=None):
QtGui.QMainWindow.__init__(self, parent)
self.setWindowTitle("Test")
self.progress_bar = QtGui.QProgressBar(self)
self.setCentralWidget(self.progress_bar)
self.progress_bar.setRange(0, 100)
self.progress = 0
self.show()
thread = Thread(target=self.counter)
thread.setDaemon(True)
thread.start()
def counter(self):
while True:
self.progress += 1
if self.progress > 100:
self.progress = 0
time.sleep(.2)
self.progress_bar.setValue(self.progress)
taskbar.HrInit()
hWnd = self.winId()
taskbar.SetProgressState(hWnd, TBPF_ERROR)
taskbar.SetProgressValue(hWnd, self.progress, 100)
app = QtGui.QApplication(sys.argv)
ui = MainWindow()
sys.exit(app.exec_())
But, when I try to execute the wxPython counterpart, the taskbar doesn't work
as expected. Here's the wxPython code:
import wx
import time
import comtypes.client as cc
import comtypes.gen.TaskbarLib as tbl
from threading import Thread
TBPF_NOPROGRESS = 0
TBPF_INDETERMINATE = 0x1
TBPF_NORMAL = 0x2
TBPF_ERROR = 0x4
TBPF_PAUSED = 0x8
cc.GetModule("taskbar.tlb")
taskbar = cc.CreateObject("{56FDF344-FD6D-11d0-958A-006097C9A090}", interface=tbl.ITaskbarList3)
class MainWindow(wx.Frame):
def __init__(self, parent, ID, title):
wx.Frame.__init__(self, parent, ID, title)
self.panel = wx.Panel(self)
self.gauge = wx.Gauge(self.panel)
self.gauge.SetValue(0)
self.progress = 0
self.Show()
thread = Thread(target=self.counter)
thread.setDaemon(True)
thread.start()
def counter(self):
while True:
self.progress += 1
if self.progress > 100:
self.progress = 0
time.sleep(.2)
self.gauge.SetValue(self.progress)
taskbar.HrInit()
hWnd = self.GetHandle()
taskbar.SetProgressState(hWnd, TBPF_ERROR)
taskbar.SetProgressValue(hWnd, self.progress, 100)
app = wx.PySimpleApp()
frame = MainWindow(None, wx.ID_ANY, "Test")
app.SetTopWindow(frame)
app.MainLoop()
In particular, I think the issue is due to the wxWindow window handle (hWnd)
method, which differs from its Qt equivalent: the former returns an integer
and the latter a "sip.voidptr" object.
The problem is that I already wrote the whole code (1200+ lines) with
wxPython, so I can't rewrite it to use Qt (not to mention the different
licenses).
What do you think about it? Should I give up?
Thanks a lot in advance :)
**EDIT**
Thanks to Robert O'Connor, now it works. However, I still can't get why
`GetHandle` returns an integer while `winId` returns an object. In the .idl
file the argument hwnd is declared as `long` in all the function definitions.
Maybe this is a simple question too ;) Any ideas?
Answer: On the following line:
hWnd = self.panel.GetId()
You want to use `GetHandle()` instead of `GetId()`.
**Edit:** This was originally posted as a comment, but I suppose it would be
more appropriate for me to repost as an answer.
Regarding the edit to your question: If it now works I guess there isn't a
problem anymore ;) Okay, seriously though..
Ints and Longs are unified in Python and if I had to guess comtypes might be
doing some coercion in the background. I don't know if it's necessary to worry
about such details when dealing with comtypes in general, but it doesn't seem
to matter much in this case.
Now I have no experience with PyQT, but in Python you can define special
methods on objects such as `__int__` and `__long__` to emulate, well, Ints and
Longs. If I had to guess, the object you're getting in PyQT defines one of
those methods.
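For illustration, an object like this would be accepted anywhere an int/long
coercion happens (a toy sketch, not PyQt's actual class):
class FakeHandle(object):
    def __init__(self, value):
        self.value = value
    def __int__(self):   # called when int(obj) is requested
        return self.value
    def __long__(self):  # called when long(obj) is requested (Python 2)
        return self.value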
|
Binary string in Python issues
Question: For some reason I'm having a heck of a time figuring out how to do this in
Python. I am trying to represent a binary string in a string variable, and all
I want it to have is
0010111010
However, no matter how I try to format it as a string, Python always chops off
the leading zeroes, which is giving me a headache in trying to parse it out.
I'd hoped [this question](http://stackoverflow.com/questions/733454/best-way-
to-format-integer-as-string-with-leading-zeros) would have helped, but it
doesn't really...
Is there a way to force Python to stop auto-converting my string to an
integer?
I have tried the following:
val = ""
if (random.random() > 0.50):
val = val + "1"
else:
val = val + "0"
and
val = ""
if (random.random() > 0.50):
val = val + "%d" % (1)
else:
val = val + "%d" % (0)
I had stuck it into an array previously, but ran into issues inserting that
array into another array, so I figured it would just be easier to parse it as
a string.
Any thoughts on how to get my leading zeroes back? The string is supposed to
be a fixed length of 10 bits if that helps.
* * *
Edit:
The code:
def create_string(x):
for i in xrange(10): # 10 random populations
for j in xrange(int(x)): # population size
v = ''.join(choice(('0','1')) for _ in range(10))
arr[i][j] = v
return arr
a = create_string(5)
print a
Hopefully the output I'm seeing will show you why I'm having issues:
[[ 10000100 1100000001 101010110 111011 11010111]
[1001111000 1011011100 1110110111 111011001 10101000]
[ 110010001 1011010111 1100111000 1011100011 1000100001]
[ 10011010 1000011001 1111111010 11100110 110010101]
[1101010000 1010110101 110011000 1100001001 1010100011]
[ 10001010 1100000001 1110010000 10110000 11011010]
[ 111011 1000111010 1100101 1101110001 110110000]
[ 110100100 1100000000 1010101001 11010000 1000011011]
[1110101110 1100010101 1110001110 10011111 101101100]
[ 11100010 1111001010 100011101 1101010 1110001011]]
The issue here isn't only with printing; I also need to be able to manipulate
them on a per-element basis. So if I access the first element, it returns a 1,
not a 0.
Answer: If I understood you right, you could do it this way:
a = 0b0010111010
'{:010b}'.format(a)
#The output is: '0010111010'
_Python 2.7_
It uses string [`format`
method](http://docs.python.org/release/2.7/library/string.html#format-
specification-mini-language).
This is the answer if you want to represent the binary string with leading
zeros.
If you are just trying to generate a random string with a binary you could do
it this way:
from random import choice
''.join(choice(('0','1')) for _ in range(10))
# Update
Answering your update: I wrote code which produces a different output compared
to yours:
from random import choice
from pprint import pprint
arr = []
def create_string(x):
for i in xrange(10): # 10 random populations
arr.append([])
for j in xrange(x): # population size
v = ''.join(choice(('0','1')) for _ in range(10))
arr[-1].append(v)
return arr
a = create_string(5)
pprint(a)
The output is:
[['1011010000', '1001000010', '0110101100', '0101110111', '1101001001'],
['0010000011', '1010011101', '1000110001', '0111101011', '1100001111'],
['0011110011', '0010101101', '0000000100', '1000010010', '1101001000'],
['1110101111', '1011111001', '0101100110', '0100100111', '1010010011'],
['0100010100', '0001110110', '1110111110', '0111110000', '0000001010'],
['1011001011', '0011101111', '1100110011', '1100011001', '1010100011'],
['0110011011', '0001001001', '1111010101', '1110010010', '0100011000'],
['1010011000', '0010111110', '0011101100', '1111011010', '1011101110'],
['1110110011', '1110111100', '0011000101', '1100000000', '0100010001'],
['0100001110', '1011000111', '0101110100', '0011100111', '1110110010']]
Is this what you are looking for?
|
Python observable implementation that supports multi-channel subscribers
Question: In a twisted application I have a series of resource controller/manager
classes that interact via the Observable pattern. Generally most observers
will subscribe to a specific channel (ex. "foo.bar.entity2") but there are a
few cases where I'd like to know about all event in a specific channel (ex.
"foo.*" ) so I wrote something like the following:
from collections import defaultdict
class SimplePubSub(object):
def __init__(self):
self.subjects = defaultdict(list)
def subscribe(self, subject, callbackstr):
"""
for brevity, callbackstr would be a valid Python function or bound method but here is just a string
"""
self.subjects[subject].append(callbackstr)
def fire(self, subject):
"""
Again for brevity, fire would have *args, **kwargs or some other additional message arguments but not here
"""
if subject in self.subjects:
print "Firing callback %s" % subject
for callback in self.subjects[subject]:
print "callback %s" % callback
pubSub = SimplePubSub()
pubSub.subscribe('foo.bar', "foo.bar1")
pubSub.subscribe('foo.foo', "foo.foo1")
pubSub.subscribe('foo.ich.tier1', "foo.ich.tier3_1")
pubSub.subscribe('foo.ich.tier2', "foo.ich.tier2_1")
pubSub.subscribe('foo.ich.tier3', "foo.ich.tier2_1")
#Find everything that starts with foo
#say foo.bar is fired
firedSubject = "foo.bar"
pubSub.fire(firedSubject)
#outputs
#>>Firing callback foo.bar
#>>callback foo.bar1
#but let's say I want to add a callback for everything undr foo.ich
class GlobalPubSub(SimplePubSub):
def __init__(self):
self.globals = defaultdict(list)
super(GlobalPubSub, self).__init__()
def subscribe(self, subject, callback):
if subject.find("*") > -1:
#assumes global subscriptions would be like subject.foo.* and we want to catch all subject.foo's
self.globals[subject[:-2]].append(callback)
else:
super(GlobalPubSub, self).subscribe(subject, callback)
def fire(self, subject):
super(GlobalPubSub, self).fire(subject)
if self.globals:
for key in self.globals.iterkeys():
if subject.startswith(key):
for callback in self.globals[key]:
print "global callback says", callback
print "Now with global subscriptions"
print
pubSub = GlobalPubSub()
pubSub.subscribe('foo.bar', "foo.bar1")
pubSub.subscribe('foo.foo', "foo.foo1")
pubSub.subscribe('foo.ich.tier1', "foo.ich.tier3_1")
pubSub.subscribe('foo.ich.tier2', "foo.ich.tier2_1")
pubSub.subscribe('foo.ich.tier3', "foo.ich.tier2_1")
pubSub.subscribe("foo.ich.*", "All the ichs, all the time!")
#Find everything that starts with foo.ich
firedSubject = "foo.ich.tier2"
pubSub.fire(firedSubject)
#outputs
#>>Firing callback foo.bar
#>>callback foo.bar1
#>>Now with global subscriptions
#
#>>Firing callback foo.ich.tier2
#>>callback foo.ich.tier2_1
#>>global callback says All the ichs, all the time!
Is this as good as it gets without resorting to some sort of exotic construct
(tries, for example)? I'm looking for an affirmation that I'm on the right
track, or a better suggestion for a global subscription handler that's pure
Python (no external libraries or services).
Answer: This looks like you are on the right track to me. I was using
[PyPubSub](http://pubsub.sourceforge.net/) with a wxPython app for a bit, and
then ended up implementing my own "more simple" version that, at its root,
looks very similar to what you've done here except with a few more bells and
whistles that you'd probably end up implementing as you fill out your
requirements.
The answer given [here](http://stackoverflow.com/questions/1904351/python-
observer-pattern-examples-tips) is also a lot like what you have done.
This [answer](http://stackoverflow.com/questions/1092531/event-system-in-
python) goes into examples that take a somewhat different approach.
There are a number of existing libraries besides
[PyPubSub](http://pubsub.sourceforge.net/) out there, such as
[pydispatch](http://pydispatcher.sourceforge.net/) and
[blinker](http://discorporate.us/projects/Blinker/), that might be worth
looking at for reference or ideas.
|
Is this an appropriate use of python's built-in hash function?
Question: I need to compare large chunks of data for equality, and I need to compare
many per second, _fast_. Every object is guaranteed to be the same size, and
it is possible/likely they may only be slightly different (in unknown
positions).
I have seen, from the interactive session below, using `==` operator for byte
strings can be slower if the differences are towards the end of the string,
and it can be very fast if there is a difference near the start.
I thought there might be some way to speed things up using some sort of hash,
of course computing the md5 hash and comparing is a fair whack slower, but
python's inbuilt hash does seem to speed things up significantly.
However, I have no idea about the implementation details of this hash: is it
really hash-like, in that I can be comfortable that when `hash(a) == hash(b)`,
then `a == b` is very likely? I am happy to have a few incorrect results if a
hash collision is reasonably rare (rare in the sense of needing [an array of
200 PS3s running for several hours to make a
collision](http://www.win.tue.nl/hashclash/rogue-ca/)).
In [1]: import hashlib
In [2]: with open('/dev/urandom') as f:
...: spam = f.read(2**20 - 1)
...:
In [3]: spamA = spam + 'A'
In [4]: Aspam = 'A' + spam
In [5]: spamB = spam + 'B'
In [6]: timeit spamA == spamB
1000 loops, best of 3: 1.59 ms per loop
In [7]: timeit spamA == Aspam
10000000 loops, best of 3: 66.4 ns per loop
In [8]: timeit hashlib.md5(spamA) == hashlib.md5(spamB)
100 loops, best of 3: 4.42 ms per loop
In [9]: timeit hashlib.md5(spamA) == hashlib.md5(Aspam)
100 loops, best of 3: 4.39 ms per loop
In [10]: timeit hash(spamA) == hash(spamB)
10000000 loops, best of 3: 157 ns per loop
In [11]: timeit hash(spamA) == hash(Aspam)
10000000 loops, best of 3: 160 ns per loop
Answer: Python's hash function is designed for speed, and maps into a 64-bit space.
Due to the [birthday paradox](http://en.wikipedia.org/wiki/Birthday_problem),
this means you'll likely get a collision at about 5 billion entries (probably
way earlier, since the hash function is not cryptographic). Also, the
precise definition of `hash` is up to the Python implementation, and may be
architecture- or even machine-specific. Don't use it if you want the same result
on multiple machines.
md5 is designed as a cryptographic hash function; even slight perturbations in
the input totally change the output. It also maps into a 128-bit space, which
makes it unlikely you'll ever encounter a collision at all unless you're
specifically looking for one.
If you can handle collisions (i.e. test for equality between all members in a
bucket, possibly by using a cryptographic algorithm like MD5 or SHA2),
Python's hash function is perfectly fine.
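A minimal sketch of that bucket-and-verify idea (names are illustrative):
buckets = {}
def seen_before(blob):
    # Return True if an equal blob was stored earlier; the hash is only a hint.
    h = hash(blob)
    for candidate in buckets.get(h, ()):
        if candidate == blob:  # confirm real equality; collisions are possible
            return True
    buckets.setdefault(h, []).append(blob)
    return False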
One more thing: To save space, you should store the data in binary form if you
write it to disk. (i.e. `struct.pack('!q', hash('abc'))` /
`hashlib.md5('abc').digest()`).
As a side note: [`is`](http://stackoverflow.com/questions/2438667/what-is-the-
semantics-of-is-operator-in-python) is not equivalent to `==` in Python. You
mean `==`.
|
Python MySQLdb freeze on connect attempt
Question: I'm trying to use MySQLdb in Python to connect to the database I have
established on a Pagoda Box application. First, I open the Pagoda tunnel to
the database with:
$ pagoda tunnel -a <app-name>
and it returns that the tunnel has been successfully opened, with the
connection available on 127.0.0.1:3307. I run the following commands in Python
IDLE:
import MySQLdb
conn = MySQLdb.connect(host = '127.0.0.1', port=3307,user='user',passwd='pass')
after which the IDLE screen freezes (i.e. it is stuck on the current command
in an infinite loop). It seems to be hung up on the Connect() method in
connections.py. I'm not sure why this is or how to fix it. Any guidance you
may have is greatly appreciated.
**UPDATE:**
I let the script run longer, and I additionally tried to connect using
phpmyadmin, as well as a simple mysqli_connect() script on my local computer.
All of them returned:
MySQL error: 2013, “Lost connection to MySQL server at 'reading initial
communication packet', system error: 0”
This seems to be the root of the problem. Is there some configuration I can
fix on my PC to eliminate the issue?
Answer: As a general rule, whenever you are trying to figure out where a program is
stalled, you can either launch it under strace (`strace myprog.py`) or attach
strace to the pid (`strace -p pidnum`).
for example, my ipython has a pid of 12608 so I run
strace -p 12608
then I run
con = MySQLdb.connect('xxx.xxx.xxx.xxx', 'user', '', 'database');
and I see the strace output show a successful connection being set up:
setsockopt(4, SOL_SOCKET, SO_RCVTIMEO, "\2003\341\1\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
setsockopt(4, SOL_SOCKET, SO_SNDTIMEO, "\2003\341\1\0\0\0\0\0\0\0\0\0\0\0\0", 16) = 0
setsockopt(4, SOL_IP, IP_TOS, [8], 4) = 0
setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
setsockopt(4, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
I am not sure how this works with Pagoda Box, but in a normal terminal session
this is where I would start.
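One quick diagnostic, assuming the hang really is in the initial handshake:
pass `connect_timeout` so the call raises an error quickly instead of blocking
forever:
import MySQLdb
# A dead tunnel then surfaces as an exception within 10 seconds
conn = MySQLdb.connect(host='127.0.0.1', port=3307, user='user',
                       passwd='pass', connect_timeout=10)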
|