How to sort a Python list descending based on item count, but return a list with sorted items only, and not a count also?
Question: I am generating some large lists in Python, and I need to figure out the
absolute fastest way to return a sorted list from something like this:
myList = ['a','b','c','a','c','a']
and return a list of it in descending order, based on the count of the items,
so it looks like this.
sortedList = ['a','c','b']
I have been using `Counter().most_common()`, but this returns tuples in
descending order containing each item and the number of times it appears in the
list. I really just need a tuple or list in descending order by count,
with just the items and not the counts. Any ideas?
Edit: So would doing something like this be faster?
myList = ['a','b','c','a','c','a']
count = Counter(myList).most_common()
res = [k for k,v in count]
Answer:
from collections import Counter
myList = ['a','b','c','a','c','a']
res = [k for k,v in Counter(myList).most_common()]
# ['a', 'c', 'b']
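Note that `most_common()` with no argument already returns all (item, count) pairs sorted by count in descending order, so the comprehension only has to discard the counts; this is the same approach as in the question's edit.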
|
python threading won't start in background
Question: This is my class for turning an LED on, turning it off, and blinking it on a
Raspberry Pi. I want to stop the blink thread after some time, but the thread won't run in the background ...
class LED:

    _GPIOPORT=None
    flagstop=0
    flag=threading.Event()

    def __init__(self,GPIONUM):
        self._GPIOPORT=GPIONUM
        GPIO.setmode(GPIO.BOARD)
        GPIO.setwarnings(False)
        GPIO.setup(self._GPIOPORT, GPIO.OUT)
        self.flag.set()

    def TurnOn(self):
        self.flag.clear()
        print 'Turn On ...'
        GPIO.output(self._GPIOPORT,True)

    def TurnOff(self):
        self.flag.clear()
        print 'Turn Off ...'
        GPIO.output(self._GPIOPORT,False)

    def Blink(self,delay,e):
        print 'Thread Blink Create ...'
        t1=threading.Thread(self.BlinkThread(delay,e))
        t1.start()
        print 'Thread Started'

    def BlinkThread(self,delaytime,event):
        print 'BlinkTrehad Start ....'
        while not event:
            GPIO.output(self._GPIOPORT,False)
            sleep(delaytime)
            GPIO.output(self._GPIOPORT,True)
            sleep(delaytime)
and this is my module that uses the class:
import LED
from time import sleep
import threading
if __name__ == '__main__':
    e=threading.Event()
    e.set()
    ledblue=LED.LED(11)
    ledblue.TurnOn()
    sleep(1)
    ledblue.Blink(1,e)
    ledblue.TurnOn()
never "Thread Started" print in consol and loop run in line
Answer: You are calling BlinkThread() in the Thread constructor instead of just
passing the function reference and arguments and letting the thread start the
function. Change it to:
def Blink(self,delay,e):
    print 'Thread Blink Create ...'
    t1=threading.Thread(target=self.BlinkThread, args=(delay,e))
    t1.start()
    print 'Thread Started'
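A second issue worth noting (my addition, not covered in the original answer): `while not event` tests the truthiness of the `Event` object itself, which is always true, so the loop body never executes even once the thread starts correctly. Test the flag instead, and have the main module call `e.clear()` rather than `e.set()` before starting the blink:

    def BlinkThread(self, delaytime, event):
        print 'BlinkThread Start ....'
        # loop until the caller sets the event to request a stop
        while not event.is_set():
            GPIO.output(self._GPIOPORT, False)
            sleep(delaytime)
            GPIO.output(self._GPIOPORT, True)
            sleep(delaytime)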
|
Speech library installation error
Question: I am new to Python and I installed the
[`speech`](https://pypi.python.org/pypi/speech) library. But whenever I import
`speech` from the Python shell it gives this error:
>>> import speech
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import speech
File "C:\Python34\lib\site-packages\speech-0.5.2-py3.4.egg\speech.py", line 55, in <module>
from win32com.client import constants as _constants
File "C:\Python34\lib\site-packages\win32com\__init__.py", line 5, in <module>
import win32api, sys, os
ImportError: DLL load failed: The specified module could not be found.
Answer: You need to install `pywin32`.
Use `easy_install pywin32` or `pip install pywin32`
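If the import still fails after installing pywin32 with pip, note (an addition of mine, not part of the original answer) that the package ships a post-install script, typically run as `python Scripts/pywin32_postinstall.py -install` from the Python directory, which registers the DLLs that `win32api` needs.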
|
Random number generator in Python on Linux
Question: This is the code that I've written:
import os
import string
import random
def id_generator(size=8, chars=string.digits):
    return ''.join(random.choice(chars) for _ in range(size))
b="echo " + id_generator() + "> file.txt"
os.system(100*b)
Now the problem is that when I run the script it writes one random number, not
100. When I open **file.txt** it should look like:
123456
241721
126428
How can I resolve this problem?
Answer: The redirection overwrites the previous contents of the file. You want `>>`,
not `>`. Note also that `b` is built once, so even with `>>` you would write the
same number 100 times; the command string would have to be rebuilt inside a loop.
There is no need to use the obsolete `os.system()` and shell redirection here
anyway. You can do:
with open('file.txt', 'w') as f:
    for _ in range(100):
        f.write(id_generator() + '\n')
This is also more cross-platform.
|
How to add C compiler options to distutils
Question: I want to call Mathematica from C code and C code from Python. I have the
individual parts working, but I can't put everything together.
When I compile the C code that calls Mathematica, then I use the following
command in makefile
$(CC) -O mlcall.c -I$(INCDIR) -L$(LIBDIR) -l${MLLIB} ${EXTRALIBS} -o $@
Where
MLINKDIR = /opt/Mathematica-9.0/SystemFiles/Links/MathLink/DeveloperKit
SYS=Linux-x86-64
CADDSDIR = ${MLINKDIR}/${SYS}/CompilerAdditions
INCDIR = ${CADDSDIR}
LIBDIR = ${CADDSDIR}
MLLIB = ML64i3
My question is how can I use distutils with those same options (currently I'm
getting an `undefined symbol: MLActivate` error when calling from Python, and I
think the problem is that I'm not using these options)?
I saw the answer <http://stackoverflow.com/a/16078447/1335014> and tried to
use CFLAGS (by running the following script):
MLINKDIR=/opt/Mathematica-9.0/SystemFiles/Links/MathLink/DeveloperKit
SYS="Linux-x86-64"
CADDSDIR=${MLINKDIR}/${SYS}/CompilerAdditions
INCDIR=${CADDSDIR}
LIBDIR=${CADDSDIR}
MLLIB=ML64i3
EXTRALIBS="-lm -lpthread -lrt -lstdc++"
export CFLAGS="-I$INCDIR -L$LIBDIR -l${MLLIB} ${EXTRALIBS}"
So I get the following output for `echo $CFLAGS`
-I/opt/Mathematica-9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions -L/opt/Mathematica-9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions -lML64i3 -lm -lpthread -lrt -lstdc++
Which seems correct, but didn't have any effect. Maybe because I'm adding more
than one option.
Answer: I realized that if you don't modify the C source code then nothing is
recompiled (the original error persisted for exactly that reason). Using CFLAGS
as I did above actually does fix the problem.
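One practical tip (my addition, not part of the original answer): `python setup.py build --force` rebuilds the extension even when the C source is unchanged, which avoids testing new flags against a stale build.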
After digging around in the distutils documentation I found two additional fixes
(these only use `setup.py` and don't require any environment variables).
1:
from distutils.core import setup, Extension

module1 = Extension('spammodule',
                    sources = ['spammodule.c'],
                    extra_compile_args=["-I/opt/Mathematica-9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions"],
                    extra_link_args=["-L/opt/Mathematica-9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions", "-lML64i3", "-lm", "-lpthread", "-lrt", "-lstdc++"])

setup (name = 'MyPackage',
       version = '1.0',
       description = 'This is a demo package',
       ext_modules = [module1])
2:
from distutils.core import setup, Extension

module1 = Extension('spammodule',
                    sources = ['spammodule.c'],
                    include_dirs=['/opt/Mathematica-9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions'],
                    library_dirs=['/opt/Mathematica-9.0/SystemFiles/Links/MathLink/DeveloperKit/Linux-x86-64/CompilerAdditions'],
                    libraries=["ML64i3", "m", "pthread", "rt", "stdc++"])

setup (name = 'MyPackage',
       version = '1.0',
       description = 'This is a demo package',
       ext_modules = [module1])
|
Python: Aggregate data for different users on different days
Question: I'm a new Python user and learning how to manipulate/aggregate data.
I have some sample data of the format:
User Date Price
A 20130101 50
A 20130102 20
A 20130103 30
B 20130201 40
B 20130202 20
and so on.
I'm looking for some aggregates around each user and expecting an output for
mean spend like:
User Mean_Spend
A 33
B 30
I could read line by line and get aggregates for one user but I'm struggling
to read the data for different users.
Any suggestions on how to read the file for different users would be highly
appreciated.
Thanks
Answer: The `collections` module has a `Counter` object
([documentation](https://docs.python.org/2/library/collections.html#counter-
objects)), a `dict` subclass meant for this kind of quick tallying. Naively, you
could use one to accumulate the spend amounts and another to tally the number
of transactions, and then divide. (A pandas alternative follows the code below.)
from collections import Counter
accumulator = Counter()
transactions = Counter()
# assuming your input is exactly as shown...
with open('my_foo.txt', 'r') as f:
    f.readline() # skip header line
    for line in f.readlines():
        parts = line.split()
        transactions[parts[0]] += 1
        accumulator[parts[0]] += int(parts[2])
result = dict((k, float(accumulator[k])/transactions[k]) for k in transactions)
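Since the question is about learning data manipulation, here is the same aggregation in pandas (my addition, not part of the original answer; it assumes the file is whitespace-separated with the header row shown):

    import pandas as pd

    # read the whitespace-separated file, then average Price per User
    df = pd.read_csv('my_foo.txt', delim_whitespace=True)
    result = df.groupby('User')['Price'].mean()
    print(result)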
|
Django CSV export
Question: I want to export a CSV file from the database. I get the following error saying
tuple index out of range, and I don't know why:
Request Method: GET
Request URL: http://www.article/export_excel/
Django Version: 1.6.2
Exception Type: IndexError
Exception Value: tuple index out of range
Exception Location: /var/www/article/views.py in export_excel, line 191
Python Executable: /usr/bin/python
Python Version: 2.6.6
Python Path:
['/usr/lib/python2.6/site-packages/pip-1.5.2-py2.6.egg',
'/usr/lib64/python26.zip',
'/usr/lib64/python2.6',
'/usr/lib64/python2.6/plat-linux2',
'/usr/lib64/python2.6/lib-tk',
'/usr/lib64/python2.6/lib-old',
'/usr/lib64/python2.6/lib-dynload',
'/usr/lib64/python2.6/site-packages',
'/usr/lib/python2.6/site-packages',
'/usr/lib/python2.6/site-packages/setuptools-0.6c11-py2.6.egg-info',
Server time: Thu, 22 May 2014 14:45:02 +0900
Traceback Switch to copy-and-paste view
/usr/lib/python2.6/site-packages/django/core/handlers/base.py in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
...
▶ Local vars
/var/www/article/views.py in export_excel
day = att[2].day
...
▶ Local vars
This is my views.py :
from datetime import datetime, time, date, timedelta

class workLog(object):
    def __init__(self, name, day, attTime, leaveTime):
        self.name = name
        self.day = day
        self.attTime = attTime
        self.leaveTime = leaveTime

def export_excel(request):
    from staffprofile.models import Myattendance,Myleavework
    response = HttpResponse(mimetype='application/vnd.ms-excel; charset="Shift_JIS"')
    response['Content-Disposition'] = 'attachment; filename=file.csv'
    writer = csv.writer(response)
    titles = ["No","name","day","attendance_time", "leave_time"]
    writer.writerow(titles)
    obj_all = attendance.objects.filter(user_id = 3).values_list('user', 'contact_date', 'contact_time').order_by("-contact_date")
    lea = leavework.objects.filter(user_id = 3).values_list('contact_time').order_by('-contact_date')
    S = Staff.objects.all()
    row = [workLog('name', i, None, None) for i in range(32)]
    for att in obj_all.filter(user_id = 3).values_list('contact_date'):
        day = att[2]
        log = row[day]
        if log.attTime is None:
            log.attTime = att[2]
        elif log.attTime < att[2]:
            log.attTime = att[2]
    for leav in lea:
        day = leav[2].day
        log = row[day]
        if log.leaveTime is None:
            log.leaveTime = leav[2]
        elif log.leaveTime < leav[2]:
            log.leaveTime = leav[2]
    for log in row:
        if log.attTime is not None:
            if log.leaveTime is not None:
                row.append((log.attTime, log.leaveTime))
            else:
                row.append(None)
        else:
            if log.leaveTime is not None:
                row(None)
    writer.writerow(row)
    return response
Answer: <https://docs.djangoproject.com/en/dev/ref/models/querysets/#values-list>
The statement `obj_all.filter(user_id = 3).values_list('contact_date')` returns
an n × 1 result: each `att` tuple has length 1, so `att[2]` is out of range.
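A sketch of the fix under that diagnosis (hypothetical; field names are taken from the question's code): ask `values_list` for every field you index into, and use the right position:

    # values_list returns tuples in the order the fields are listed,
    # so att[1] is contact_date here and att[1].day gives the day
    obj_all = attendance.objects.filter(user_id=3) \
                        .values_list('user', 'contact_date', 'contact_time') \
                        .order_by('-contact_date')
    for att in obj_all:
        day = att[1].day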
|
Scatter-plot matrix with lowess smoother
Question: **What would the Python code be for a scatter-plot matrix with lowess
smoothers similar to the following one?**
(image: example scatter-plot matrix with lowess smoothers and covariance ellipses)
I'm not sure about the original source of the graph. I saw it on [this
post](http://stats.stackexchange.com/questions/64789/exploring-a-scatter-plot-
matrix-for-many-variables) on CrossValidated. The ellipses define the
covariance according to the original post. I'm not sure what the numbers mean.
Answer: I adapted the pandas scatter_matrix function and got a decent result:
import pandas as pd
import numpy as np
frame = pd.DataFrame(np.random.randn(100, 4), columns=['A','B','C','D'])
fig = scatter_matrix_lowess(frame, alpha=0.4, figsize=(12,12));
fig.suptitle('Scatterplot matrix with lowess smoother', fontsize=16);
(image: the resulting scatter-plot matrix with lowess smoothers)
* * *
This is the code for `scatter_matrix_lowess`:
def scatter_matrix_lowess(frame, alpha=0.5, figsize=None, grid=False,
                          diagonal='hist', marker='.', density_kwds=None,
                          hist_kwds=None, range_padding=0.05, **kwds):
    """
    Draw a matrix of scatter plots with lowess smoother.
    This is an adapted version of the pandas scatter_matrix function.
    Parameters
    ----------
    frame : DataFrame
    alpha : float, optional
        amount of transparency applied
    figsize : (float,float), optional
        a tuple (width, height) in inches
    ax : Matplotlib axis object, optional
    grid : bool, optional
        setting this to True will show the grid
    diagonal : {'hist', 'kde'}
        pick between 'kde' and 'hist' for
        either Kernel Density Estimation or Histogram
        plot in the diagonal
    marker : str, optional
        Matplotlib marker type, default '.'
    hist_kwds : other plotting keyword arguments
        To be passed to hist function
    density_kwds : other plotting keyword arguments
        To be passed to kernel density estimate plot
    range_padding : float, optional
        relative extension of axis range in x and y
        with respect to (x_max - x_min) or (y_max - y_min),
        default 0.05
    kwds : other plotting keyword arguments
        To be passed to scatter function
    Examples
    --------
    >>> df = DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])
    >>> scatter_matrix_lowess(df, alpha=0.2)
    """
    import matplotlib.pyplot as plt
    from matplotlib.artist import setp
    import pandas.core.common as com
    from pandas.compat import range, lrange, lmap, map, zip
    from statsmodels.nonparametric.smoothers_lowess import lowess
    df = frame._get_numeric_data()
    n = df.columns.size
    fig, axes = plt.subplots(nrows=n, ncols=n, figsize=figsize, squeeze=False)
    # no gaps between subplots
    fig.subplots_adjust(wspace=0, hspace=0)
    mask = com.notnull(df)
    marker = _get_marker_compat(marker)
    hist_kwds = hist_kwds or {}
    density_kwds = density_kwds or {}
    # workaround because `c='b'` is hardcoded in matplotlibs scatter method
    kwds.setdefault('c', plt.rcParams['patch.facecolor'])
    boundaries_list = []
    for a in df.columns:
        values = df[a].values[mask[a].values]
        rmin_, rmax_ = np.min(values), np.max(values)
        rdelta_ext = (rmax_ - rmin_) * range_padding / 2.
        boundaries_list.append((rmin_ - rdelta_ext, rmax_ + rdelta_ext))
    for i, a in zip(lrange(n), df.columns):
        for j, b in zip(lrange(n), df.columns):
            ax = axes[i, j]
            if i == j:
                values = df[a].values[mask[a].values]
                # Deal with the diagonal by drawing a histogram there.
                if diagonal == 'hist':
                    ax.hist(values, **hist_kwds)
                elif diagonal in ('kde', 'density'):
                    from scipy.stats import gaussian_kde
                    y = values
                    gkde = gaussian_kde(y)
                    ind = np.linspace(y.min(), y.max(), 1000)
                    ax.plot(ind, gkde.evaluate(ind), **density_kwds)
                ax.set_xlim(boundaries_list[i])
            else:
                common = (mask[a] & mask[b]).values
                ax.scatter(df[b][common], df[a][common],
                           marker=marker, alpha=alpha, **kwds)
                # The following 2 lines are new and add the lowess smoothing
                ys = lowess(df[a][common], df[b][common])
                ax.plot(ys[:,0], ys[:,1], 'red', linewidth=1)
                ax.set_xlim(boundaries_list[j])
                ax.set_ylim(boundaries_list[i])
            ax.set_xlabel('')
            ax.set_ylabel('')
            _label_axis(ax, kind='x', label=b, position='bottom', rotate=True)
            _label_axis(ax, kind='y', label=a, position='left')
            if j != 0:
                ax.yaxis.set_visible(False)
            if i != n-1:
                ax.xaxis.set_visible(False)
    for ax in axes.flat:
        setp(ax.get_xticklabels(), fontsize=8)
        setp(ax.get_yticklabels(), fontsize=8)
    return fig

def _label_axis(ax, kind='x', label='', position='top',
                ticks=True, rotate=False):
    from matplotlib.artist import setp
    if kind == 'x':
        ax.set_xlabel(label, visible=True)
        ax.xaxis.set_visible(True)
        ax.xaxis.set_ticks_position(position)
        ax.xaxis.set_label_position(position)
        if rotate:
            setp(ax.get_xticklabels(), rotation=90)
    elif kind == 'y':
        ax.yaxis.set_visible(True)
        ax.set_ylabel(label, visible=True)
        # ax.set_ylabel(a)
        ax.yaxis.set_ticks_position(position)
        ax.yaxis.set_label_position(position)
    return

def _get_marker_compat(marker):
    import matplotlib.lines as mlines
    import matplotlib as mpl
    if mpl.__version__ < '1.1.0' and marker == '.':
        return 'o'
    if marker not in mlines.lineMarkers:
        return 'o'
    return marker
|
Process memory grows huge - Tornado CurlAsyncHTTPClient
Question: I am using Tornado CurlAsyncHTTPClient. My process memory keeps growing for
both blocking and non blocking requests when I instantiate corresponding
httpclients for each request. This memory usage growth does not happen if I
just have one instance of the
httpclients(tornado.httpclient.HTTPClient/tornado.httpclient.AsyncHTTPClient)
and reuse them.
Also if I use SimpleAsyncHTTPClient instead of CurlAsyncHTTPClient this memory
growth does not happen, irrespective of how I instantiate.
Here is a sample code that reproduces this,
import tornado.httpclient
import json
import functools
instantiate_once = False
tornado.httpclient.AsyncHTTPClient.configure('tornado.curl_httpclient.CurlAsyncHTTPClient')
hc, io_loop, async_hc = None, None, None
if instantiate_once:
hc = tornado.httpclient.HTTPClient()
io_loop = tornado.ioloop.IOLoop()
async_hc = tornado.httpclient.AsyncHTTPClient(io_loop=io_loop)
def fire_sync_request():
global count
if instantiate_once:
global hc
if not instantiate_once:
hc = tornado.httpclient.HTTPClient()
url = '<Please try with a url>'
try:
resp = hc.fetch(url)
except (Exception,tornado.httpclient.HTTPError) as e:
print str(e)
if not instantiate_once:
hc.close()
def fire_async_requests():
#generic response callback fn
def response_callback(response):
response_callback_info['response_count'] += 1
if response_callback_info['response_count'] >= request_count:
io_loop.stop()
if instantiate_once:
global io_loop, async_hc
if not instantiate_once:
io_loop = tornado.ioloop.IOLoop()
requests = ['<Please add ur url to try>']*5
response_callback_info = {'response_count': 0}
request_count = len(requests)
global count
count +=request_count
hcs=[]
for url in requests:
kwargs ={}
kwargs['method'] = 'GET'
if not instantiate_once:
async_hc = tornado.httpclient.AsyncHTTPClient(io_loop=io_loop)
async_hc.fetch(url, callback=functools.partial(response_callback), **kwargs)
if not instantiate_once:
hcs.append(async_hc)
io_loop.start()
for hc in hcs:
hc.close()
if not instantiate_once:
io_loop.close()
if __name__ == '__main__':
import sys
if sys.argv[1] == 'sync':
while True:
output = fire_sync_request()
elif sys.argv[1] == 'async':
while True:
output = fire_async_requests()
Here set the instantiate_once variable to True, and execute python check.py sync
or python check.py async. The process memory increases continuously.
With instantiate_once=False, this does not happen.
Also if I use SimpleAsyncHTTPClient instead of CurlAsyncHTTPClient this memory
growth does not happen.
I have python 2.7/ tornado 2.3.2/ pycurl(libcurl/7.26.0 GnuTLS/2.12.20
zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3)
I could reproduce the same issue with latest tornado 3.2
Please help me to understand this behaviour and figure out the right way of
using tornado as http library.
Answer: HTTPClient and AsyncHTTPClient are designed to be reused, so it will always be
more efficient not to recreate them all the time. In fact, AsyncHTTPClient
will try to magically detect if there is an existing AsyncHTTPClient on the
same IOLoop and use that instead of creating a new one.
But even though it's better to reuse one http client object, it shouldn't leak
to create many of them as you're doing here (as long as you're closing them).
This looks like a bug in pycurl: <https://github.com/pycurl/pycurl/issues/182>
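A minimal sketch of the reuse pattern described above (assumptions: same Tornado versions as the question; error handling trimmed):

    import tornado.httpclient

    tornado.httpclient.AsyncHTTPClient.configure(
        'tornado.curl_httpclient.CurlAsyncHTTPClient')

    # create the client once and reuse it for every request
    hc = tornado.httpclient.HTTPClient()

    def fire_sync_request(url):
        try:
            return hc.fetch(url)
        except tornado.httpclient.HTTPError as e:
            print str(e)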
|
Test failures ("no transaction is active") with Ghost.py
Question: I have a Django project that does some calculations in Javascript.
I am using [Ghost.py](http://jeanphix.me/Ghost.py/) to try and incorporate
efficient tests of the Javascript calculations into the Django test suite:
from ghost.ext.django.test import GhostTestCase

class CalculationsTest(GhostTestCase):
    def setUp(self):
        page, resources = self.ghost.open('http://localhost:8081/_test/')
        self.assertEqual(page.http_status, 200)

    def test_frobnicate(self):
        result, e_resources = self.ghost.evaluate('''
            frobnicate(test_data, "argument");
        ''')
        self.assertEqual(result, 1.204)
(where `frobnicate()` is a JavaScript function on the test page.)
This works very well if I run one test at a time.
If, however, I run `django-admin.py test`, I get
Traceback (most recent call last):
...
result = self.run_suite(suite)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/test/runner.py", line 113, in run_suite
).run(suite)
File "/usr/lib64/python2.7/unittest/runner.py", line 151, in run
test(result)
File "/usr/lib64/python2.7/unittest/suite.py", line 70, in __call__
return self.run(*args, **kwds)
File "/usr/lib64/python2.7/unittest/suite.py", line 108, in run
test(result)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/test/testcases.py", line 184, in __call__
super(SimpleTestCase, self).__call__(result)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/ghost/test.py", line 53, in __call__
self._post_teardown()
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/test/testcases.py", line 796, in _post_teardown
self._fixture_teardown()
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/test/testcases.py", line 817, in _fixture_teardown
inhibit_post_syncdb=self.available_apps is not None)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/core/management/__init__.py", line 159, in call_command
return klass.execute(*args, **defaults)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/core/management/base.py", line 415, in handle
return self.handle_noargs(**options)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/core/management/commands/flush.py", line 81, in handle_noargs
self.emit_post_syncdb(verbosity, interactive, db)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/core/management/commands/flush.py", line 101, in emit_post_syncdb
emit_post_sync_signal(set(all_models), verbosity, interactive, database)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/core/management/sql.py", line 216, in emit_post_sync_signal
interactive=interactive, db=db)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 185, in send
response = receiver(signal=self, sender=sender, **named)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/contrib/auth/management/__init__.py", line 82, in create_permissions
ctype = ContentType.objects.db_manager(db).get_for_model(klass)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/contrib/contenttypes/models.py", line 47, in get_for_model
defaults = {'name': smart_text(opts.verbose_name_raw)},
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/db/models/manager.py", line 154, in get_or_create
return self.get_queryset().get_or_create(**kwargs)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/db/models/query.py", line 388, in get_or_create
six.reraise(*exc_info)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/db/models/query.py", line 380, in get_or_create
obj.save(force_insert=True, using=self.db)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/db/transaction.py", line 305, in __exit__
connection.commit()
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/db/backends/__init__.py", line 168, in commit
self._commit()
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/db/backends/__init__.py", line 136, in _commit
return self.connection.commit()
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/db/utils.py", line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/carl/.virtualenvs/fecfc/lib/python2.7/site-packages/django/db/backends/__init__.py", line 136, in _commit
return self.connection.commit()
django.db.utils.OperationalError: cannot commit - no transaction is active
(running with `django-nose` gives weirder, inconsistent results)
Any clues on how to prevent this issue, which is currently standing in the way
of CI?
Answer: I haven't used Ghost myself, but for a similar test I had to use
[TransactionTestCase](https://docs.djangoproject.com/en/1.8/topics/testing/tools/#django.test.TransactionTestCase)
to get things working. Could you try changing GhostTestCase to that and see if
it helps? A rough sketch follows.
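This is hypothetical; it assumes Ghost can be driven directly from a `TransactionTestCase`, which sidesteps the wrapped-transaction teardown that appears to be failing:

    from django.test import TransactionTestCase
    from ghost import Ghost

    class CalculationsTest(TransactionTestCase):
        def setUp(self):
            self.ghost = Ghost()
            page, resources = self.ghost.open('http://localhost:8081/_test/')
            self.assertEqual(page.http_status, 200)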
|
Problem running python scripts outside of Eclipse
Question: All my Python scripts work just fine when I run them in Eclipse; however, when
I drag them over to python.exe they never work: the cmd window opens and closes
immediately. If I try to run them with a command in cmd, so it doesn't close, I
get errors like:
**ImportError: No module named _Some Name_**
and the like. How can I resolve this issue?
Answer: Your pathing is wrong. In eclipse right click on the project properties >
PyDev - PYTHONPATH or Project References. The source folders show all of the
pathing that eclipse automatically handles. It is likely that the module you
are trying to import is in the parent directory.
Project
    src/
        my_module.py
        import_module.py
You may want to make a python package/library.
Project
    bin/
        my_module.py
    lib_name/
        __init__.py
        import_module.py
        other_module.py
In this instance my_module.py has no clue where import_module.py is; it has to
know where the library is. I believe other_module.py, as well as my_module.py,
needs to use from lib_name import import_module if they are in a library.
my_module.py
import os
import sys
import inspect

# Add the parent directory to the path
CURRDIR = os.path.dirname(inspect.getfile(inspect.currentframe()))
PARENTDIR = os.path.dirname(CURRDIR)
sys.path.append(PARENTDIR)

from lib_name import import_module
import_module.do_something()
ADVANCED: The way I like to do things is to add a setup.py file which uses
setuptools. This file handles distributing your project library. It is
convenient if you are working on several related projects. You can use pip or
the setup.py commandline arguments to create a link to the library in your
site-packages python folder (The location for all of the installed python
libraries).
Terminal
pip install -e existing/folder/that/has/setup.py
This adds a link in the easy_install.pth file to the directory
containing your library. It also adds an egg-link file in site-packages.
Then you don't really have to worry about pathing for that library just use
from lib_name import import_module.
|
socket error - python
Question: I want to get the local private machine's address. Running the following piece
of code:
socket.gethostbyaddr(socket.gethostname())
gives the error:
socket.herror: [Errno 2] Host name lookup failure
I know I can see the local machine's address by using
socket.gethostbyname(socket.gethostname())
but it shows the public address of my network (or machine), and ifconfig shows
another address for my WLAN. Can someone help me with this issue? Thanks
Answer: I believe you're going to find
[netifaces](https://pypi.python.org/pypi/netifaces/0.10.4) a little more
useful here.
It appears to be a cross-platform library to deal with Network Interfaces.
**Example:**
>>> from netifaces import interfaces, ifaddresses
>>> interfaces()
['lo', 'sit0', 'enp3s0', 'docker0']
>>> ifaddresses("enp3s0")
{17: [{'broadcast': 'ff:ff:ff:ff:ff:ff', 'addr': 'bc:5f:f4:97:5a:69'}], 2: [{'broadcast': '10.0.0.255', 'netmask': '255.255.255.0', 'addr': '10.0.0.2'}], 10: [{'netmask': 'ffff:ffff:ffff:ffff::', 'addr': '2001:470:edee:0:be5f:f4ff:fe97:5a69'}, {'netmask': 'ffff:ffff:ffff:ffff::', 'addr': 'fe80::be5f:f4ff:fe97:5a69%enp3s0'}]}
>>>
>>> ifaddresses("enp3s0")[2][0]["addr"]
'10.0.0.2' # <-- My Desktop's LAN IP Address.
|
Python\Numpy: Comparing arrays with NAN
Question: Why are the following two lists not equal?
a = [1.0, np.NAN]
b = np.append(np.array(1.0), [np.NAN]).tolist()
I am using the following to check whether they are identical:
((a == b) | (np.isnan(a) & np.isnan(b))).all(), np.in1d(a,b)
Using `np.in1d(a, b)` it seems the `np.NAN` values are not equal but I am not
sure why this is. Can anyone shed some light on this issue?
Answer: `NaN` values never compare equal. That is, the test `NaN==NaN` is always
`False` _by definition of`NaN`_.
So `[1.0, NaN] == [1.0, NaN]` is also `False`. Indeed, once a `NaN` occurs in
any list, it cannot compare equal to any other list, even itself.
If you want to test a variable to see if it's `NaN` in `numpy`, you use the
`numpy.isnan()` function. I don't see any obvious way of obtaining the
comparison semantics that you seem to want other than by “manually” iterating
over the list with a loop.
Consider the following:
import math
import numpy as np

def nan_eq(a, b):
    for i,j in zip(a,b):
        if i!=j and not (math.isnan(i) and math.isnan(j)):
            return False
    return True

a=[1.0, float('nan')]
b=[1.0, float('nan')]
print( float('nan')==float('nan') )
print( a==a )
print( a==b )
print( nan_eq(a,a) )
It will print:
False
True
False
True
The test `a==a` succeeds because, presumably, Python's idea that references to
the same object are equal trumps what would be the result of the element-wise
comparison that `a==b` requires.
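As an aside (my addition): recent NumPy releases can express this comparison directly for array inputs via `equal_nan`, avoiding the manual loop:

    import numpy as np

    a = np.array([1.0, np.nan])
    b = np.array([1.0, np.nan])

    # NaNs in matching positions compare equal with equal_nan=True
    print(np.allclose(a, b, equal_nan=True))  # True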
|
SimpleGUI module for Windows 7
Question: Is there any way to install the SimpleGUI module on Windows 7 for Python 3.2,
with all other module dependencies, or is having a Linux OS the only way?
Answer: Yes. Go to: <https://pypi.python.org/pypi/SimpleGUICS2Pygame/>
1. Download the egg matching whichever Python version you use
2. Change the EGG extension to ZIP
3. Extract it into your Python installation
4. Instead of 'import simplegui' type: import SimpleGUICS2Pygame.simpleguics2pygame as simplegui
DONE!
|
Dynamic functions creation from json, python
Question: I am new to Python and I need to create a class on the fly from the following
JSON:
{
"name": "ICallback",
"functions": [
{
"name": "OnNavigation",
"parameters": [
{"name":"Type", "type": "int", "value": "0"},
{"name":"source", "type": "int", "value": "0"},
{"name":"tabId", "type": "string", "value": ""},
{"name":"Url", "type": "string", "value": ""},
{"name":"Context", "type": "int", "value": "0"}
]
}
]
}
I found out how to create the class, but I don't understand how to create methods on
the fly. For now each function will just raise a `NotImplementedError` exception.
Answer: So you already know how to create a class:
class Void(object):
    pass

kls = Void()
You want to create a method from JSON, but I'm going to do it from a string
that can be created from the JSON:
from types import MethodType
d = {}
exec "def OnNavigation(self, param): return param" in d
kls.OnNavigation = MethodType(d["OnNavigation"], kls)
# setattr(kls, "OnNavigation", MethodType(d["OnNavigation"], kls))
print kls.OnNavigation("test")
Should output _test_.
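Tying this back to the question's JSON (a sketch; `json_text` stands for the JSON string from the question, and the stubs just raise `NotImplementedError` as the asker intended):

    import json

    spec = json.loads(json_text)  # the interface description from the question

    def make_stub(func_name):
        def stub(self, *args, **kwargs):
            raise NotImplementedError(func_name)
        stub.__name__ = str(func_name)
        return stub

    # build the class dynamically; type(name, bases, dict) is the documented way
    methods = dict((f["name"], make_stub(f["name"])) for f in spec["functions"])
    ICallback = type(str(spec["name"]), (object,), methods)

    cb = ICallback()
    cb.OnNavigation(0, 0, "tab", "url", 0)  # raises NotImplementedError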
|
Control line ending of Print in Python
Question: I've read elsewhere that I can prevent print from going to the next line by
adding a "," to the end of the statement. However, is there a way to control
this conditionally? To sometimes end the line and sometimes not based on
variables?
Answer: One solution without future imports:
print "my line of text",
if print_newline:
print
which is equivalent to:
from __future__ import print_function
print("my line of text", end="\n" if print_newline else "")
|
Creating a gradebook with the pandas module
Question: So I have recently started teaching a course and wanted to handle my grades
using python and the pandas module. For this class the students work in groups
and turn in one assignment per table. I have a file with all of the students
that is formatted like so:
Name, Email, Table
"John Doe", [email protected], 3
"Jane Doe", [email protected], 5
.
.
.
and another file with the grades for each table for the assignments done
Table, worksheet, another assignment, etc
1, 8, 15, 4
2, 9, 23, 5
3, 3, 20, 7
.
.
.
What I want to do is assign the appropriate grade to each student based on
their table number. Here is what I have done
import pandas as pd
t_data = pd.read_csv('table_grades.csv')
roster = pd.read_csv('roster.csv')
for i in range(1, len(t_data.columns)):
    x = []
    for j in range(len(roster)):
        for k in range(len(t_data)):
            if roster.Table.values[j] == k+1:
                x.append(t_data[t_data.columns.values[i]][k])
    roster[t_data.columns.values[i]] = x
Which does what I want, but I feel like there must be a better way to do a task
like this using pandas. I am new to pandas and appreciate any help.
Answer: IIUC -- unfortunately your code doesn't run for me with your data and you
didn't give example output, so I can't be sure -- you're looking for `merge`.
Adding a new student, Fred Smith, to table 3:
In [182]: roster.merge(t_data, on="Table")
Out[182]:
Name Email Table worksheet another assignment etc
0 John Doe [email protected] 3 3 20 7
1 Fred Smith [email protected] 3 3 20 7
[2 rows x 6 columns]
or maybe an outer merge, to make it easier to spot missing/misaligned data:
In [183]: roster.merge(t_data, on="Table", how="outer")
Out[183]:
Name Email Table worksheet another assignment etc
0 John Doe [email protected] 3 3 20 7
1 Fred Smith [email protected] 3 3 20 7
2 Jane Doe [email protected] 5 NaN NaN NaN
3 NaN NaN 1 8 15 4
4 NaN NaN 2 9 23 5
[5 rows x 6 columns]
|
calling python from R with instant output to console
Question: I run python scripts from R using the R command:
system('python test.py')
But my print statements in test.py do not appear in the R console until the
python program is finished. I would like to view the print statements as the
python program is running inside R. I have also tried `sys.stdout.write()`,
but the result is the same. Any help is greatly appreciated.
Here is my code for test.py:
import time
for i in range(10):
    print 'i=',i
    time.sleep(5)
Answer: Tested on Windows 8 with R v3.0.1
Simply right-click on the R console, then **untick/unselect** the `Buffered
Output` option (see image below). Now execute your code and you shall see the
output of the `print` statements!
(screenshot: R console right-click menu with 'Buffered Output' unticked)
Update:
I forgot to mention that I also needed to add `sys.stdout.flush()` after the
`print` statement in the python file.
import time
import sys
for i in range(5):
    print 'i=',i
    sys.stdout.flush()
    time.sleep(1)
Also, if you leave the `Buffered Output` option selected, then left-clicking on
the R console while your script is executing shows the output produced so far.
Keep clicking and the output is shown. :)
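A simpler route (my addition, not from the original answer) is to run the interpreter unbuffered from R with `system('python -u test.py')`; Python's `-u` flag forces unbuffered stdout, so the explicit `sys.stdout.flush()` calls become unnecessary.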
|
cx_Freeze is not finding self defined modules in my project when creating exe
Question: I have a project that runs from GUI.py and imports modules I created.
Specifically it imports modules from a "Library" package that exists in the
same directory as GUI.py. I want to freeze the scripts with cx_Freeze to
create a windows executable, and I can create the exe, but when I try to run
it, I get: "ImportError: No module named Library."
I see in the output that all the modules that I import from Library aren't
imported. Here's what my setup.py looks like:
import sys, os
from cx_Freeze import setup, Executable
build_exe_options = {"packages":['Libary', 'Graphs', 'os'],
"includes":["tkinter", "csv", "subprocess", "datetime", "shutil", "random", "Library", "Graphs"],
"include_files": ['GUI','HTML','Users','Tests','E.icns', 'Graphs'],
}
base = None
exe = None
if sys.platform == "win32":
    exe = Executable(
        script="GUI.py",
        initScript = None,
        base = "Win32GUI",
        targetDir = r"built",
        targetName = "GUI.exe",
        compress = True,
        copyDependentFiles = True,
        appendScriptToExe = False,
        appendScriptToLibrary = False,
        icon = None
        )
    base = "Win32GUI"

setup( name = "MrProj",
       version = "2.0",
       description = "My project",
       options = {"build.exe": build_exe_options},
       #executables = [Executable("GUI.py", base=base)]
       executables = [exe]
       )
I've tried everything that I could find in StackOverflow and made a lot of
fixes based upon similar problems people were having. However, no matter what
I do, I can't seem to get cx_freeze to import my modules in Library.
My setup.py is in the same directory as GUI.py and the Library directory.
I'm running it on a Windows 7 Laptop with cx_Freeze-4.3.3.
I have python 3.4 installed.
Any help would be a godsend, thank you very much!
Answer: If `Library` (funny name, by the way) in `packages` doesn't work, you could try
as a workaround to put it in the `includes` list. (Note that the question's
`packages` list actually spells it `'Libary'`, and the options key should be
`"build_exe"` rather than `"build.exe"`; either slip alone could explain the
problem.) For this to work you may have to explicitly include every single
submodule, like:
includes = ['Library', 'Library.submodule1', 'Library.sub2', ...]
For the `include_files` you have to add each file with full (relative) path.
Directories don't work.
You could of course make use of `os.listdir()` or the `glob` module to append
paths to your include_files like this:
from glob import glob
...
include_files = ['GUI.py','HTML','Users','E.icns', 'Graphs'],
include_files += glob('Tests/*/*')
...
In some cases something like `sys.path.insert(0, '.')` or even
`sys.path.insert(0, '..')` can help.
|
Selenium Webdriver with Firebug + NetExport + FireStarter not creating a har file in Python
Question: I am currently running Selenium with Firebug, NetExport, and (trying out)
FireStarter in Python trying to get the network traffic of a URL. I expect a
HAR file to appear in the directory listed, however nothing appears. When I
test it in Firefox and go through the UI, a HAR file is exported and saved so
I know the code itself functions as expected. After viewing multiple examples
I do not see what I am missing.
I am using Firefox 29.0.1 Firebug 1.12.8 FireStarter 0.1a6 NetExport 0.9b6
Has anyone else encountered this issue? The "webFile.txt" log file is being
filled out correctly.
After looking up each version of the add-ons they are supposed to be
compatible with the version of Firefox I am using. I tried using Firefox
version 20, however that did not help. I am currently pulling source code.
In addition I have tried it with and without FireStarter, and I have tried
refreshing the page manually in both cases to try to generate a HAR.
My code looks like this:
import urllib2
import sys
import re
import os
import subprocess
import hashlib
import time
import datetime
from browsermobproxy import Server
from selenium import webdriver
import selenium
a=[];
theURL='';
fireBugPath = '/Users/tai/Documents/workspace/testSelenium/testS/firebug.xpi';
netExportPath = '/Users/tai/Documents/workspace/testSelenium/testS/netExport.xpi';
fireStarterPath = '/Users/tai/Documents/workspace/testSelenium/testS/fireStarter.xpi';
profile = webdriver.firefox.firefox_profile.FirefoxProfile();
profile.add_extension( fireBugPath);
profile.add_extension(netExportPath);
profile.add_extension(fireStarterPath);
#firefox preferences
profile.set_preference("app.update.enabled", False)
profile.native_events_enabled = True
profile.set_preference("webdriver.log.file", "/Users/tai/Documents/workspace/testSelenium/testS/webFile.txt")
profile.set_preference("extensions.firebug.DBG_STARTER", True);
profile.set_preference("extensions.firebug.currentVersion", "1.12.8");
profile.set_preference("extensions.firebug.addonBarOpened", True);
profile.set_preference("extensions.firebug.addonBarOpened", True);
profile.set_preference('extensions.firebug.consoles.enableSite', True)
profile.set_preference("extensions.firebug.console.enableSites", True);
profile.set_preference("extensions.firebug.script.enableSites", True);
profile.set_preference("extensions.firebug.net.enableSites", True);
profile.set_preference("extensions.firebug.previousPlacement", 1);
profile.set_preference("extensions.firebug.allPagesActivation", "on");
profile.set_preference("extensions.firebug.onByDefault", True);
profile.set_preference("extensions.firebug.defaultPanelName", "net");
#set net export preferences
profile.set_preference("extensions.firebug.netexport.alwaysEnableAutoExport", True);
profile.set_preference("extensions.firebug.netexport.autoExportToFile", True);
profile.set_preference("extensions.firebug.netexport.saveFiles", True);
profile.set_preference("extensions.firebug.netexport.autoExportToServer", False);
profile.set_preference("extensions.firebug.netexport.Automation", True);
profile.set_preference("extensions.firebug.netexport.showPreview", False);
profile.set_preference("extensions.firebug.netexport.pageLoadedTimeout", 15000);
profile.set_preference("extensions.firebug.netexport.timeout", 10000);
profile.set_preference("extensions.firebug.netexport.defaultLogDir", "/Users/tai/Documents/workspace/testSelenium/testS/har");
profile.update_preferences();
browser = webdriver.Firefox(firefox_profile=profile);
def openURL(url,s):
    theURL = url;
    time.sleep(6);
    #browser = webdriver.Chrome();
    browser.get(url); #load the url in firefox
    time.sleep(3); #wait for the page to load
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight/5);")
    time.sleep(1); #wait for the page to load
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight/4);")
    time.sleep(1); #wait for the page to load
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight/3);")
    time.sleep(1); #wait for the page to load
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight/2);")
    time.sleep(1); #wait for the page to load
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    searchText='';
    time.sleep(20); #wait for the page to load
    if(s.__len__() >0):
        for x in range(0, s.__len__()):
            searchText+= ("" + browser.find_element_by_id(x));
    else:
        searchText+= browser.page_source;
    a=getMatches(searchText)
    #print ("\n".join(swfLinks));
    print('\n'.join(removeNonURL(a)));
    # print(browser.page_source);
    browser.quit();
    return a;

def found_window(name):
    try: browser.switch_to_window(name)
    except NoSuchWindowException:
        return False
    else:
        return True # found window

def removeFirstQuote(tex):
    for x in tex:
        b = x[1:];
        if not b in a:
            a.append(b);
    return a;

def getMatches(t):
    return removeFirstQuote(re.findall('([\"|\'][^\"|\']*\.swf)', t));

def removeNonURL(t):
    a=[];
    for b in t:
        if(b.lower()[:4] !="http" ):
            if(b[0] == "//"):
                a.append(theURL+b[2:b.__len__()]);
            else:
                while(b.lower()[:4] !="http" and b.__len__() >5):
                    b=b[1:b.__len__()];
                a.append(b);
        else:
            a.append(b);
    return a;

openURL("http://www.chron.com",a);
Answer: I fixed this issue in my own work by setting a longer wait before closing the
browser. I think you are currently letting netexport export only after the
program has quit, so no file is written. The line causing this is:
profile.set_preference("extensions.firebug.netexport.pageLoadedTimeout", 15000);
From the [netexport source
code](https://github.com/firebug/netexport/blob/master/defaults/preferences/prefs.js)
we have that pageLoadedTimeout is the `Number of milliseconds to wait after
the last page request to declare the page loaded'. So I suspect all your minor
page loads are preventing netexport from having enough time to write the file.
One caveat is that you set the system to automatically export after 10 s, so I'm
not sure why you are not acquiring half-loaded HAR files.
|
Extract a number of continuous digits from a random string in python
Question: I am trying to parse this list of strings that contains ID values as a series
of 7 digits, but I am not sure how to approach this:
lst1=[
"(Tower 3rd fl floor_WINDOW CORNER : option 2_ floor cut out_small_wood) : GA -
Floors : : Model Lines : id 3925810
(Tower 3rd fl floor_WINDOW CORNER : option 2_ floor cut out_small_wood) : GA - Floors : Floors : Floor : Duke new core floors : id 3925721",
"(Tower 3rd fl floor_WINDOW CORNER : option 3_ floor cut out_large_wood) : GA - Floors : : Model Lines : id 3976019
(Tower 3rd fl floor_WINDOW CORNER : option 3_ floor cut out_large_wood) : GA - Floors : Floors : Floor : Duke new core floors : id 3975995"
]
I really want to pull out just the digit values and combine them into one
string separated by a semicolon ";". The resulting list would be something like
this:
lst1 = ["3925810; 3925721", "3976019; 3975995"]
Answer: You can use a regular expression, like this:
import re
pattern = re.compile(r"\bid\s*?(\d+)")
print ["; ".join(pattern.findall(item)) for item in lst1]
# ['3925810; 3925721', '3976019; 3975995']
[Debuggex Demo](https://www.debuggex.com/r/WjsdMuFf3Ajy4sfR)
If you want to make sure that the numbers you pick will only be of length 7,
then you can do it like this
pattern = re.compile(r"\bid\s*?(\d{7})\D*?")
[Debuggex Demo](https://www.debuggex.com/r/4fFYJ9l06fEJScay)
The `\b` refers to a word boundary, so it makes sure that `id` is a
separate word followed by 0 or more whitespace characters. Then
we match numeric digits `[0-9]`; `\d` is just shorthand notation for the
same. `{7}` matches exactly seven digits, followed by `\D`, the inverse of
`\d`.
|
karger min cut algorithm in python 2.7
Question: Here is my code for the Karger min cut algorithm. To the best of my knowledge
the algorithm I have implemented is right, but I don't get the right answer. If
someone can check what's going wrong I would be grateful.
import random
from random import randint

#loading data from the text file#
with open('data.txt') as req_file:
    mincut_data = []
    for line in req_file:
        line = line.split()
        if line:
            line = [int(i) for i in line]
            mincut_data.append(line)

#extracting edges from the data #
edgelist = []
nodelist = []
for every_list in mincut_data:
    nodelist.append(every_list[0])
    temp_list = []
    for temp in range(1,len(every_list)):
        temp_list = [every_list[0], every_list[temp]]
        flag = 0
        for ad in edgelist:
            if set(ad) == set(temp_list):
                flag = 1
        if flag == 0 :
            edgelist.append([every_list[0],every_list[temp]])

#karger min cut algorithm#
while(len(nodelist) > 2):
    val = randint(0,(len(edgelist)-1))
    print val
    target_edge = edgelist[val]
    replace_with = target_edge[0]
    should_replace = target_edge[1]
    for edge in edgelist:
        if(edge[0] == should_replace):
            edge[0] = replace_with
        if(edge[1] == should_replace):
            edge[1] = replace_with
    edgelist.remove(target_edge)
    nodelist.remove(should_replace)
    for edge in edgelist:
        if edge[0] == edge[1]:
            edgelist.remove(edge)

print ('edgelist remaining: ',edgelist)
print ('nodelist remaining: ',nodelist)
The test case data is :
1 2 3 4 7
2 1 3 4
3 1 2 4
4 1 2 3 5
5 4 6 7 8
6 5 7 8
7 1 5 6 8
8 5 6 7
Please copy it into a text file, save it as "data.txt", and run the program.
The answer should be: the min cut is 2, with the cut at edges
[(1,7), (4,5)]
Answer: Karger's algorithm is a randomized algorithm. That is, each time you run it,
it produces a solution which is in no way guaranteed to be the best. The general
approach is to run it lots of times and keep the best solution; for many
configurations there will be many solutions which are optimal or near-optimal,
so you heuristically find a good one quickly.
As far as I can see, you are only running the algorithm once, so the
solution is unlikely to be the optimal one. Try running it 100 times in a for
loop and holding onto the best solution, as sketched below.
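A sketch of that repetition (it assumes the contraction loop above is wrapped into a hypothetical `run_karger_once(nodes, edges)` returning the remaining edge list):

    import copy

    best_cut = None
    for _ in range(100):
        # each trial needs fresh copies, since contraction mutates the lists
        nodes = list(nodelist)
        edges = copy.deepcopy(edgelist)
        cut = run_karger_once(nodes, edges)
        if best_cut is None or len(cut) < len(best_cut):
            best_cut = cut

    print 'min cut size:', len(best_cut)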
|
Django with apache and wsgi throws ImportError
Question: I'm trying to deploy my Django app to an Apache server with no luck. I
succeeded with the WSGI sample application and tried to host an empty Django
project. While it works properly with manage.py runserver, it throws the
following error when using Apache:
[notice] Apache/2.2.22 (Debian) PHP/5.4.4-14+deb7u9 mod_python/3.3.1 Python/2.7.3 mod_wsgi/2.7 configured -- resuming normal operations
[error] [client x.x.x.x] mod_wsgi (pid=8300): Exception occurred processing WSGI script '/usr/local/www/django/myapp/wsgi.py'.
[error] [client x.x.x.x] Traceback (most recent call last):
[error] [client x.x.x.x] File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 187, in __call__
[error] [client x.x.x.x] self.load_middleware()
[error] [client x.x.x.x] File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 44, in load_middleware
[error] [client x.x.x.x] for middleware_path in settings.MIDDLEWARE_CLASSES:
[error] [client x.x.x.x] File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 54, in __getattr__
[error] [client x.x.x.x] self._setup(name)
[error] [client x.x.x.x] File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 49, in _setup
[error] [client x.x.x.x] self._wrapped = Settings(settings_module)
[error] [client x.x.x.x] File "/usr/local/lib/python2.7/dist-packages/django/conf/__init__.py", line 132, in __init__
[error] [client x.x.x.x] % (self.SETTINGS_MODULE, e)
[error] [client x.x.x.x] ImportError: Could not import settings 'myapp.settings' (Is it on sys.path? Is there an import error in the settings file?): No module named myapp.settings
My wsgi.py is the following:
"""
WSGI config for myapp project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
https://docs.djangoproject.com/en/1.6/howto/deployment/wsgi/
"""
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
I have a wsgi.conf in the conf.d library for apache:
<VirtualHost *:80>
ServerName myapp.example.com
ServerAlias myapp
ServerAdmin [email protected]
DocumentRoot /var/www
<Directory /usr/local/www/django>
Order allow,deny
Allow from all
</Directory>
WSGIDaemonProcess myapp processes=2 threads=15 display-name=%{GROUP}
WSGIProcessGroup myapp
WSGIScriptAlias /myapp /usr/local/www/django/myapp/wsgi.py
LoadModule wsgi_module /usr/lib/apache2/modules/mod_wsgi.so
</VirtualHost>
WSGIPythonPath /usr/local/www/django/myapp
[SOLVED] Thanks, I started all over again, made the suggested modifications to
my configuration files, and now it's working. I couldn't flag both suggestions
correct, but I think both of them were necessary, and I had a third (fourth,
fifth...) bug too, which went away after reinstallation.
Answer: It looks like you've been using an old guide for setting up apache2 / wsgi.
I'd recommend using the official guide at
<https://code.google.com/p/modwsgi/wiki/InstallationInstructions>
Anyway, your specific problem is that the wsgi application isn't picking up
the python path correctly. Change you VirtualHost conf to something like this
<VirtualHost *:80>
ServerName myapp.example.com
ServerAlias myapp
ServerAdmin [email protected]
DocumentRoot /usr/local/www/django/myapp
WSGIDaemonProcess myapp processes=2 threads=15 display-name=%{GROUP} python-path=/usr/local/www/django/myapp:/path/to/system/python/site-packages
WSGIProcessGroup myapp
WSGIScriptAlias / /usr/local/www/django/myapp/wsgi.py
<Directory /usr/local/www/django/myapp>
<Files wsgi.py>
Order allow,deny
Allow from all
</Files>
</Directory>
</VirtualHost>
|
Cannot connect to FTP server
Question: I'm not able to connect to an FTP server; I'm getting the error below:
vmware@localhost ~]$ python try_ftp.py
Traceback (most recent call last):
File "try_ftp.py", line 5, in <module>
f = ftplib.FTP('ftp.python.org')
File "/usr/lib/python2.6/ftplib.py", line 116, in __init__
self.connect(host)
File "/usr/lib/python2.6/ftplib.py", line 131, in connect
self.sock = socket.create_connection((self.host, self.port), self.timeout)
File "/usr/lib/python2.6/socket.py", line 567, in create_connection
raise error, msg
socket.error: [Errno 101] Network is unreachable
I'm writing very simple code:
import ftplib
f = ftplib.FTP('ftp.python.org')
f.login('anonymous','[email protected]')
f.dir()
f.retrlines('RETR motd')
f.quit()
I checked my proxy settings, but they are set to "System proxy settings".
Please suggest what I should do.
Thanks, Sam
Answer:
[torxed@archie ~]$ telnet ftp.python.org 21
Trying 82.94.164.162...
Connection failed: Connection refused
Trying 2001:888:2000:d::a2...
telnet: Unable to connect to remote host: Network is unreachable
It's not so much that the hostname is bad (you mentioned ping works), but that
nothing is listening on the default FTP port 21. Or they're not running a
standard FTP server on that host at all, but rather they're using HTTP as a
transport:
<https://www.python.org/ftp/python/>
Try against **ftp.acc.umu.se** instead.
[torxed@archie ~]$ python
Python 3.3.5 (default, Mar 10 2014, 03:21:31)
[GCC 4.8.2 20140206 (prerelease)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ftplib
>>> f = ftplib.FTP('ftp.acc.umu.se')
>>>
|
python abstract attribute (not property)
Question: What's the best practice to define an abstract instance attribute, but not as
a property?
I would like to write something like:
class AbstractFoo(metaclass=ABCMeta):
    @property
    @abstractmethod
    def bar(self):
        pass

class Foo(AbstractFoo):
    def __init__(self):
        self.bar = 3
Instead of:
class Foo(AbstractFoo):
    def __init__(self):
        self._bar = 3

    @property
    def bar(self):
        return self._bar

    @bar.setter
    def setbar(self, bar):
        self._bar = bar

    @bar.deleter
    def delbar(self):
        del self._bar
Properties are handy, but for a simple attribute requiring no computation they
are overkill. This is especially important for abstract classes which will
be subclassed and implemented by the user (I don't want to force someone to
use `@property` when they could just have written `self.foo = foo` in
`__init__`).
The [Abstract attributes in Python](http://stackoverflow.com/questions/2736255/abstract-attributes-in-python)
question proposes using `@property` and `@abstractmethod` as its only answer,
which doesn't address my question.
<http://code.activestate.com/recipes/577761-simple-abstract-constants-to-use-when-abstractprop/>
may be the right way, but I am not sure. It also only
works with class attributes and not instance attributes.
Answer: If you really want to enforce that a subclass defines a given attribute, you
can use a metaclass. Personally, I think it may be overkill and not very
pythonic, but you could do something like this:
class AbstractFooMeta(type):
    def __call__(cls, *args, **kwargs):
        """Called when you call Foo(*args, **kwargs) """
        obj = type.__call__(cls, *args, **kwargs)
        obj.check_bar()
        return obj

class AbstractFoo(object):
    __metaclass__ = AbstractFooMeta
    bar = None

    def check_bar(self):
        if self.bar is None:
            raise NotImplementedError('Subclasses must define bar')

class GoodFoo(AbstractFoo):
    def __init__(self):
        self.bar = 3

class BadFoo(AbstractFoo):
    def __init__(self):
        pass
Basically the metaclass redefines `__call__` to make sure `check_bar` is
called after `__init__` on each new instance.
GoodFoo() # ok
BadFoo () # yield NotImplementedError
|
How to use itertools to compute all combinations with repeating elements?
Question: I have tried to use
[itertools](https://docs.python.org/2/library/itertools.html) to compute all
combinations of a list `['a', 'b', 'c']` using `combinations_with_replacement`
with repeating elements. The problem is in the fact that the indices seem to
be used to distinguish the elements:
> Return r length subsequences of elements from the input iterable allowing
> individual elements to be repeated more than once.
>
> Combinations are emitted in lexicographic sort order. So, if the input
> iterable is sorted, the combination tuples will be produced in sorted order.
>
> Elements are treated as unique based on their position, not on their value.
> So if the input elements are unique, the generated combinations will also be
> unique.
So this code snippet:
import itertools
for item in itertools.combinations_with_replacement(['a','b','c'], 3):
    print (item)
results in this output:
('a', 'a', 'a')
('a', 'a', 'b')
('a', 'a', 'c')
('a', 'b', 'b')
('a', 'b', 'c')
('a', 'c', 'c')
('b', 'b', 'b')
('b', 'b', 'c')
('b', 'c', 'c')
('c', 'c', 'c')
And what I need is for the combination set to contain elements like `('a', 'b',
'a')`, which seem to be missing. How can I compute the complete set?
Answer: It sounds like you want
[`itertools.product`](https://docs.python.org/2/library/itertools.html#itertools.product):
>>> from itertools import product
>>> for item in product(['a', 'b', 'c'], repeat=3):
... print item
...
('a', 'a', 'a')
('a', 'a', 'b')
('a', 'a', 'c')
('a', 'b', 'a')
('a', 'b', 'b')
('a', 'b', 'c')
('a', 'c', 'a')
('a', 'c', 'b')
('a', 'c', 'c')
('b', 'a', 'a')
('b', 'a', 'b')
('b', 'a', 'c')
('b', 'b', 'a')
('b', 'b', 'b')
('b', 'b', 'c')
('b', 'c', 'a')
('b', 'c', 'b')
('b', 'c', 'c')
('c', 'a', 'a')
('c', 'a', 'b')
('c', 'a', 'c')
('c', 'b', 'a')
('c', 'b', 'b')
('c', 'b', 'c')
('c', 'c', 'a')
('c', 'c', 'b')
('c', 'c', 'c')
>>>
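As a sanity check on the two outputs: `product` over n items with `repeat=r` yields n**r tuples (3**3 = 27 above), while `combinations_with_replacement` yields C(n+r-1, r) = C(5, 3) = 10, which is exactly what the question observed.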
|
DB2 Query Error SQL0204N Even With The Schema Defined
Question: I'm using pyodbc to access DB2 10.1.0
I have a login account named foobar and a schema with the same name. I have a
table named users under the schema.
When I'm logged in as foobar, I can run the following query successfully from
the command line:
select * from users
I have a small Python script that I'm using to connect to the database. The
script is:
#!/usr/bin/python
import pyodbc

if __name__ == "__main__":
    accessString ="DRIVER={DB2};DATABASE=MYDATABASE;SERVER=localhost;UID=foobar; PWD=foobarish1;CURRENTSCHEMA=FOOBAR"
    print accessString
    cnxn = pyodbc.connect(accessString , autocommit=True)
    cursor = cnxn.cursor()
    query = "SELECT * FROM USERS"
    cursor.execute(query)
    rows = cursor.fetchall()
    for row in rows:
        print 'Row data'
        print row[0]
    cursor.close()
    cnxn.close()
When I run the script, I get the following error:
('42S02', '[42S02] [IBM][CLI Driver][DB2/LINUXX8664] SQL0204N "FOOBAR.USERS"
is an undefined name. SQLSTATE=42704\n (-204) (SQLExecDirectW)')
This usually means that the schema isn't defined. However, if I change the
query in the script to:
VALUES CURRENT SCHEMA
the script runs successfully and it returns
FOOBAR
Does anyone know how to fix this so I can query the user table? Your
assistance and insight is appreciated.
EDIT: I've also tried adding the schema directly to the table name, making the
query
SELECT * FROM FOOBAR.USERS
and I still get the same error.
Answer: I had the same issue and solved it by setting the schema explicitly with a query:
SET CURRENT SCHEMA foobar
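For completeness, a minimal sketch of issuing that statement from the Python side before querying, reusing the connection from the question:
cursor = cnxn.cursor()
cursor.execute("SET CURRENT SCHEMA FOOBAR")  # set the schema explicitly
cursor.execute("SELECT * FROM USERS")        # now resolves to FOOBAR.USERS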
|
Plotting histogram from dictionary Python
Question: I have a dictionary with one value associated with each key.
I would like to plot this dictionary as a bar chart with `matplotlib`, set a
different color for each bar, and find a way to use long strings as legible
labels.
import numpy as np
import pylab as pl
X = np.arange(len(dictionary))
pl.bar(X, dictionary.values(), align='center', width=0.5)
pl.xticks(X, dictionary.keys())
pl.xticks(rotation=20)
ymax = max(dictionary.values()) + 1
pl.ylim(0, ymax)
pl.show()
The result:

My keys are very long, so we can't read anything clearly! In addition, it
would be great to have this plot sorted by y values. I know a dictionary can't
be sorted, so should I go through a list instead?
Any ideas?
Thanks
Answer: > I would like to plot this dictionary with matplotlib, set a different color
> for each key and find a way to plot the key which is a long string ... And
> in addition it would be great to have this plot sorted.
Unfortunately the best way I was able to graph long strings was to truncate
them. I arbitrarily chose 15 characters as the maximum length, you can use any
length you find appropriate.
The following code defines a dictionary (`Dictionary`), creates a list of
sorted keys and sorted values by value from largest to smallest, and truncates
keys that are too long to display well. When the bar plot is graphed, it is
done one bar at a time so it can set an individual color to the bar. The color
is chosen by iterating through the tuple defined at the beginning (Colors).
import numpy as np
import matplotlib.pyplot as plt
Dictionary = {"A":3,"C":5,"B":2,"D":3,"E":4,
"A very long key that will be truncated when it is graphed":1}
Dictionary_Length = len(Dictionary)
Max_Key_Length = 15
Sorted_Dict_Values = sorted(Dictionary.values(), reverse=True)
Sorted_Dict_Keys = sorted(Dictionary, key=Dictionary.get, reverse=True)
for i in range(0,Dictionary_Length):
Key = Sorted_Dict_Keys[i]
Key = Key[:Max_Key_Length]
Sorted_Dict_Keys[i] = Key
X = np.arange(Dictionary_Length)
Colors = ('b','g','r','c') # blue, green, red, cyan
Figure = plt.figure()
Axis = Figure.add_subplot(1,1,1)
for i in range(0,Dictionary_Length):
Axis.bar(X[i], Sorted_Dict_Values[i], align='center',width=0.5, color=Colors[i%len(Colors)])
Axis.set_xticks(X)
xtickNames = Axis.set_xticklabels(Sorted_Dict_Keys)
plt.setp(xtickNames, rotation=20)  # rotate the tick label Text objects
ymax = max(Sorted_Dict_Values) + 1
plt.ylim(0,ymax)
plt.show()
Output graph:

|
Is there a max image size (pixel width and height) within wx where png images lose their transparency?
Question: Initially, I loaded in 5 .png's with transparent backgrounds using wx.Image()
and every single one kept its transparent background and looked the way I
wanted it to on the canvas (it kept the background of the canvas). These png
images were about (200,200) in size. I proceeded to load a png image with a
transparent background that was about (900,500) in size onto the canvas and it
made the transparency a black box around the image. Next, I opened the image
up with gimp and exported the transparent image as a smaller size. Then when I
loaded the image into python the image kept its transparency. Is there a max
image size (pixel width and height) within wx where png images lose there
transparency? Any info would help. Keep in mind that I can't resize the
picture before it is loaded into wxpython. If I do that, it will have already
lost its transparency.
import wx
import os
def opj(path):
return apply(os.path.join, tuple(path.split('/')))
def saveSnapShot(dcSource):
size = dcSource.Size
bmp= wx.EmptyBitmap(size.width, size.height)
memDC = wx.MemoryDC()
memDC.SelectObject(bmp)
memDC.Blit(0, 0, size.width, size.height, dcSource, 0,0)
memDC.SelectObject(wx.NullBitmap)
img = bmp.ConvertToImage()
img.SaveFile('path to new image created', wx.BITMAP_TYPE_JPEG)
def main():
app = wx.App(None)
testImage = wx.Image(opj('path to original image'), wx.BITMAP_TYPE_PNG).ConvertToBitmap()
draw_bmp = wx.EmptyBitmap(1500, 1500)
canvas_dc = wx.MemoryDC(draw_bmp)
background = wx.Colour(208, 11, 11)
canvas_dc.SetBackground(wx.Brush(background))
canvas_dc.Clear()
canvas_dc.DrawBitmap(testImage,0, 0)
saveSnapShot(canvas_dc)
if __name__ == '__main__':
main()
Answer: I don't know if I got this right, but if I convert your example from MemoryDC
to PaintDC, I can fix the transparency issue. The key is to pass True
for useMask in the DrawBitmap method. If I omit the useMask parameter, it defaults
to False and no transparency is used.
The documentation is here: <http://www.wxpython.org/docs/api/wx.DC-
class.html#DrawBitmap>
I hope this what you wanted to do...
import wx
class myFrame(wx.Frame):
def __init__(self, testImage):
wx.Frame.__init__(self, None, size=testImage.Size)
self.Bind(wx.EVT_PAINT, self.OnPaint)
self.testImage = testImage
self.Show()
def OnPaint(self, event):
dc = wx.PaintDC(self)
background = wx.Colour(255, 0, 0)
dc.SetBackground(wx.Brush(background))
dc.Clear()
#dc.DrawBitmap(self.testImage, 0, 0) # black background
dc.DrawBitmap(self.testImage, 0, 0, True) # transparency on, now red
def main():
app = wx.App(None)
testImage = wx.Image(r"path_to_image.png", wx.BITMAP_TYPE_PNG).ConvertToBitmap()
Frame = myFrame(testImage)
app.MainLoop()
if __name__ == '__main__':
main()
(Edit) Ok. I think your original example can be fixed in a similar way
memDC.Blit(0, 0, size.width, size.height, dcSource, 0,0, useMask=True)
canvas_dc.DrawBitmap(testImage,0, 0, useMask=True)
Just making sure that useMask is True was enough to fix the transparency issue
in your example, too.
|
Using Vagrant and VM with python-django & PostgreSQL
Question: I am trying to make a python-django project on a VM with Python/Django 2.7.6
and PostgreSQL 9.3.4 installed.
I am following [this](https://docs.djangoproject.com/en/1.6/intro/tutorial01/)
tutorial. After making
[changes](https://docs.djangoproject.com/en/1.6/ref/settings/#std:setting-
DATABASES) in settings.py for Postgres, when I do vagrant up and vagrant ssh,
and after python manage.py syncdb, it shows the following error.
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/usr/lib/python2.7/dist-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python2.7/dist-packages/django/core/management/base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/lib/python2.7/dist-packages/django/core/management/base.py", line 280, in execute
translation.activate('en-us')
File "/usr/lib/python2.7/dist-packages/django/utils/translation/__init__.py", line 130, in activate
return _trans.activate(language)
File "/usr/lib/python2.7/dist-packages/django/utils/translation/trans_real.py", line 188, in activate
_active.value = translation(language)
File "/usr/lib/python2.7/dist-packages/django/utils/translation/trans_real.py", line 177, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "/usr/lib/python2.7/dist-packages/django/utils/translation/trans_real.py", line 159, in _fetch
app = import_module(appname)
File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
File "/usr/lib/python2.7/dist-packages/django/contrib/admin/__init__.py", line 6, in <module>
from django.contrib.admin.sites import AdminSite, site
File "/usr/lib/python2.7/dist-packages/django/contrib/admin/sites.py", line 4, in <module>
from django.contrib.admin.forms import AdminAuthenticationForm
File "/usr/lib/python2.7/dist-packages/django/contrib/admin/forms.py", line 6, in <module>
from django.contrib.auth.forms import AuthenticationForm
File "/usr/lib/python2.7/dist-packages/django/contrib/auth/forms.py", line 17, in <module>
from django.contrib.auth.models import User
File "/usr/lib/python2.7/dist-packages/django/contrib/auth/models.py", line 48, in <module>
class Permission(models.Model):
File "/usr/lib/python2.7/dist-packages/django/db/models/base.py", line 96, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
File "/usr/lib/python2.7/dist-packages/django/db/models/base.py", line 264, in add_to_class
value.contribute_to_class(cls, name)
File "/usr/lib/python2.7/dist-packages/django/db/models/options.py", line 124, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "/usr/lib/python2.7/dist-packages/django/db/__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "/usr/lib/python2.7/dist-packages/django/db/utils.py", line 198, in __getitem__
backend = load_backend(db['ENGINE'])
File "/usr/lib/python2.7/dist-packages/django/db/utils.py", line 113, in load_backend
return import_module('%s.base' % backend_name)
File "/usr/lib/python2.7/dist-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
File "/usr/lib/python2.7/dist-packages/django/db/backends/postgresql_psycopg2/base.py", line 25, in <module>
raise ImproperlyConfigured("Error loading psycopg2 module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading psycopg2 module: No
module named psycopg2
Also, one more thing: I am able to run the project with SQLite on the VM.
What should I do?
Answer: `psycopg2` is the name of the Python module normally used to interface with
PostgreSQL. You will need to install it.
You should be able to install it using Pip with `pip install psycopg2`. Pip
itself can usually be found using your package manager - on Ubuntu, for
instance, IIRC the package is called python-pip for Python 2 and python3-pip
for Python 3.
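For completeness, a minimal sketch of the Postgres entry in settings.py once psycopg2 is installed; the database name, user and password below are placeholders, not values from the question:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydb',        # placeholder database name
        'USER': 'myuser',      # placeholder user
        'PASSWORD': 'secret',  # placeholder password
        'HOST': 'localhost',
        'PORT': '5432',
    }
}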
|
Error with hex encode in Python 3.3
Question: I am trying to modify code from [this
question](http://stackoverflow.com/questions/3241929/python-find-dominant-
most-common-color-in-an-image) to use in Python 3.3 (I installed Pillow, scipy
and NumPy):
import struct
from PIL import Image
import scipy
import scipy.misc
import scipy.cluster
NUM_CLUSTERS = 5
print ('reading image')
im = Image.open("image.jpg")
im = im.resize((150, 150)) # optional, to reduce time
ar = scipy.misc.fromimage(im)
shape = ar.shape
ar = ar.reshape(scipy.product(shape[:2]), shape[2])
print ('finding clusters')
codes, dist = scipy.cluster.vq.kmeans(ar, NUM_CLUSTERS)
print ('cluster centres:\n'), codes
vecs, dist = scipy.cluster.vq.vq(ar, codes) # assign codes
counts, bins = scipy.histogram(vecs, len(codes)) # count occurrences
index_max = scipy.argmax(counts) # find most frequent
peak = codes[index_max]
colour = ''.join(chr(c) for c in peak).encode('hex')
print ('most frequent is %s (#%s)') % (peak, colour)
But I get this error:
Traceback (most recent call last): File
"C:/Users/User/Desktop/pyt33_pic.py", line 24, in <module>
colour = ''.join(chr(c) for c in peak).encode('hex') LookupError: unknown encoding: hex
What am I doing wrong?
Answer: In 2.x the codec "hex_codec" is aliased to "hex". This alias is restored in
3.4. A bigger change is that a bytes-to-bytes codec like this requires
`codecs.encode` in Python 3. Additionally, for string formatting you'll need
to decode the result. For example:
>>> peak
array([131, 128, 124], dtype=uint8)
>>> codecs.encode(peak, 'hex_codec').decode('ascii')
'83807c'
Alternatively you can use the `format` function to individually convert the
numbers to hex:
>>> ''.join(format(c, '02x') for c in peak)
'83807c'
|
Creating a list of numpy.ndarray of unequal length in Cython
Question: I now have Python code to create a list of ndarrays of unequal length. The
code snippet looks like this:
import numpy as np
from mymodule import list_size, array_length # list_size and array_length are two lists of ints, and the len(array_length) == list_size
ndarray_list = []
for i in range(list_size):
ndarray_list.append(np.zeros(array_length[i]))
Now, I need to convert this to Cython, but do not know how. I tried to create
a 2-d dynamically allocated array, like this:
import numpy as np
cimport numpy as np
from mymodule import list_size, array_length
cdef int i
ndarray_list = <double **>malloc(list_size * sizeof(double*))
for i in range(list_size):
ndarray_list[i] = <double *>malloc(array_length[i] * sizeof(double))
However, this method only creates a double pointer in ndarray_list[i]. I
cannot pass it to other functions which require some of the ndarray methods.
What should I do?
Answer: In order to pass the C `double*` buffer to a function that requires a
`numpy.ndarray` you can create a temporary buffer and assign to its memory
address the address of the `double*` array.
This `malloc()`-based solution is orders of magnitude faster than the other
answer based on NumPy buffers. Note how to `free()` the inner arrays to avoid
a memory leak.
import numpy as np
cimport numpy as np
from cython cimport view
from libc.stdlib cimport malloc, free
cdef int i
cdef double test
list_size = 10
ndarray_list = <double **>malloc(list_size * sizeof(double*))
array_length = <int *>malloc(list_size * sizeof(int*))
for i in range(list_size):
array_length[i] = i+1
ndarray_list[i] = <double *>malloc(array_length[i] * sizeof(double))
for j in range(array_length[i]):
ndarray_list[i][j] = j
for i in range(list_size):
for j in range(array_length[i]):
test = ndarray_list[i][j]
cdef view.array buff
for i in range(list_size):
buff = <double[:array_length[i]]>ndarray_list[i]
print np.sum(buff)
#...
for i in range(list_size):
free(ndarray_list[i])
free(ndarray_list)
free(array_length)
|
Python: function takes 1 positional argument but 2 were given, how?
Question: I was creating a Sudoku game in Python with Tk.
I got an error about the function bound to a keypress on a button.
from random import randint
from tkinter import *
class sudoku:
global root,result,lb
def __init__(self):
self.aleatoriedade()
for k in range(9):
j=randint(0,80)
x=j//9
y=j-(x*9)
lb[x][y]['text']=result[x][y]
lb[0][0].bind('<KeyPress-2>',self.kk)
        # set this up for testing
root.mainloop()
def kk(self):
lb[0][0]['text']='2'
def aleatoriedade(self):
for i in range(9):
var=0
while var in result[0]:
var=randint(1,9)
result[0][i]=var
for i in range(1,9):
for j in range(9):
result[i][j]=result[0][field[i][j]-1]
#MAIN()
n = 3
field = [[(i*n + i//n + j) % (n*n) + 1 for j in range(9)]for i in range(9)]
result = [[None for i in range(9)]for i in range(9)]
lb=[[None for i in range(9)]for i in range(9)]
x=0
y=0
root=Tk()
for i in range(9):
for j in range(9):
lb[i][j]=Button(root,font=("Verdana",'13',"bold"),bd=1,height=3,width=6)
if (i in (0,1,2,6,7,8) and j in (0,1,2,6,7,8))or(i in (3,4,5) and j in (3,4,5)):
lb[i][j]['bg']='white'
lb[i][j].grid(row=i,column=j)
janela=sudoku()
and this error/exception in `lb[0][0].bind('<KeyPress-2>',self.kk)`
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python33\lib\tkinter\__init__.py", line 1489, in __call__
return self.func(*args)
TypeError: kk() takes 1 positional argument but 2 were given
I can't see where the error is; I have included `self` in my function.
Answer: I'm not a `tkinter` expert, but the
method
bind(some_string, some_function)
registers `some_function` as a handler, and when the event fires tkinter
calls it with an `Event` object describing what happened.
You have declared the method `kk` like
def kk(self):
which means it only expects _one argument_. You are also passing
the bound method `self.kk` to `bind()`, which means that it will be called like
self.kk(event)
There is the problem! That call, in fact, is passing two arguments to the
method `kk`. It's equivalent to
sudoku.kk(janela, event)
Note that `janela` is the actual instance of the class `sudoku`. Coming back
to the problem, you are passing _two arguments_!!!
**How can you solve it?**
As I said I'm not an expert on this topic, but the fix is to declare the
method `kk` with a second parameter to receive the event:
def kk(self, event):
# ...
**Note:** I would recommend you to follow [Python naming
conventions](http://www.python.org/dev/peps/pep-0008/#naming-conventions). In
other words, class names should be like `SomeClassName` or `Sudoku`.
|
Is this function built in in Python 2.7?
Question: Suppose I have two lists of the same size in Python, the first:
[100, 200, 300, 400]
and I want the other to be:
[0, 100, 300, 600]
where each element in the 2nd list equals the sum of all previous elements
in the first.
Is there a built-in Python function that does such an operation on a list? Or
do I have to think about an algorithm to do it?
Answer: If you use Python 3.2+, you can use
[`itertools.accumulate`](https://docs.python.org/3/library/itertools.html#itertools.accumulate):
>>> import itertools
>>>
>>> a_list = [100, 200, 300, 400]
>>> list(itertools.accumulate([0] + a_list[:-1]))
[0, 100, 300, 600]
**UPDATE**
To avoid creation of temporary list, use
[`itertools.islice`](https://docs.python.org/3/library/itertools.html#itertools.islice),
[`itertools.chain`](https://docs.python.org/3/library/itertools.html#itertools.chain):
>>> from itertools import chain, accumulate, islice
>>>
>>> a_list = [100, 200, 300, 400]
>>> list(accumulate(chain([0], islice(a_list, 0, len(a_list) - 1))))
[0, 100, 300, 600]
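Since the question title asks about Python 2.7, where `itertools.accumulate` does not exist, a minimal generator-based sketch of the same idea:
def exclusive_prefix_sums(seq):
    total = 0
    for x in seq:
        yield total  # the sum of all previous elements
        total += x
a_list = [100, 200, 300, 400]
print list(exclusive_prefix_sums(a_list))
# [0, 100, 300, 600]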
|
Why this regular expression pattern matches even an extra character in string?
Question: It is becoming hard for me to learn regular expressions, see the following
python regular expression code snippet.
>>> import re
>>> str = "demo"
>>> re.search("d?mo",str)
<_sre.SRE_Match object at 0x00B65330>
In the above example, why is it returning a match object even though the string shouldn't match?
I know the symbol '?' means it will match either 0 or 1 repetitions of the
preceding character, but
From the above example,
1.'d' is matched with 'd'
2.'m' is matched with 'm'
3.'o' is matched with 'o'
But which character matches 'e'? According to my understanding, only
'dmo' or 'mo' should match the given pattern, so why 'demo'?
If I want to match only 'dmo' or 'mo', what is the correct pattern?
Answer: `re.search('R', str)` is effectively the same as `re.match('.*R', str)` for
regexes `R`.
So you have (effectively... ignoring newlines)
re.match(".*d?mo", "demo")
where the `.*` matches `"de"`, the `d?` matches `""` and the `mo` matches
`"mo"`.
* * *
You can check this with a capturing group:
re.search("(d?mo)", "demo").group(0)
#>>> 'mo'
The `d?` matches nothing, as it's optional.
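To answer the follow-up: if the pattern should match only the whole strings 'dmo' or 'mo', anchor it at the end (re.match already anchors at the start), e.g.:
re.match("d?mo$", "demo")  # None: 'demo' is rejected
re.match("d?mo$", "dmo")   # matches
re.match("d?mo$", "mo")    # matches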
|
Can't save matplotlib animation
Question: I am trying to get a simple animation saved using ffmpeg. I followed a
tutorial to install ffmpeg, and I can now access it from the command prompt.
Now I run this piece of code:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
fig = plt.figure()
ax = plt.axes(xlim=(0, 2), ylim=(-2, 2))
line, = ax.plot([], [], lw=2)
def init():
line.set_data([], [])
return line,
def animate(i):
x = np.linspace(0, 2, 1000)
y = np.sin(2 * np.pi * (x - 0.01 * i))
line.set_data(x, y)
return line,
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=200, interval=20, blit=True)
mywriter = animation.FFMpegWriter()
anim.save('mymovie.mp4',writer=mywriter)
plt.show()
I get this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 523, in runfile
execfile(filename, namespace)
File "C:\Users\Renger\.xy\startups\b.py", line 23, in <module>
anim.save('mymovie.mp4',writer=mywriter)
File "C:\Python27\lib\site-packages\matplotlib\animation.py", line 609, in save
with writer.saving(self._fig, filename, dpi):
File "C:\Python27\lib\contextlib.py", line 17, in __enter__
return self.gen.next()
File "C:\Python27\lib\site-packages\matplotlib\animation.py", line 166, in saving
self.setup(*args)
File "C:\Python27\lib\site-packages\matplotlib\animation.py", line 156, in setup
self._run()
File "C:\Python27\lib\site-packages\matplotlib\animation.py", line 183, in _run
stdin=subprocess.PIPE)
File "C:\Python27\lib\subprocess.py", line 711, in __init__
errread, errwrite)
File "C:\Python27\lib\subprocess.py", line 948, in _execute_child
startupinfo)
WindowsError: [Error 2] Het systeem kan het opgegeven bestand niet vinden
The last Dutch sentence means something like: The system can't find the
specified file.
What do these errors mean, and how can I solve them?
Answer: You need to specify your path to `ffmpeg`:
On linux I use:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
plt.rcParams['animation.ffmpeg_path'] = '/usr/bin/ffmpeg'
You will obviously have to point to your windows .exe instead of
'/usr/bin/ffmpeg'
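On Windows that might look like the following; the install location here is just a guess, point it at wherever your ffmpeg.exe actually lives:
plt.rcParams['animation.ffmpeg_path'] = r'C:\ffmpeg\bin\ffmpeg.exe'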
If you don't have `ffmpeg` installed, you can get it
[here](http://ffmpeg.zeranoe.com/builds/)
|
"Fuzzy" hashtables in Python to replace elif chains?
Question: The Pythonic way to choose one of many execution paths or bits of data
dependent on a variable is to use a hashtable, like so:
mapping = {'bar':"xyzzy", 'foo':"plugh", 'baz':"frobnitz"}
magic = mapping[command]
What do I do in the case where I want to choose an execution path based on
whether or not a value is within one of many ranges of values?
Or, to put it more succinctly, how do I pythonicize this:
#A value from one to four indicates an O-type star. 4/58 chance.
if typeselec <= 4:
self.specttype="O"
#A value from 5-20 indicates a B-type star. 15/58 chance.
elif typeselec <= 20:
self.specttype="B"
#A value from 20-30 indicates an A-type star. 10/58 chance.
elif typeselec <= 30:
self.specttype="A"
#A value from 31-36 indicates an F-type star. 6/58 chance.
elif typeselec <= 36:
self.specttype="F"
#A value from 37-40 indicates a G-type star. 4/58 chance.
elif typeselec <= 40:
self.specttype="G"
#A value from 41-50 indicates a K-type star. 10/58 chance.
    elif typeselec <= 50:
self.specttype="K"
#A value from 50-58 indicates an M-type star. 8/58 chance.
else:
self.specttype="M"
One obvious means would be `{1:'O', 2:'O',3:'O',4:'O',5:'B',6:'B'...}`, but
that seems awfully inelegant and inefficient. Is there any other way?
Answer: Exactly this kind of problem is used as an example for the
[bisect](https://docs.python.org/3.4/library/bisect.html#other-examples)
module.
Adapted to your problem it would look like this:
from bisect import bisect_left
def select_type(typeselec,
breakpoints=[4, 20, 30, 36, 40, 50],
                types='OBAFGKM'):
return types[bisect_left(breakpoints, typeselec)]
for i in range(59):
print(i, select_type(i))
The difference from the example in the docs is that here the breakpoint is part
of the lower bound (`<=`), whereas in the example it's part of the upper
(`>=`), so `bisect_left` needs to be used instead of `bisect` (which is an
alias for `bisect_right`).
Although `bisect` doesn't have a time complexity of `O(1)`, it uses binary
search, so it has a time complexity of `O(log(n))`, which for larger `n` is
a good improvement over the `O(n)` of the `if` cascade.
|
Unable to import lib tiff after installing Anaconda on Mac OS X 10.9.2
Question: I have developed some software using Python under Windows 7.
I have given it to a colleague to run on a Mac (OS X 10.9.2). I have never
used a Mac and am having trouble helping them to get started. I have
downloaded and installed [Anaconda
1.9.2](https://store.continuum.io/cshop/anaconda/) on the Mac. According to
the continuum documentation, `libtiff` is included, but when I run my python
file using the Spyder IDE I get the following error when it tries to import
`libtiff`:
> ImportError: No module named libtiff.
Following one of the answers on Stack Overflow, I tried:
conda install libtiff
This runs and returns:
> All requested packages already installed.
However on Windows 7 I can see a libtiff folder under `\python27\lib\site-
packages`. On the Mac there is no `libtiff` folder under `/lib/python2.7/site-
packages`.
Can anyone tell me what I am missing?
Answer: I'm unclear on this, but to begin with you can type `echo $PATH`
in the terminal and see which paths are set. I'm not sure how Anaconda
interacts with the system, but a good hunch is that if the library file is not
on a searched path, that would cause this.
Also, looking at this thread on [Google
Groups](https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/F8Q-8xyvrks)
it seems that Anaconda installs its own libraries that might need to be
symbolically linked into the main `/usr/local/lib` directory. The user Denis
Engemann, who made the post, posts this bash script in the last response in the
thread:
for lib in ~/anaconda/lib/*;
do
ln -s $lib /usr/local/lib/$(basename $lib);
done
I would recommend checking those two directories first before linking to make
sure all is as expected.
|
How to choose which child class to instantiate dynamically
Question: My current project is in as3, but this is something I am curious about for
other languages as well.
I'm attempting to use a factory object to create the appropriate object
dynamically. My `LevelFactory` has a static method that returns a new instance
of the level number provided to the method. In the code calling that method, I
am able to dynamically create the buttons to call the levels like so:
for (var i:int = 1; i < 4; i++) {
var tempbutton:Sprite = createButton("Level " + i, 25, 25 +(60 * i), start(i));
_buttons.push(button);
}
This code just creates a simple button with the given arguments `(ButtonText,
x, y, function)`. It's working fine. The buttons are created, and clicking on
one of them calls this method with the appropriate argument
private function start(level:int):Function {
return function(e:MouseEvent):void {
disableButtons();
newLevel = LevelFactory.createLevel(level);
addChild(newLevel);
}
}
This is all working fine; I'm just providing it for background context. The
question I have is this: Is it possible to dynamically choose the type of
object that my static function returns? Currently, I have am doing it as
follows
public static function createLevel(level:int):Level {
var result:Level;
switch(level) {
case 1: result = new Level1(); break;
case 2: result = new Level2(); break;
//etc
}
return result;
}
I should note that all of these Level1, Level2, etc. classes extend my base
level class. (Yay polymorphism!) What I would like to do is be able to do
something along the lines of
public static function createLevel(level:int):Level {
var result:Level;
var levelType:String = "Level" + level;
return new levelType();
}
Obviously it's not going to work with a string like that, but is there any way
to accomplish this in as3? What about other languages, such as Java or Python?
Can you dynamically choose what type of child class to instantiate?
Update:
import Levels.*;
import flash.events.*;
import flash.utils.*;
public class LevelFactory
{
public static function createLevel(level:int):Level {
var ref:Class = getDefinitionByName('Levels.' + 'Level' + level) as Class;
var result:Level = new ref();
return result;
}
}
Update/Edit: getDefinitionByName seems to be what I'm looking for, but it has
a problem. It seems that the compiler will strip unused imports, which means
that unless I declare each subclass in the code ahead of time, this method
will get a reference error. How can I get around the need to declare each
class separately (which defeats the purpose of dynamic instantiation)?
Answer: Yes, you sure can, and it's very similar to the string thing that you've
provided. The only thing that you are missing is the `getDefinitionByName`
method:
[http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/utils/package.html#getDefinitionByName()](http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/utils/package.html#getDefinitionByName%28%29)
You can generate whatever class name you want, and what this method does is
search for that class in its namespace; if it finds it, it returns it as a
class:
var ClassReference:Class = getDefinitionByName("flash.display.Sprite") as Class;
var instance:Object = new ClassReference();
This piece of code will instantiate a Sprite. This way you can instantiate
your classes without all those switches and cases, especially when you have to
make a hundred levels :)
Hope that helps! Cheers!
Edit:
In your case, the code should be:
var ref:Class = getDefinitionByName('com.path.Level' + index) as Class;
var level:Level = new ref(); // it will actually be Level1 Class
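Regarding the edit about the compiler stripping unused classes: a common workaround is to reference each class explicitly once somewhere in the code, for example in a static array of class references such as `[Level1, Level2, Level3]`. That forces the compiler to include the classes in the build while still letting you pick one by name at runtime.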
|
Finding first samples greater than a threshold value efficiently in Python (and MATLAB comparison)
Question: Instead of finding all the samples / data points within a list or an array
which are greater than a particular `threshold`, I would like to find only the
first samples where a `signal` becomes greater than a `threshold`. The signal
might cross the threshold several times. For example if I have an example
signal:
signal = [1, 2, 3, 4, 4, 3, 2, 1, 0, 3, 2, 1, 0, 0, 1, 1, 4, 8, 7, 6, 5, 0]
and a `threshold = 2`, then
signal = numpy.array(signal)
is_bigger_than_threshold = signal > threshold
would give me all values in `signal`which are greater than `threshold`.
However, I would like to get only the first samples whenever signal becomes
greater than threshold. Therefore, I am going through the whole list making
boolean comparisons like
first_bigger_than_threshold = list()
first_bigger_than_threshold.append(False)
for i in xrange(1, len(is_bigger_than_threshold)):
if(is_bigger_than_threshold[i] == False):
val = False
elif(is_bigger_than_threshold[i]):
if(is_bigger_than_threshold[i - 1] == False):
val = True
elif(is_bigger_than_threshold[i - 1] == True):
val = False
first_bigger_than_threshold.append(val)
This gives me the result I was looking for, namely
[False, False, True, False, False, False, False, False, False, True, False, False, False,
False, False, False, True, False, False, False, False, False]
In MATLAB I would do similarly:
for i = 2 : numel(is_bigger_than_threshold)
if(is_bigger_than_threshold(i) == 0)
val = 0;
elseif(is_bigger_than_threshold(i))
if(is_bigger_than_threshold(i - 1) == 0)
val = 1;
elseif(is_bigger_than_threshold(i - 1) == 1)
val = 0;
end
end
first_bigger_than_threshold(i) = val;
end % for
Is there a more efficient (faster) way to perform this calculation?
If I generate data in Python, e.g.
signal = [round(random.random() * 10) for i in xrange(0, 1000000)]
and time it, calculating these values took `4.45` seconds. If I generate data
in MATLAB
signal = round(rand(1, 1000000) * 10);
and execute the program it takes only `0.92` seconds.
Why is MATLAB almost 5 times quicker than Python performing this task?
Thanks in advance for your comments!
Answer: [This post](http://stackoverflow.com/questions/17242684/for-loop-in-python-
is-10x-slower-than-matlab) explains why your code is slower than Matlab.
Try this code
import numpy as np
threshold = 2
signal = np.array([1, 2, 3, 4, 4, 3, 2, 1, 0, 3, 2, 1, 0, 0, 1, 1, 4, 8, 7, 6, 5, 0])
indices_bigger_than_threshold = np.where(signal > threshold)[0] # get item
print indices_bigger_than_threshold
# [ 2 3 4 5 9 16 17 18 19 20]
non_consecutive = np.where(np.diff(indices_bigger_than_threshold) != 1)[0]+1 # +1 for selecting the next
print non_consecutive
# [4 5]
first_bigger_than_threshold1 = np.zeros_like(signal, dtype=np.bool)
first_bigger_than_threshold1[indices_bigger_than_threshold[0]] = True # retain the first
first_bigger_than_threshold1[indices_bigger_than_threshold[non_consecutive]] = True
`np.where` returns indices that match the condition.
The strategy is to get the indices above the `threshold` and remove the
consecutive ones.
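An alternative sketch of the same idea, detecting the False-to-True transitions directly on the boolean array:
above = signal > threshold
first = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1  # rising edges
if above[0]:  # the signal may already start above the threshold
    first = np.insert(first, 0, 0)
print first
# [ 2  9 16]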
BTW, welcome to Python/Numpy world.
|
Does Parallel Python PP always get back jobs'results in the order we created them?
Question: We are launching processes using PP and need to aggregate the results of jobs
in the order we sent them to the server. Is there a kind of pile to control
the aggregation of the results?
import pp, numpy
def my_silly_function(a,b):
return a**4+b**15
# call the server (with pile?)
ppservers = ()
job_server = pp.Server(ppservers=ppservers, secret="password")
# launch jobs and aggregate them into a list
jobs1 = numpy.array([job_server.submit(my_silly_function, args=(w, w + 40)) for w in xrange(1000)])
--> we hope that results will be returned in the same order we sent them (and
thus don't need a lexsort to get them in the order we are asking for)?
Answer: The answer is yes. The order is maintained. If you'd like to choose how to get
your results back, with a blocking map, iterative map, or asynchronous
(unordered) map, then you can use the fork of `pp` in `pathos`. The
`pathos.pp` fork also works with python 3.x.
>>> # instantiate and configure the worker pool
>>> from pathos.pp import ParallelPythonPool
>>> pool = ParallelPythonPool(nodes=4)
>>>
>>> # do a blocking map on the chosen function
>>> print pool.map(pow, [1,2,3,4], [5,6,7,8])
[1, 32, 729, 16384]
>>>
>>> # do a non-blocking map, then extract the results from the iterator
>>> results = pool.imap(pow, [1,2,3,4], [5,6,7,8])
>>> print "..."
>>> print list(results)
[1, 32, 729, 16384]
>>>
>>> # do an asynchronous map, then get the results
>>> results = pool.amap(pow, [1,2,3,4], [5,6,7,8])
>>> while not results.ready():
>>> time.sleep(5); print ".",
>>> print results.get()
[1, 729, 32, 16384]
get `pathos.pp` here: <https://github.com/mmckerns>
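If you stay with plain `pp`, note that each object returned by `submit()` is a callable that blocks until its job has finished, so collecting results in submission order is simply (a minimal sketch):
jobs = [job_server.submit(my_silly_function, args=(w, w + 40)) for w in xrange(1000)]
results = [job() for job in jobs]  # same order as submitted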
|
max_user_connections after gevent.monkey.patch_all()
Question: I am using gevent-socketio v0.13.8 for a chat application on a Django-based
web app. My database is MySQL with max_user_connections = 1500. My
socket server is daemonized with python-daemon. I was using the socket server
without monkey patching and it was running fine, except that when there was an
error in a greenlet, the whole system failed with SystemExit and no connection could
be established anymore. The solution was to restart the whole server.
However, I don't want to restart the server every time, so I came up
with the idea of monkey patching. I don't know if it is relevant to my problem,
but I want my socket server to keep running even if an unhandled exception causes a
SystemExit in a greenlet.
Then I used gevent.monkey.patch_all() in my server start function. And here is
my main problem right now: After 3-4 connections, MySql causes the following
error:
User xxx already has more than 'max_user_connections' active connections
The max_user_connections variable is set to 1500 on the MySQL server. I think
something is creating new database connections in the greenlets.
By the way max_user_connection error does not appear when I use:
monkey.patch_all(socket=True, dns=True, time=True, select=True, thread=True, os=True, ssl=True, httplib=True, aggressive=True)
instead of:
monkey.patch_all()
Is there a way to use monkey patching without getting this error? If I forgot
to give any information on the problem definition, please let me know and I
will update it immediately.
And here is my daemonized server code:
class App():
def __init__(self):
self.stdin_path = 'xxx.txt'
self.stdout_path = 'xxx.txt'
self.stderr_path = 'xxx.txt'
self.pidfile_path = 'xxx.pid'
self.pidfile_timeout = 5
def run(self):
from socketio.server import SocketIOServer
from gevent import monkey
monkey.patch_all()
while True:
try:
bind = ("xx.xx.xx.xx", xxxx)
handler = get_handler()
server = SocketIOServer(bind, handler, resource="socket.io",policy_server=False)
server.serve_forever()
except:
server.kill()
app = App()
logger = logging.getLogger("DaemonLog")
logger.setLevel(logging.INFO)
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
handler = logging.FileHandler("xxx.log")
handler.setFormatter(formatter)
logger.addHandler(handler)
daemon_runner = runner.DaemonRunner(app)
daemon_runner.daemon_context.files_preserve=[handler.stream]
daemon_runner.do_action()
Answer: I have found a solution for my problem at:
<https://github.com/abourget/gevent-socketio/issues/174>
It is actually related to unclosed database connections in greenlets.
After adding a similar exception_handler_decorator to my namespace class I
have not seen the max_user_connections errors.
from django.db import close_old_connections # Django 1.6
class MyNamespace(BaseNamespace):
...
def exception_handler_decorator(self, fun):
def wrap(*args, **kwargs):
try:
return fun(*args, **kwargs)
finally:
close_old_connections()
return wrap
...
|
python array processing: how to generate new values on a pixel-by-pixel basis
Question: My problem is about array manipulation in Python. What I want to do is
analyze a multiband raster with Python and output another raster depending on
the results. Real case example: I have a raster with 3 bands and I want to see
if the different band values for the same pixel are in increasing or
decreasing order (from band 1 to band 3). If the values are in increasing
order, a new value, lets say "1" will be assigned to that particular pixel in
a new output raster (or can be also a new band in the same raster). If the
values are in decreasing order, the value "2" will be assigned. I have defined
the functions and read the array values, but I'm stuck on building
lists for every pixel and assigning new values to a new raster.
I will post the code that I have until now:
import arcpy
import numpy as np
arcpy.env.workspace = r"\\workspace"
def decrease(A):
return all(A[i] >= A[i+1] for i in range(len(A)-1)) # function to check if a list has decreasing values
def increase(A):
return all(A[i] <= A[i+1] for i in range(len(A)-1)) # function to check if a list has increasing values
def no0(A,val):
return[value for value in A if value !=val] # function to check if a list has 0 values, and if so, strip them
    myArray = arcpy.RasterToNumPyArray(r"raster_3bands.img") # my 3-band raster
print myArray.shape # to see how many dimensions the array has
for z in range(3): # where z is the number of bands, in my case 3
for y in range(6): #the number of columns, i.e. the vertical axis, in my case 6
for x in range(6): #the number of rows,i.e. the horizontal axis, in my case 6.
a = myArray[z,y,x] # get all the values for the 3 bands
print "The value for axes: ", z,y,x, " is: ",a
What I am missing:
1. the way to build for every pixel a list which will store the three
corresponding band values, so that I can later run the decrease and increase
functions on those lists
2. a way to assign new values, pixel by pixel, and to store that array as a
new raster.
Many thanks for your patience to read this,
Bogdan
So, here is the code for a 3-band raster:
import arcpy, os
import numpy as np
myArray = arcpy.RasterToNumPyArray(r"\\nas2\home\bpalade\Downloads\test.img")
nbands = 3
print "Array dimensions are: ", myArray.shape
print "\n"
print "The array is: "
print "\n"
print myArray
increasing_pixels = np.product([(myArray[i] <= myArray[i+1]) for i in range(nbands-1)],axis=0)
decreasing_pixels = np.product([(myArray[i] >= myArray[i+1]) for i in range(nbands-1)],axis=0)
new_band = np.zeros_like(myArray[0])
new_band[increasing_pixels] = 1
new_band[decreasing_pixels] = 2
print "\n"
print "The new array is: "
print "\n"
print new_band
The result is attached as jpeg
I do not see the jpeg, so I copy/paste from my results window:
Array dimensions are: (3L, 4L, 4L)
The array is:
[[[60 62 62 60]
[64 64 63 60]
[62 62 58 55]
[59 57 54 50]]
[[53 55 55 55]
[57 57 56 55]
[55 55 51 50]
[52 50 47 45]]
[[35 37 37 36]
[39 39 38 36]
[37 37 33 31]
[34 32 29 26]]]
The new array is:
[[1 1 1 1]
[2 2 2 2]
[0 0 0 0]
[0 0 0 0]]
>>>
Answer: The first thing to think about here is the structure of the code. When using
numpy (and especially in a case like this) it is rare that you need explicit
loops. You can simply apply vectorized operations along the axes.
In this case you can use array broadcasting to find which sets of pixels are
increasing and decreasing.
increasing_pixels = (myArray[0] <= myArray[1]) * (myArray[1] <= myArray[2])
This will give you a boolean array where `True` pixels are those which
increase across the three bands. The same can be done with decreasing pixels
(this time `True` implies pixels that decrease across the band):
decreasing_pixels = (myArray[0] >= myArray[1]) * (myArray[1] >= myArray[2])
These arrays can act as a mask for your raster, allowing you to identify the
pixels whose value you wish to change. Below is an example of creating a new
band based on the increasing or decreasing values of pixels across the
original three bands.
new_band = np.zeros_like(myArray[0])
new_band[increasing_pixels==True] = 1
new_band[decreasing_pixels==True] = 2
**EDIT:**
For an arbitrary number of bands, the above comparisons can be replaced by:
increasing_pixels = np.product([(myArray[i] <= myArray[i+1]) for i in range(nbands-1)],axis=0)
decreasing_pixels = np.product([(myArray[i] >= myArray[i+1]) for i in range(nbands-1)],axis=0)
|
python script speed improvements
Question: For my robot I am analyzing laser range data. I need to analyze a lot of
samples per second, so speed is required. I know Python is not the right
language based on this - but I don't want to switch for now as I am in the
prototyping phase (will see if I ever get out of it :-) ).
At the moment I am stuck on squeezing more speed out of the analyzing code I
have.
I pulled out the relevant code and created a small test. It would be brilliant
if someone could give me some hints on where to improve speed in this test
script.
from math import degrees, radians, sin, cos, fabs
import time
class NewRobotMap(object):
def __init__(self, sizeX, sizeY, Resolution, RobotPosX, RobotPosY, RobotTheta, ServoPos, mapMaxOcc, mapMaxFree, OccValue, EmptyValue):
self.sizeX = sizeX
self.sizeY = sizeY
self.RobotPosX = int(RobotPosX)
self.RobotPosY = int(RobotPosY)
self.mapResolution = int(Resolution)
self.StartPosX = int(RobotPosX)
self.StartPosY = int(RobotPosY)
self.RobotTheta = float(RobotTheta)
self.EmptyValue = EmptyValue
self.ServoPos = ServoPos
self.mapMaxOcc = mapMaxOcc
self.mapMaxFree = mapMaxFree
self.mapOccValue = OccValue
self.RobotPosOldX = ""
self.RobotPosOldY = ""
def clear(self):
self.RobotMap = [[self.EmptyValue for i in xrange(self.sizeY)] for j in xrange(self.sizeX)]
def updateMap(self ,x ,y , Val):
oldval = self.RobotMap[x][y]
self.RobotMap[x][y]=self.RobotMap[x][y] + Val
if self.RobotMap[x][y] > self.mapMaxOcc:
self.RobotMap[x][y] = self.mapMaxOcc
elif self.RobotMap[x][y] < self.mapMaxFree:
self.RobotMap[x][y] = self.mapMaxFree
return oldval, self.RobotMap[x][y]
def setOcc(self,x,y):
self.RobotMap[x][y] = self.mapMaxOcc
def updateRobot(self,theta,x,y):
robotThetaold=self.RobotTheta
self.RobotTheta = float(theta)
self.RobotPosX = int(round(self.StartPosX + float(int(x)/self.mapResolution), 0))
self.RobotPosY = int(round(self.StartPosY - float(int(y)/self.mapResolution),0))
if x != self.RobotPosOldX or y != self.RobotPosOldX:
self.RobotPosOldX = x
self.RobotPosOldY = y
return True
else:
self.RobotPosOldX = x
self.RobotPosOldY = y
return False
def getRobotPos(self):
return self.RobotPosX, self.RobotPosY
def display(self):
s = [[str(e) for e in row] for row in self.RobotMap]
lens = [len(max(col, key=len)) for col in zip(*s)]
fmt = '\t'.join('{{:{}}}'.format(x) for x in lens)
table = [fmt.format(*row) for row in s]
print '\n'.join(table)
def updateServoPos(self, newServoPos):
self.ServoPos = newServoPos
templateData = {
'MapWidth' : 800,
'MapHeight': 600,
'StartPosX' : 500,
'StartPosY' : 300,
'StartTheta' : 0,
'Resolution' : 5,
'mapThresholdFree' : 126,
'mapThresholdOcc' : 130, #169
'EmptyValue' : 128,
'mapMaxOcc' : 137,
'mapMaxFree' : 119,
'ServoPos' : 0,
'CurrentPosX' : 0,
'CurrentPosY' : 0,
'CurrentTheta' : 0,
'SafeZone' : 10
}
templateData["MapHeight"] = templateData["MapHeight"] / templateData["Resolution"]
templateData["MapWidth"] = templateData["MapWidth"] / templateData["Resolution"]
templateData["StartPosX"] = templateData["StartPosX"] / templateData["Resolution"]
templateData["StartPosY"] = templateData["StartPosY"] / templateData["Resolution"]
def updateSonarCalcMapVal(val):
mapThresholdFree = templateData["mapThresholdFree"]
mapThresholdOcc = templateData["mapThresholdOcc"]
#oldval
if val[0] <= mapThresholdFree:
oldval = 0
elif mapThresholdFree < val[0] < mapThresholdOcc:
oldval = 1
elif val[0] >= mapThresholdOcc:
oldval = 2
# newval
if val[1] <= mapThresholdFree:
newval = 0
elif mapThresholdFree < val[1] < mapThresholdOcc:
newval = 1
elif val[1] >= mapThresholdOcc:
newval = 2
if oldval != newval:
return newval
else:
return 'n'
def dur( op=None, clock=[time.time()] ):
if op != None:
duration = time.time() - clock[0]
print '%s finished. Duration %.6f seconds.' % (op, duration)
clock[0] = time.time()
def updateIRWrite(RobotPos, coord, updateval):
XtoUpdate=RobotPos[0] + coord[0]
YtoUpdate=RobotPos[1] - coord[1]
val = map.updateMap(XtoUpdate, YtoUpdate , updateval)
newval=updateSonarCalcMapVal(val)
########### main Script #############
map=NewRobotMap(templateData["MapWidth"],templateData["MapHeight"], templateData["Resolution"], templateData["StartPosX"],templateData["StartPosY"], templateData["StartTheta"], templateData["ServoPos"],templateData["mapMaxOcc"],templateData["mapMaxFree"],templateData["mapThresholdOcc"],templateData["EmptyValue"])
map.clear()
dur()
for x in xrange(0,10001*40):
updateIRWrite((100,100), (10,10), 1)
dur("loops")
I tried a numpy array as self.RobotMap in the NewRobotMap class/object. But
this was much slower.
Answer: A few tips.
# Minimize overly deep indirection
Your code here:
def updateMap(self ,x ,y , Val):
oldval = self.RobotMap[x][y]
self.RobotMap[x][y]=self.RobotMap[x][y] + Val
if self.RobotMap[x][y] > self.mapMaxOcc:
self.RobotMap[x][y] = self.mapMaxOcc
elif self.RobotMap[x][y] < self.mapMaxFree:
self.RobotMap[x][y] = self.mapMaxFree
return oldval, self.RobotMap[x][y]
keeps repeating `self.RobotMap[x][y]`, which requires 4 levels of lookups
to get the value (self -> RobotMap -> [x] -> [y])
This can be optimized:
# In place update
old:
self.RobotMap[x][y]=self.RobotMap[x][y] + Val
new (saving a second lookup of the existing value)
self.RobotMap[x][y] += Val
# Use local variable instead of deeply nested structure
def updateMap(self ,x ,y , Val):
oldval = self.RobotMap[x][y]
newval = oldval + Val
if newval > self.mapMaxOcc:
newval = self.mapMaxOcc
elif newval < self.mapMaxFree:
newval = self.mapMaxFree
return oldval, newval
Note that your old `return oldval, self.RobotMap[x][y]` not only returns
a value; you have already modified `self.RobotMap[x][y]` in place anyway (as
the nested list is mutable), so if you rely on that, you could be surprised.
## Using global variables instead of the templateData dictionary
Changing the dictionary into global variables sped up the run a bit, as it
removed one level of indirection. I know it looks nasty, but this can happen
with optimization.
# Skip returning `self.RobotMap[x][y]`
Consider not returning `self.RobotMap[x][y]` if it is not necessary,
especially when you have already changed that value in place.
# Quick clear
change original:
    def clear(self):
        self.RobotMap = [[self.EmptyValue for i in xrange(self.sizeY)] for j in xrange(self.sizeX)]
to:
    def clear(self):
        # building each row with n * [value] is fine (ints are immutable), but the
        # outer list must stay a comprehension: sizeX * [row] would alias one row
        self.RobotMap = [self.sizeY * [self.EmptyValue] for j in xrange(self.sizeX)]
My tests show about twice as fast execution for x = 3, y = 5; larger sizes
could be even better.
# Modified code - from 0.790581 to 0.479875 seconds
from math import degrees, radians, sin, cos, fabs
import time
templ_MapWidth = 800
templ_MapHeight = 600
templ_StartPosX = 500
templ_StartPosY = 300
templ_StartTheta = 0
templ_Resolution = 5
templ_mapThresholdFree = 126
templ_mapThresholdOcc = 130
templ_EmptyValue = 128
templ_mapMaxOcc = 137
templ_mapMaxFree = 119
templ_ServoPos = 0
templ_CurrentPosX = 0
templ_CurrentPosY = 0
templ_CurrentTheta = 0
templ_SafeZone = 10
templ_MapHeight = templ_MapHeight / templ_Resolution
templ_MapWidth = templ_MapWidth / templ_Resolution
templ_StartPosX = templ_StartPosX / templ_Resolution
templ_StartPosY = templ_StartPosY / templ_Resolution
class NewRobotMap(object):
def __init__(self, sizeX, sizeY, Resolution, RobotPosX, RobotPosY, RobotTheta, ServoPos, mapMaxOcc, mapMaxFree, OccValue, EmptyValue):
self.sizeX = sizeX
self.sizeY = sizeY
self.RobotPosX = int(RobotPosX)
self.RobotPosY = int(RobotPosY)
self.mapResolution = int(Resolution)
self.StartPosX = int(RobotPosX)
self.StartPosY = int(RobotPosY)
self.RobotTheta = float(RobotTheta)
self.EmptyValue = EmptyValue
self.ServoPos = ServoPos
self.mapMaxOcc = mapMaxOcc
self.mapMaxFree = mapMaxFree
self.mapOccValue = OccValue
self.RobotPosOldX = ""
self.RobotPosOldY = ""
def clear(self):
        # build each row independently so the rows are not aliased copies of one list
        self.RobotMap = [self.sizeY * [self.EmptyValue] for j in xrange(self.sizeX)]
def updateMap(self, x, y, Val):
oldval = self.RobotMap[x][y]
newval = oldval + Val
if newval < self.mapMaxFree:
return oldval, self.mapMaxFree
if newval > self.mapMaxOcc:
return oldval, self.mapMaxOcc
return oldval, newval
def setOcc(self, x, y):
self.RobotMap[x][y] = self.mapMaxOcc
def updateRobot(self, theta, x, y):
robotThetaold = self.RobotTheta
self.RobotTheta = float(theta)
self.RobotPosX = int(round(self.StartPosX + float(int(x)/self.mapResolution), 0))
self.RobotPosY = int(round(self.StartPosY - float(int(y)/self.mapResolution), 0))
if x != self.RobotPosOldX or y != self.RobotPosOldX:
self.RobotPosOldX = x
self.RobotPosOldY = y
return True
else:
self.RobotPosOldX = x
self.RobotPosOldY = y
return False
def getRobotPos(self):
return self.RobotPosX, self.RobotPosY
def display(self):
s = [[str(e) for e in row] for row in self.RobotMap]
lens = [len(max(col, key=len)) for col in zip(*s)]
fmt = '\t'.join('{{:{}}}'.format(x) for x in lens)
table = [fmt.format(*row) for row in s]
print '\n'.join(table)
def updateServoPos(self, newServoPos):
self.ServoPos = newServoPos
def updateSonarCalcMapVal(org, new):
mapThresholdFree = templ_mapThresholdFree
mapThresholdOcc = templ_mapThresholdOcc
#oldval
if org <= mapThresholdFree:
oldval = 0
elif mapThresholdFree < org < mapThresholdOcc:
oldval = 1
elif org >= mapThresholdOcc:
oldval = 2
# newval
if new <= mapThresholdFree:
newval = 0
elif mapThresholdFree < new < mapThresholdOcc:
newval = 1
elif new >= mapThresholdOcc:
newval = 2
if oldval != newval:
return newval
else:
return 'n'
def dur(op=None, clock=[time.time()]):
if op != None:
duration = time.time() - clock[0]
print '%s finished. Duration %.6f seconds.' % (op, duration)
clock[0] = time.time()
def updateIRWrite(RobotPos, coord, updateval):
XtoUpdate = RobotPos[0] + coord[0]
YtoUpdate = RobotPos[1] - coord[1]
newval = updateSonarCalcMapVal(*mymap.updateMap(XtoUpdate, YtoUpdate, updateval))
########### main Script #############
mymap = NewRobotMap(templ_MapWidth, templ_MapHeight, templ_Resolution, templ_StartPosX, templ_StartPosY, templ_StartTheta, templ_ServoPos, templ_mapMaxOcc, templ_mapMaxFree, templ_mapThresholdOcc, templ_EmptyValue)
mymap.clear()
dur()
for x in xrange(0, 10001*40):
updateIRWrite((100, 100), (10, 10), 1)
dur("loops")
# Conclusions
The code definitely needs a review for correctness. E.g. there are methods
which are never used at all, and other calls whose returned value is never
used.
But some optimizations can be shown. Generally, the following is good advice:
1. Make your code run correctly first
2. Clarify what an acceptable speed is; do not optimize if it is not necessary
3. Measure, profile
4. Start optimizing in the busiest loops, where the chances to speed things up are best. In them, each line of code counts.
|
Log time of execution for each line in Python script?
Question: I have a Python script which executes fairly straightforwardly from the first to
the last line with plain logic. The script's performance is very different on a
couple of machines with different environments, so I am trying to find out
which line of the code gives me a hard time.
I have seen [cProfile](https://docs.python.org/2/library/profile.html) and
some [questions](http://stackoverflow.com/questions/1557571/how-to-get-time-
of-a-python-program-execution) (`timeit`) on recording time of execution for
the whole function. I am, however, interested in finding out the time Python
spent on executing each line of the code in my script.
The source code:
import math
result = pow(2,5)
newlist = []
newlist.append(result)
print newlist
What I want to get (row number - time it took to execute in seconds):
1 - 0.04
2 - 0.01
3 - 0.06
4 - 0.08
5 - 0.1
**EDIT** : I have tried to use
[hotshot](https://docs.python.org/2/library/hotshot.html), a module available
in the standard library; however, I am getting an error message.
The source code I run:
import hotshot
import hotshot.stats
logfile = r"C:\testlog.prof"
prof_obj = hotshot.Profile(logfile,lineevents=True,linetimings=False)
prof_obj.start()
a = 1000
b = 2000
c = a + b
print c
prof_obj.stop()
prof_obj.close()
stats_obj = hotshot.stats.load(logfile) # getting the error shown below on this line
stats_obj.strip_dirs()
stats_obj.sort_stats('time', 'calls')
stats_obj.print_stats(20)
    for event in log:
File "C:\Python27\ArcGIS10.2\Lib\hotshot\log.py", line 115, in next
    filename, firstlineno, funcname = self._stack[-1]
IndexError: list index out of range
**EDIT:** I have found another example of using hotshot, from the Python
Essential Reference book.
import hotshot
import hotshot.stats
def function_to_call():
print "AA"
print "BB"
logfile = r"C:\testlog.prof"
prof_obj = hotshot.Profile(logfile,lineevents=True,linetimings=True)
prof_obj.runcall(function_to_call)
prof_obj.close()
stats_obj = hotshot.stats.load(logfile)
stats_obj.sort_stats('time', 'calls')
stats_obj.print_stats()
However, this does not give me any information on execution per line of code,
only per function call:
5 function calls in 0.012 seconds
Ordered by: internal time, call count
ncalls tottime percall cumtime percall filename:lineno(function)
4 0.012 0.003 0.012 0.003 <string>:6(write)
1 0.000 0.000 0.012 0.012 c:\gis\temp\simple_hotshot.py:11(function_to_call)
0 0.000 0.000 profile:0(profiler)
Answer: There is a [line profiler
module](https://pypi.python.org/pypi/line_profiler/1.0b2) that might do what
you are looking for. It also has nice decorators that you can put over
functions you want to profile on a line-by-line level. You can read the docs
[here](http://pythonhosted.org/line_profiler/).
Also take a look at
[`hotshot`](https://docs.python.org/2/library/hotshot.html). It looks like you
can set the `linetimings` parameter to get what you are looking for. I'm not
sure `hotshot` will be supported in future versions, though.
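A minimal sketch of the line_profiler workflow: decorate the function you care about and run the script through kernprof, which injects the `@profile` decorator into builtins for you:
@profile  # provided by kernprof at runtime; no import needed
def compute():
    result = pow(2, 5)
    newlist = []
    newlist.append(result)
    return newlist
compute()
Run it with `kernprof.py -l -v yourscript.py` (newer releases install the command as plain `kernprof`) to get per-line hit counts and timings.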
|
ImportError: Critical packages are not installed in Canopy Python
Question: I am trying to install a module called debacl, which can be found at
<https://github.com/CoAxLab/DeBaCl>, on Windows 64-bit.
I am using the install command to install the module:
In [18]: run -i setup.py install
running install
running build
running build_py
running build_scripts
running install_lib
running install_scripts
running install_egg_info
Removing C:\Users\vjons\AppData\Local\Enthought\Canopy\User\Lib\site-packages\debacl-0.2.0-py2.7.egg-info
Writing C:\Users\vjons\AppData\Local\Enthought\Canopy\User\Lib\site-packages\debacl-0.2.0-py2.7.egg-info
The folder debacl then pops up in the Canopy\User\Lib\site-packages folder.
But when I try to import the newly installed module I get the following error
message:
In [3]: import debacl
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-3-5ef0bbe97964> in <module>()
----> 1 import debacl
C:\Users\vjons\AppData\Local\Enthought\Canopy\User\lib\site-packages\debacl\__init__.py in <module>()
1 # main debacl __init__.py
2
----> 3 import geom_tree
4 import cd_tree
5 import utils
C:\Users\vjons\AppData\Local\Enthought\Canopy\User\lib\site-packages\debacl\geom_tree.py in <module>()
24 import utils as utl
25 except:
---> 26 raise ImportError("Critical packages are not installed.")
27
28 try:
ImportError: Critical packages are not installed.
Okay, I guess this means that the utils package has to be installed in order
for debacl to be used. But utils is included in the debacl folder:
In [4]: ls C:\Users\vjons\AppData\Local\Enthought\Canopy\User\Lib\site-packages\debacl
Volume in drive C has no label.
Volume Serial Number is 423B-C99D
Directory of C:\Users\vjons\AppData\Local\Enthought\Canopy\User\Lib\site-packages\debacl
2014-05-26 16:04 72 __init__.py
2014-05-26 16:05 255 __init__.pyc
2014-05-26 16:04 25 521 cd_tree.py
2014-05-26 16:14 23 466 cd_tree.pyc
2014-05-26 16:04 50 373 geom_tree.py
2014-05-26 16:14 47 087 geom_tree.pyc
2014-05-26 16:05 <DIR> test
2014-05-26 16:04 21 488 utils.py
2014-05-26 16:14 22 480 utils.pyc
Am I missing something?
Answer: The problem is actually not about absolute imports, but that you are missing
the package python-igraph. Two root causes:
1) the `setup.py` file in debacl fails to import setuptools (should be the
first line). But that would be trivial to work around (just install python-
igraph separately), except that...
2) without Microsoft Visual C++ 2008 installed on your system, you would not
be able to build python-igraph as required.
The easiest solution (which I have just tested successfully) is to:
1) Ensure that Canopy User Python is the default Python on your system, using
the Canopy Preferences Menu (you may need to set this, exit Canopy, then
restart Canopy to check that it was set).
2) Download `python-igraph-0.7.0.win-amd64-py2.7.exe` from
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#python-igraph>
3) Run the above installer. This will install python-igraph
You should then be able to import debacl successfully.
|
ValueError: invalid literal for int() with base 10: '0.00'
Question: I have a string in Python like that:
l = "0.00 0.00"
And I want to convert it in a list of two numbers.
The following instruction does not work:
int(l.strip(" \n").split(" ")[0])
Apparently the function `int()` can convert string like `0` or `00` to an int,
but it does not work with `0.0`.
Is there a way to convert `0.0`?
A.
Answer: The easiest way is to first convert to `Decimal`:
from decimal import Decimal
int(Decimal('0.00'))
If you are sure that the fractional part is always zero, then it is faster to
use `float`:
int(float('0.00'))
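Applied to the original string, a small sketch that produces the list of two
numbers:

    l = "0.00 0.00"
    numbers = [int(float(token)) for token in l.split()]
    # numbers is now [0, 0]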
|
Python : concat arrays of grouped lines
Question: I have the following array:
[
["String 0", [1, 2]],
["String 1", [1, 3]],
["String 2", []],
["String 3", [2]],
["String 1", [1, 2]],
["String 2", [0]]
]
I need to transform it into an array with unique Strings (first column) and
concatenated second column.
Here is what output should look like:
[
["String 0", [1, 2]],
["String 1", [1, 3, 1, 2]],
["String 2", [0]],
["String 3", [2]]
]
In this case, Strings are unique and `"String 1"` has a second column `"1, 3,
1, 2"` and `"String 2"` has second column `"0"`.
I know how to unique an array but not how to group and concatenate at the same
time.
Thank you.
Answer: As IanAuld suggests, a dictionary is likely the best way to handle this.
Try the following
arr_of_arrs = [
["String 0", [1, 2]],
["String 1", [1, 3]],
["String 2", []],
["String 3", [2]],
["String 1", [1, 2]],
["String 2", [0]]
]
from collections import defaultdict
arrs = defaultdict(list)  # missing keys start out as empty lists
for arr in arr_of_arrs:
    arrs[arr[0]] += arr[1]
How to convert the dictionary back to your array of arrays is left as an
exercise.
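For reference, one possible way to finish that exercise, assuming the result
should be sorted by the string key:

    result = [[key, arrs[key]] for key in sorted(arrs)]
    # [['String 0', [1, 2]], ['String 1', [1, 3, 1, 2]], ['String 2', [0]], ['String 3', [2]]]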
|
Python 2.x - References and pointers
Question: I have a question. How can I get "reference-pointer effect" in Python 2.x?
I have a class containing 2 dictionaries - 1 with a character representation
and 1 with an integer representation (retrieved with `ord(character)`). The
main problem is that I will print them a lot of times, so converting them on
the fly is a bad idea (I think). However, switching between them would be useful.
class Character_Dict(object):
def __init__(self, key_length):
self.char_dict = {}
self.ASCII_dict = {}
self.key_length = key_length
self.dictionary = <<here should be a dictionary in usage>>
I could just assign wanted one to `self.dictionary`, but when assigning for
example `self.char_dict` any change in `self.dictionary` would not apply to
`self.char_dict` which in my case is a point of whole this construction.
Is there any mechanism in Python 2.x which would allow to do things like that?
[EDIT 1]: Dictionaries contains lists of symbols encrypted with usage of n-th
byte of some key:
n [list of character encrypted with n-th byte of key]
0 ['\xc5', '\x9a', '\xa5', '\x8d', '\xc8', '\xc8', '\x92', '\x9b', '\x82', '\x92', '\x86']
1 ['a', 'm', '.', 'a', '%', ',', ' ', '*', '$', ' ', '(']
2 ['\x18', '~', '4', '4', '?', ',', ',', '0', '9', ',', '\xe7']
3 ['\xe8', '\xe2', '\xe8', '\xec', ':', '\xe6', '\xe6', '0', '\xe6', '\xf3', '\xa9']
...
255 ['\x12', 'S', '\xcc', '_', '\xc0', 'S', '\x01', 'S', 'S', 'S']
[EDIT 2]: My encryption key has 256 bytes. Message which was encrypted by
usage of that key is 2688 bytes long. That means that encryption key was
repeated 10.5 times.
Think about changing the 3rd letter that was encrypted with the 10th byte of
the key. That is byte (3-1)*256+10. Instead of reading that letter directly, I
can simply read the whole stream and use my class:
fileXOR = open('2011061.xor', 'r')
key_length = 256
number_of_bytes = os.path.getsize('2011061.xor')
print number_of_bytes
amount_of_key_repetition = number_of_bytes/key_length.__float__()
print "Key has been repeated", amount_of_key_repetition, "times"
character_dict = Character_Dict(key_length)
for counter_x in range(0, key_length):
character_dict.dictionary[counter_x] = []
print character_dict
for current_byte in range(0, number_of_bytes):
read_character = fileXOR.read(1)
character_dict.dictionary[current_byte % key_length].append(read_character)
fileXOR.close()
and I can access my character simply by:
character_dict.dictionary[10][2]
Now, imagine that I need to change `character_dict.dictionary[10][2]`. In the
constructor I had assigned `self.char_dict` to `self.dictionary`. Changing
`object_name.dictionary` will not modify `object_name.char_dict` (AFAIK).
I want `object_name.dictionary` to be sometimes a ASCII representation, and
sometimes an integer representation. That would reduce a lot of code and
simplify any changes made into ciphertext.
Answer: In Python, the [weakref](https://docs.python.org/2/library/weakref.html)
module lets you store references to objects. If the source object gets deleted
or garbage collected, this can be detected by calling the weakref.
Example from documentation:
>>> import weakref
>>> class Object:
... pass
...
>>> o = Object()
>>> r = weakref.ref(o)
>>> o2 = r()
>>> o is o2
True
If the referent no longer exists, calling the reference object returns None:
>>> del o, o2
>>> print r()
None
|
GUI - tkinter Making a button
Question: I want to make a button that will allow the user to browse and select a file
and assign the choice to a variable. This is what I have; I know it is wrong,
but I can't seem to get something that works. Please give me tips to improve,
thanks.
import tkinter
#Window
window = tkinter.Tk()
window.title("Title")
window.geometry("300x400")
#Label
fileSelectLBL = tkinter.Label(window, text="Please select your file:")
fileSelectLBL.pack()
#Button
filename = tkinter.Button(window, text="Browse", command = askopenfilename( filetypes = (("Text Files","*.txt"))))
filename.pack()
#Main Loop
window.mainloop()
I get this error when running it:
filename = Button(window, text="Browse", command = window.load_file, width = 10)
File "/usr/lib/python3.4/tkinter/__init__.py", line 1886, in __getattr__
return getattr(self.tk, attr)
AttributeError: 'tkapp' object has no attribute 'load_file'
And when clicking on the button I get this error:
Exception in Tkinter callback
Traceback (most recent call last):
File "/usr/lib/python3.4/idlelib/run.py", line 121, in main
seq, request = rpc.request_queue.get(block=True, timeout=0.05)
File "/usr/lib/python3.4/queue.py", line 175, in get
raise Empty
queue.Empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.4/tkinter/__init__.py", line 1490, in __call__
return self.func(*args)
TypeError: load_file() missing 1 required positional argument: 'self'
I have updated it to this:
import tkinter
from tkinter import *
from tkinter.filedialog import askopenfilename
from tkinter.messagebox import showerror
from tkinter import filedialog
#Window
window = tkinter.Tk()
window.title("Title")
window.geometry("300x400")
def load_file():
fname = askopenfilename(filetypes=(("Template files", "*.tplate"),
("HTML files", "*.html;*.htm"),
("All files", "*.*") ))
if fname:
try:
print("""here it comes: self.settings["template"].set(fname)""")
except: # <- naked except is a bad idea
showerror("Open Source File", "Failed to read file\n'%s'" % fname)
return
window.button = Button(window, text="Browse", command=load_file(), width=10)
#Label
fileSelectLBL = tkinter.Label(window, text="Please select your file:")
fileSelectLBL.pack()
#Button
def load_file(self):
fname = askopenfilename(filetypes=(("Text files", "*.txt"),
("All files", "*.*") ))
filename = tkinter.Button(window, text="Browse", command = load_file)
filename.pack()
filename = Button(window, text="Browse", command = window.load_file, width = 10)
#Main Loop
window.mainloop()
Now this opens the file dialog, but it opens as soon as the program is run. I
would like it to open only when the browse button is pressed. What can I do to
fix this?
Answer: You needed to fix your imports, and it's **HIGHLY** recommended to use OOP
with GUIs.
from tkinter import *
from tkinter.filedialog import askopenfilename
class Example(Frame):
def __init__(self, parent):
Frame.__init__(self, parent)
self.parent = parent
self.initUI()
def initUI(self):
self.parent.title("File dialog")
#Label
self.fileSelectLBL = Label(self, text="Please select your file:")
self.fileSelectLBL.pack()
#Button
self.filename = Button(self, text="Browse", command = self.load_file)
self.filename.pack()
def load_file(self, ftypes = None):
        ftypes = ftypes or (("Text Files", "*.txt"),)  # trailing comma: filetypes expects a sequence of (label, pattern) pairs
fl = askopenfilename(filetypes = ftypes)
if fl != '':
print(fl)
def main():
window = Tk()
ex = Example(window)
window.geometry("300x400")
#Main Loop
window.mainloop()
if __name__ == '__main__':
main()
|
In Python 3.4, what is best/easiest way to compare paths?
Question: Using this code in Python 3.4 on Ubuntu 14.04 does not return `True`:
import pathlib
path1 = pathlib.Path("/tmp")
path2 = pathlib.Path("/tmp/../tmp")
print(path1 == path2)
# gives False
print(path1 is path2)
# gives False
But normally "/tmp" and "/tmp/../tmp" are the same folder. So how to ensure
that the comparisons return True?
Answer: To compare you should
[resolve](https://docs.python.org/dev/library/pathlib.html#pathlib.Path.resolve)
the paths first or you can also use
[os.path.samefile](https://docs.python.org/dev/library/os.path.html#os.path.samefile).
Example:
print(path1.resolve() == path2.resolve())
# True
import os
print(os.path.samefile(str(path1), str(path2)))
# True
By the way, `path1 is path2` checks whether `path1` is the same object as
`path2`, rather than comparing the actual paths.
|
Using Python subprocess.call to call a bash script with arguments unsuccessfully
Question: I'm working on a project for managing blocks of IP addresses in a lab
environment where we have a limited number of addresses available. For
example, users are allowed to create virtual machines in the lab as needed,
but should assign an IP from, say, an available range of 50.
The program will empower users to choose available IPs by pinging the range
sequentially and reporting which IP addresses respond.
I'm stuck trying to find a way for ping to respond with a 1 or 0. I might be
going about it all wrong. I've written a bash shell script which runs fine on
its own and returns 1 or 0. I'm now attempting to use call from the subprocess
module in Python to run the script, but I never get to the second argument. In
addition, the second argument, when converted to a string, gives me syntax errors.
###Shell script###
#!/bin/bash
if ping -c 1 $1 >/dev/null; then
echo "1"
else
echo "0"
fi
###Python code calling ./ping.sh###
import subprocess
host = str(input('Host Please: '))
output = subprocess.call(['./ping.sh', host], shell=True)
###Output of python code###
jon@WartHead ~/Desktop/Python/Python-3.3.0 $ python ip.py
Host Please: 127.0.0.1
Traceback (most recent call last):
File "ip.py", line 2, in <module>
host = str(input('Host Please: '))
File "<string>", line 1
127.0.0.1
^
SyntaxError: invalid syntax
This is one of my first python scripts written, any help is much welcomed.
Jon
Answer: I would do something like this:

import commands

class CheckAlive:
    def __init__(self, firstIP, lastIP):
        for checkIP in range(firstIP, lastIP):
            host = '127.0.0.' + str(checkIP)
            # -c 1 sends a single ping; getstatusoutput returns (status, output)
            # and ping exits with status 0 only when the host replied
            status, output = commands.getstatusoutput('ping -c 1 ' + host)
            if status != 0:
                # these hosts are not alive
                print host + " host is not up."
            else:
                # these hosts are alive
                print host + " host is up."

CheckAlive(0, 254) # checks all hosts from 127.0.0.0 to 127.0.0.253
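For reference, the original subprocess approach can also be made to work. On
Python 2, `input()` evaluates what you type (which is what caused the
SyntaxError on `127.0.0.1`), so `raw_input()` is needed, and when passing a
list of arguments `shell=True` should be dropped. A sketch that skips the
shell script entirely and uses ping's exit status:

    import subprocess

    host = raw_input('Host Please: ')  # raw_input returns the text unevaluated

    # ping exits with status 0 only when the host replied
    with open('/dev/null', 'w') as devnull:
        status = subprocess.call(['ping', '-c', '1', host],
                                 stdout=devnull, stderr=devnull)
    print 1 if status == 0 else 0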
|
Alternative to Yield in Python
Question: Is there an alternative (even if longer) method of writing the `yield` part of
this code? I'm not very familiar with the function and would rather keep it
simple.
for i in range(0, len(lstInput) - intCount + 1):
if intCount == 1:
yield [lstInput[i]]
else:
current = lstInput[i]
remainder = lstInput[i+1:]
for rest in yieldLotto(remainder, intCount - 1):
yield [current] + rest
Answer: The alternative is to either embed the loops into your calling program, or to
change it to return a list. This means everything will be going into memory,
though.
def foo(lstInput, intCount):
    return_list = []
    for i in range(0, len(lstInput) - intCount + 1):
        if intCount == 1:
            return_list.append([lstInput[i]])
        else:
            current = lstInput[i]
            remainder = lstInput[i+1:]
            for rest in foo(remainder, intCount - 1):
                return_list.append([current] + rest)
    return return_list
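A quick usage sketch, assuming the original generator was producing the
`intCount`-item combinations of the input list:

    print foo([1, 2, 3, 4], 2)
    # [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]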
Honestly though, I think yield is better and it's an important feature of
Python. I suggest learning it.
|
Calculate time difference using python
Question: I am wondering if there is a way or builtin library available to find the
difference in time from two string input.
What I mean is, if I have 2 input strings:
1. '2013-10-05T01:21:07Z'
2. '2013-10-05T01:21:16Z'
how can I can calculate the difference in time and print it as output.
I know it sounds a bit silly but any help on this is appreciated.
Answer: You seem to be using an ISO 8601 formatted
[dateTime](http://books.xmlschemata.org/relaxng/ch19-77049.html). This format
is used in many places, including the GPS eXchange Format.
[-]CCYY-MM-DDThh:mm:ss[Z|(+|-)hh:mm]
## Using datetime:
import datetime
a = datetime.datetime.strptime("2013-10-05T01:21:07Z", "%Y-%m-%dT%H:%M:%SZ")
b = datetime.datetime.strptime("2013-10-05T01:21:16Z", "%Y-%m-%dT%H:%M:%SZ")
c = b - a
print(c)
**Advantages:**
* Built-in to Python Standard Library
* Object-oriented interface
**Disadvantages:**
* Need to manually handle other valid ISO 8601 representations such as '2013-10-05T01:21:16+00:00'
* Throws exception for leap seconds such as '2012-06-30T23:59:60Z'
## Using python-dateutil:
import dateutil.parser
a = dateutil.parser.parse("2013-10-05T01:21:07Z")
b = dateutil.parser.parse("2013-10-05T01:21:16Z")
c = b - a
print(c)
**Advantages:**
* Automagically handles pretty much any time format
**Disadvantages:**
* Needs python-dateutil library (pip install python-dateutil)
* Throws exception for leap seconds such as '2012-06-30T23:59:60Z'
## Using time.strptime and time.mktime as suggested by
[Alfe](http://stackoverflow.com/a/23886181/1703216)
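A minimal sketch of that approach (because both timestamps get the same
local-time offset from `time.mktime`, the offset cancels in the difference,
barring a DST change between the two instants):

    import time

    fmt = "%Y-%m-%dT%H:%M:%SZ"
    a = time.mktime(time.strptime("2013-10-05T01:21:07Z", fmt))
    b = time.mktime(time.strptime("2013-10-05T01:21:16Z", fmt))
    print(b - a)  # 9.0 seconds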
**Advantages:**
* Built-in to Python Standard Library
* Can parse leap seconds such as '2012-06-30T23:59:60Z'
**Disadvantages:**
* Need to manually handle other valid ISO 8601 representations such as '2013-10-05T01:21:16+00:00'
* Loss of one leap second between '2012-06-30T23:59:60Z' and '2012-07-01T00:00:00Z' (unavoidable without knowing when leap seconds will next occur)
|
python list modification to list of lists
Question: I am trying to learn python (just finished _Learn Python the Hard Way_ book!),
but I seem to be struggling a bit with lists. Specifically speaking, I have a
list like so:
x = ["/2.ext", "/4.ext", "/5.ext", "/1.ext"]
I would like to operate on this above list, so that it returns a list
(somehow!) like so:
y = [ ["/1.ext", "/2.ext"], ["/1.ext", "/2.ext", "/3.ext, "/4.ext"], ["/1.ext", "/2.ext", "/3.ext", "/4.ext", "/5.ext"], ["/1.ext"] ]
So, essentially, each element in `x` is now turned into a list of lists. I could
probably loop over `x`, store all the sequence lists in another list and then
merge them together - but it just seems like there must be a better way to do
it.
Would be grateful if someone could point me in the right direction to solve
this problem.
EDIT (taking into account Martijn's comments):
Specifically, I want to generate the intermediate filenames in a sequence,
ending at the number in each element of the list x.
Answer: You can do it as follows:
x = ["/2.ext", "/4.ext", "/5.ext", "/1.ext"]
print [['/{}.ext'.format(j) for j in range(1,int(i[1])+1)] for i in x]
[OUTPUT]
[['/1.ext', '/2.ext'], ['/1.ext', '/2.ext', '/3.ext', '/4.ext'], ['/1.ext', '/2.ext', '/3.ext', '/4.ext', '/5.ext'], ['/1.ext']]
This only works for digits up to 9. Here is the more general solution, which
works for any numbers:
import re
x = ["/2.ext", "/4.ext", "/5.ext", "/1.ext"]
print [['/{}.ext'.format(j) for j in range(1,int(re.search(r'\d+',i).group(0))+1)] for i in x]
|
An alternative to os.path.expanduser("~")?
Question: In python 2.7.x, `os.path.expanduser("~")` is broken for Unicode.
This means that you get an exception if the expansion of "~" has non-ascii
characters in it.
<http://bugs.python.org/issue13207>
How can I achieve the same, some other way?
(That is to say, how can I get the path to the user's "home directory", which
would usually be `C:\Users\user-name` on Win7)?
Answer: The bug report you link to includes a [workaround
script](http://bugs.python.org/file23442/expanduser.py), which retrieves the
relevant home directory information directly from the Win32 API:
import ctypes
from ctypes import windll, wintypes
class GUID(ctypes.Structure):
_fields_ = [
('Data1', wintypes.DWORD),
('Data2', wintypes.WORD),
('Data3', wintypes.WORD),
('Data4', wintypes.BYTE * 8)
]
def __init__(self, l, w1, w2, b1, b2, b3, b4, b5, b6, b7, b8):
"""Create a new GUID."""
self.Data1 = l
self.Data2 = w1
self.Data3 = w2
self.Data4[:] = (b1, b2, b3, b4, b5, b6, b7, b8)
def __repr__(self):
b1, b2, b3, b4, b5, b6, b7, b8 = self.Data4
return 'GUID(%x-%x-%x-%x%x%x%x%x%x%x%x)' % (
self.Data1, self.Data2, self.Data3, b1, b2, b3, b4, b5, b6, b7, b8)
# constants to be used according to the version on shell32
CSIDL_PROFILE = 40
FOLDERID_Profile = GUID(0x5E6C858F, 0x0E22, 0x4760, 0x9A, 0xFE, 0xEA, 0x33, 0x17, 0xB6, 0x71, 0x73)
def expand_user():
# get the function that we can find from Vista up, not the one in XP
get_folder_path = getattr(windll.shell32, 'SHGetKnownFolderPath', None)
if get_folder_path is not None:
        # ok, we can use the new function, which is recommended by MSDN
ptr = ctypes.c_wchar_p()
get_folder_path(ctypes.byref(FOLDERID_Profile), 0, 0, ctypes.byref(ptr))
return ptr.value
else:
# use the deprecated one found in XP and on for compatibility reasons
get_folder_path = getattr(windll.shell32, 'SHGetSpecialFolderPathW', None)
buf = ctypes.create_unicode_buffer(300)
get_folder_path(None, buf, CSIDL_PROFILE, False)
return buf.value
This `expand_user()` function returns the home directory for the current user
only.
|
Django Exception DoesNotExist
Question: I have a model that has Django's model fields AND python properties at the
same time. Ex:
**Edit2: Updated with actual models (sorry for the portuguese names)**
#On Produto.models.py
from django.db import models
from django.forms import ModelForm
from openshift.models import AbstractModel
from openshift.produto.models import app_name
class Produto(models.Model):
class Meta:
app_label = app_name
descricao = models.CharField(max_length=128)
und_choices = (
('UND', 'Unidade'),
('M', 'Metro'),
('Kg', 'Quilograma'),
('PC', 'Peça'),
)
unidade = models.CharField(max_length=3, choices=und_choices, default='UND')
#On Estoque.models.py
from django.db import models
from django.forms import ModelForm
from openshift.models import AbstractModel
from produto.models import Produto
from openshift.estoque.models import app_name
class Estoque(models.Model):
class Meta:
app_label = app_name
id = models.IntegerField(primary_key=True)
produtoid = models.ForeignKey(Produto, unique=True)
quantidade = models.DecimalField(max_digits=20, decimal_places=4)
valorunit = models.DecimalField(max_digits=20, decimal_places=4)
valortotal = models.DecimalField(max_digits=20, decimal_places=2)
_field_labels = {
"id" : r"Código",
"produtoid" : r"Produto",
"quantidade" : r"Quantidade",
"valorunit" : r"Valor Unitário",
"valortotal" : r"Valor Total"
}
_view_order = ['id', 'produtoid', 'quantidade', 'valorunit', 'valortotal']
@property
def Vars(self):
return {'field_labels': self._field_labels, 'view_order': self._view_order}
#on project.views.main_request
obj = get_model('Estoque', 'Estoque')().Vars #Here is where the Exception triggers.
If i try to call the property "Vars" before I call the save() method (during
the creation of the model's record), django keeps raising a DoesNotExist
exception, even though the "Vars" property isnt part of the django model.
Can anyone explain why is this happening?
**Edit: As Requested:**
Django Trace:
Traceback:

File "/home/gleal/cbengine/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
  111. response = callback(request, *callback_args, **callback_kwargs)
File "/home/gleal/cbengine/engine/wsgi/openshift/views.py" in req
  48. return browse(request, app, model, var_dict, False)
File "/home/gleal/cbengine/engine/wsgi/openshift/../openshift/subviews/browse.py" in _browse
  32. custom_vars['TableColspan'] = len(obj.Vars.get('VIEW_ORDER', {}))
File "/home/gleal/cbengine/engine/wsgi/openshift/../openshift/models.py" in Vars
  40. curr_vals[field.name] = getattr(self, field.name)
File "/home/gleal/cbengine/local/lib/python2.7/site-packages/django/db/models/fields/related.py" in __get__
  343. raise self.field.rel.to.DoesNotExist

Exception Type: DoesNotExist at /estoque/estoque/browse/
Exception Value:
Answer: Just figured out that the great villain was the `@property` decorator on the
method `Vars`.
It was making Django try to get values from the instance of the class, and in
some cases it might trigger queries on the DB made by Django's ORM
(basically, in cases where I tried to get values from the current instance).
I changed the @property to a @classmethod and everything worked like a charm.
Like this:
#before
@property
def Vars(self):
return {'field_labels': self._field_labels, 'view_order': self._view_order}
#after
@classmethod
def Vars(self):
return {'field_labels': self._field_labels, 'view_order': self._view_order}
Thanks to anyone who tried to help!
|
Python catch timeout and repeat request
Question: I'm trying to use the Xively API with python to update a datastream but
occasionally I get a 504 error which seems to end my script.
How can I catch that error and more importantly delay and try again so the
script can keep going and upload my data a minute or so later?
Here's the block where I'm doing the uploading.
# Upload to Xivity
api = xively.XivelyAPIClient("[MY_API_KEY]")
feed = api.feeds.get([MY_DATASTREAM_ID])
now = datetime.datetime.utcnow()
feed.datastreams = [xively.Datastream(id='temps', current_value=tempF, at=now)]
feed.update()
And here's the error I see logged when my script fails:
Traceback (most recent call last):
File "C:\[My Path] \ [My_script].py", line 39, in <module>
feed = api.feeds.get([MY_DATASTREAM_ID])
File "C:\Python34\lib\site-packages\xively_python-0.1.0_rc2-py3.4.egg\xively\managers.py", >line 268, in get
response.raise_for_status()
File "C:\Python34\lib\site-packages\requests-2.3.0-py3.4.egg\requests\models.py", line 795, >in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out
Thanks,
P.S. I've replaced my personal info with [MY_INFO] but obviously the correct
data appears in my code.
Answer: I usually use a decorator for this:
from functools import wraps
from requests.exceptions import HTTPError
import time
def retry(func):
""" Call `func` with a retry.
If `func` raises an HTTPError, sleep for 5 seconds
and then retry.
"""
@wraps(func)
def wrapper(*args, **kwargs):
try:
ret = func(*args, **kwargs)
except HTTPError:
time.sleep(5)
ret = func(*args, **kwargs)
return ret
return wrapper
Or, if you want to retry more than once:
def retry_multi(max_retries):
""" Retry a function `max_retries` times. """
def retry(func):
@wraps(func)
def wrapper(*args, **kwargs):
num_retries = 0
while num_retries <= max_retries:
try:
ret = func(*args, **kwargs)
break
except HTTPError:
if num_retries == max_retries:
raise
num_retries += 1
time.sleep(5)
return ret
return wrapper
return retry
Then put your code in a function like this
#@retry
@retry_multi(5) # retry 5 times before giving up.
def do_call():
# Upload to Xivity
api = xively.XivelyAPIClient("[MY_API_KEY]")
feed = api.feeds.get([MY_DATASTREAM_ID])
now = datetime.datetime.utcnow()
feed.datastreams = [xively.Datastream(id='temps', current_value=tempF, at=now)]
feed.update()
|
Python: variable's namespace when eval'ing compiled AST code
Question: Using Python 3.4 and testing the ast parse. Here is the test code.
import ast
import unittest
class TestAST(unittest.TestCase):
def test_ast(self):
#compileobj = compile(ast.parse("x=42"), '<input>', mode="exec")
#compile(ast.parse("x=42"), '<input>', mode="exec")
eval(compile(ast.parse("x=42"), '<input>', mode="exec"))
self.assertTrue(x == 42)
pass
if __name__ == '__main__':
unittest.main()
Running `python test.py` I got an error like this:
Traceback (most recent call last):
File "qocean\test\test_ast.py", line 13, in test_ast
self.assertTrue(x == 42)
NameError: name 'x' is not defined
While evaluating it in IPython, the x can be seen, like this:
In [3]: eval(compile(ast.parse("x=42"), '<input>', mode="exec"))
In [4]: x
Out[4]: 42
So the question is: why can't I use the var "x", since the eval code is the
same in the source code and in the IPython environment? What's the difference?
`#############################################`
**Update**: it seems to be something related to the unittest module; changing
the test code to run in main() makes it succeed.
if __name__ == '__main__':
eval(compile(ast.parse("x=42"), '<input>', mode="exec"))
print (x)
After running, it shows 42.
In more detail, calling this in a function also fails:
def test_eval():
eval(compile(ast.parse("y=42"), '<input>', mode="exec"))
print ("y is ", y)
if __name__ == '__main__':
#unittest.main()
test_eval()
Answer: In Python 3, you can't create new local names dynamically. Those are baked in
at the time the function is compiled to bytecode. Instead, you need to pass a
dictionary to `exec` to be used as locals within that statement.
def test_ast(self):
localvars = {}
eval(compile(ast.parse("x=42"), '<input>', mode="exec"), localvars)
self.assertTrue(localvars["x"] == 42)
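The same idea works with the Python 2 `exec` statement and an explicit
namespace (a minimal sketch, reusing the `ast` import from the question):

    def test_exec():
        namespace = {}
        exec compile(ast.parse("x=42"), '<input>', mode="exec") in namespace
        print namespace["x"]  # 42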
|
Installing M2Crypto 0.20.1 on Python 2.6 on Ubuntu 14.04
Question: I need to compile and install M2Crypto 0.20.1 from source for Python 2.6 on
Ubuntu 14.04. I can't migrate to Python 2.7 right now, but we're planning to. I
installed Python 2.6 from <https://launchpad.net/~fkrull/+archive/deadsnakes>.
I have installed libssl-dev and python2.6-dev, file /usr/include/x86_64-linux-
gnu/openssl/opensslconf.h has 644 as permissions and is owned by root.
However the `setup.py install` for M2Crypto fails as below:
Running setup.py install for M2Crypto
building 'M2Crypto.__m2crypto' extension
swigging SWIG/_m2crypto.i to SWIG/_m2crypto_wrap.c
swig -python -I/usr/include/python2.6 -I/usr/include -includeall -o SWIG/_m2crypto_wrap.c SWIG/_m2crypto.i
SWIG/_evp.i:9: Error: Unable to find 'openssl/opensslconf.h'
SWIG/_ec.i:7: Error: Unable to find 'openssl/opensslconf.h'
error: command 'swig' failed with exit status 1
Complete output from command /vagrant/venv/bin/python2.6 -c "import setuptools, tokenize;__file__='/vagrant/venv/build/M2Crypto/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-3vnOUl-record/install-record.txt --single-version-externally-managed --compile --install-headers /vagrant/venv/include/site/python2.6:
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-2.6
creating build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/RC4.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/BIO.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/callback.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/DSA.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/ftpslib.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/Engine.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/EVP.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/BN.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/DH.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/util.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/EC.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/httpslib.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/Rand.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/__init__.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/m2.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/m2xmlrpclib.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/RSA.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/threading.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/ASN1.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/SMIME.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/m2urllib2.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/Err.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/X509.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/AuthCookie.py -> build/lib.linux-x86_64-2.6/M2Crypto
copying M2Crypto/m2urllib.py -> build/lib.linux-x86_64-2.6/M2Crypto
creating build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/Session.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/Cipher.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/Connection.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/Checker.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/__init__.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/TwistedProtocolWrapper.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/SSLServer.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/Context.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/timeout.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/cb.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
copying M2Crypto/SSL/ssl_dispatcher.py -> build/lib.linux-x86_64-2.6/M2Crypto/SSL
creating build/lib.linux-x86_64-2.6/M2Crypto/PGP
copying M2Crypto/PGP/PublicKey.py -> build/lib.linux-x86_64-2.6/M2Crypto/PGP
copying M2Crypto/PGP/__init__.py -> build/lib.linux-x86_64-2.6/M2Crypto/PGP
copying M2Crypto/PGP/RSA.py -> build/lib.linux-x86_64-2.6/M2Crypto/PGP
copying M2Crypto/PGP/PublicKeyRing.py -> build/lib.linux-x86_64-2.6/M2Crypto/PGP
copying M2Crypto/PGP/packet.py -> build/lib.linux-x86_64-2.6/M2Crypto/PGP
copying M2Crypto/PGP/constants.py -> build/lib.linux-x86_64-2.6/M2Crypto/PGP
running build_ext
building 'M2Crypto.__m2crypto' extension
swigging SWIG/_m2crypto.i to SWIG/_m2crypto_wrap.c
swig -python -I/usr/include/python2.6 -I/usr/include -includeall -o SWIG/_m2crypto_wrap.c SWIG/_m2crypto.i
SWIG/_evp.i:9: Error: Unable to find 'openssl/opensslconf.h'
SWIG/_ec.i:7: Error: Unable to find 'openssl/opensslconf.h'
error: command 'swig' failed with exit status 1
----------------------------------------
Cleaning up...
Command /vagrant/venv/bin/python2.6 -c "import setuptools, tokenize;__file__='/vagrant/venv/build/M2Crypto/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-3vnOUl-record/install-record.txt --single-version-externally-managed --compile --install-headers /vagrant/venv/include/site/python2.6 failed with error code 1 in /vagrant/venv/build/M2Crypto
Traceback (most recent call last):
File "/vagrant/venv/bin/pip", line 11, in <module>
sys.exit(main())
File "/vagrant/venv/lib/python2.6/site-packages/pip/__init__.py", line 185, in main
return command.main(cmd_args)
File "/vagrant/venv/lib/python2.6/site-packages/pip/basecommand.py", line 161, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 39: ordinal not in range(128)
What could I be missing?
Answer: The path is wrong. Try to do this:
cd /usr/include/openssl/
ln -s ../x86_64-linux-gnu/openssl/opensslconf.h .
|
Sci-kit Learn PLS SVD and cross validation
Question: The `sklearn.cross_decomposition.PLSSVD` class in Sci-kit learn appears to be
failing when the response variable has a shape of `(N,)` instead of `(N,1)`,
where `N` is the number of samples in the dataset.
However, `sklearn.cross_validation.cross_val_score` fails when the response
variable has a shape of `(N,1)` instead of `(N,)`. How can I use them
together?
A snippet of code:
from sklearn.pipeline import Pipeline
from sklearn.cross_decomposition import PLSSVD
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
# x -> (N, 60) numpy array
# y -> (N, ) numpy array
# These are the classifier 'pieces' I'm using
plssvd = PLSSVD(n_components=5, scale=False)
logistic = LogisticRegression(penalty='l2', C=0.5)
scaler = StandardScaler(with_mean=True, with_std=True)
# Here's the pipeline that's failing
plsclf = Pipeline([('scaler', scaler),
('plssvd', plssvd),
('logistic', logistic)])
# Just to show how I'm using the pipeline for a working classifier
logclf = Pipeline([('scaler', scaler),
('logistic', logistic)])
##################################################################
# This works fine
log_scores = cross_validation.cross_val_score(logclf, x, y, scoring='accuracy',
verbose=True, cv=5, n_jobs=4)
# This fails!
pls_scores = cross_validation.cross_val_score(plsclf, x, y, scoring='accuracy',
verbose=True, cv=5, n_jobs=4)
Specifically, it fails in the `_center_scale_xy` function of
`cross_decomposition/pls_.pyc` with `'IndexError: tuple index out of range'`
at line 103: `y_std = np.ones(Y.shape[1])`, because the shape tuple has only
one element.
If I set `scale=True` in the `PLSSVD` constructor, it fails in the same
function at line 99: `y_std[y_std == 0.0] = 1.0`, because it is attempting to
do a boolean index on a float (`y_std` is a float, since it only has one
dimension).
Seems like an easy fix: just make sure the `y` variable has two dimensions,
`(N,1)`. **However:**
If I create an array with dimensions `(N,1)` out of the output variable `y`,
it still fails. In order to change the arrays, I add this before running
`cross_val_score`:
y = np.transpose(np.array([y]))
Then, it fails in `sklearn/cross_validation.py` at line 398:
File "my_secret_script.py", line 293, in model_create
scores = cross_validation.cross_val_score(plsclf, x, y, scoring='accuracy', verbose=True, cv=5, n_jobs=4)
File "/Users/my.secret.name/anaconda/lib/python2.7/site-packages/sklearn/cross_validation.py", line 1129, in cross_val_score
cv = _check_cv(cv, X, y, classifier=is_classifier(estimator))
File "/Users/my.secret.name/anaconda/lib/python2.7/site-packages/sklearn/cross_validation.py", line 1216, in _check_cv
cv = StratifiedKFold(y, cv, indices=needs_indices)
File "/Users/my.secret.name/anaconda/lib/python2.7/site-packages/sklearn/cross_validation.py", line 398, in __init__
label_test_folds = test_folds[y == label]
ValueError: boolean index array should have 1 dimension
I'm running this on OSX, NumPy version `1.8.0`, Sci-kit Learn version
`0.15-git`.
Any way to use `PLSSVD` together with `cross_val_score`?
Answer: Partial least squares projects both your data `X` and your target `Y` onto
linear subspaces spanned by `n_components` vectors each. They are projected in
a way that regression scores of one projected variable on the other are
maximized.
The number of components, i.e. dimensions of the latent subspaces is bounded
by the number of features in your variables. Your variable `Y` only has one
feature (one column), so the latent subspace is one-dimensional, effectively
reducing your construction to something more akin to (but not exactly the same
as) linear regression. So using partial least squares in this specific
situation is probably not useful.
Take a look at the following
import numpy as np
rng = np.random.RandomState(42)
n_samples, n_features_x, n_features_y, n_components = 20, 10, 1, 1
X = rng.randn(n_samples, n_features_x)
y = rng.randn(n_samples, n_features_y)
from sklearn.cross_decomposition import PLSSVD
plssvd = PLSSVD(n_components=n_components)
X_transformed, Y_transformed = plssvd.fit_transform(X, y)
`X_transformed` and `Y_transformed` are arrays of shape `n_samples,
n_components`, they are the projected versions of `X` and `Y`.
The answer to your question about using `PLSSVD` within a `Pipeline` in
`cross_val_score`, is **no** , it will not work out of the box, because the
`Pipeline` object calls `fit` and `transform` using both variables `X` and `Y`
as arguments if possible, which, as you can see in the code I wrote, returns a
_tuple_ containing the projected `X` and `Y` values. The next step in the
pipeline will not be able to process this, because it will think that this
tuple is the new `X`.
This type of failure is due to the fact that `sklearn` is only beginning to be
systematic about multiple target support. The `PLSSVD` estimator you are
trying to use is inherently multi target, even if you are only using it on one
target.
**Solution** : Don't use partial least squares on 1D targets, there would be
no gain to it even if it worked with the pipeline.
|
Print results in console of class
Question: I'm pretty new to OOP in Python and am simply trying to execute
`parse_param_paths()` to get the value of `dictpath`.
I have:
class Injection:
def __init__(self):
self.tld_object = None
self.path_object = None
print "Class Initialized"
def gather_path(self):
self.path_object = PathsOfDomain.objects.filter(FKtoTld=3)
return self.path_object
def parse_param_paths(self):
if self.path_object is not None:
dictpath = {}
for path in self.path_object:
self.params = path.split("?")[1].split("&")
out = list(map(lambda v: v.split("=")[0] +"=" + self.fuzz_vectors, self.params))
dictpath[path] = out
print dictpath
Any help is greatly appreciated here. Thank you
Answer:
from myapp import Injection
inj = Injection() # create an instance
inj.gather_path() # to set self.object to a value other than None
inj.parse_param_paths() # call method with instance
Put a `print self.path_object is not None` before `if self.path_object is not
None`. If `self.path_object` is `None` you won't get any output, as the
condition won't evaluate to `True`, so you won't execute any more lines in
your method.
|
How can I encrypt .docx files with AES & pycrypto without corrupting the files
Question: I've got this bit of python code that I want to use to encrypt various kinds
of files with AES 256. I am using the pycrypto module. It works fine for most
files (exe, deb, jpg, pdf, txt) but when it comes to office files (docx, xlsx,
ppt etc) the file is corrupted upon decryption and will no open (nor can it be
repaired) in LibreOffice. I am using Linux mint, python 2.7.6, pycrypto 2.6.1.
I'm still a bit of a noob so I'd appreciate it if you could give me code
examples of the corrections you'd recommend.
Thanks
from Crypto import Random
from Crypto.Cipher import AES
import os
def pad(s):
return s + b"\0" * (AES.block_size - len(s) % AES.block_size)
def encrypt(message, key, key_size=256):
message = pad(message)
iv = Random.new().read(AES.block_size)
cipher = AES.new(key, AES.MODE_CBC, iv)
return iv + cipher.encrypt(message)
def decrypt(ciphertext, key):
iv = ciphertext[:AES.block_size]
cipher = AES.new(key, AES.MODE_CBC, iv)
plaintext = cipher.decrypt(ciphertext[AES.block_size:])
return plaintext.rstrip(b"\0")
def encrypt_file(file_name, key):
with open(file_name, 'rb') as fo:
plaintext = fo.read()
enc = encrypt(plaintext, key)
with open(file_name + ".enc", 'wb') as fo:
fo.write(enc)
def decrypt_file(file_name, key):
with open(file_name, 'rb') as fo:
ciphertext = fo.read()
dec = decrypt(ciphertext, key)
with open(file_name[:-4], 'wb') as fo:
fo.write(dec)
key = b'\xbf\xc0\x85)\x10nc\x94\x02)j\xdf\xcb\xc4\x94\x9d(\x9e[EX\xc8\xd5\xbfI{\xa2$\x05(\xd5\x18'
encrypt_file('file.docx', key)
Answer: If you need to add padding to make the plaintext a multiple of 16 bytes, the
extra bytes need to be stripped before you write the decrypted data.
Zero-padding plus `rstrip(b"\0")` breaks binary formats such as .docx (which
is a zip archive) because any legitimate trailing zero bytes of the file are
stripped along with the padding. This means you will need to somehow include
the number of pad bytes added with the padding before you encrypt it. See
PKCS#7 for one possible technique. There are a number of other schemes as well.
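For illustration, a minimal PKCS#7-style pad/unpad pair that could replace the
zero-padding above (a sketch for Python 2 byte strings; every pad byte stores
the pad length, so stripping is unambiguous even for binary data):

    def pad(s):
        n = AES.block_size - len(s) % AES.block_size
        return s + chr(n) * n  # always adds between 1 and 16 bytes

    def unpad(s):
        return s[:-ord(s[-1])]  # the last byte says how many bytes to drop

With this scheme, `decrypt()` would return `unpad(plaintext)` instead of
`plaintext.rstrip(b"\0")`.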
|
Service endpoint interface for python
Question: I am working in Java, where Eclipse gives tools like wsimport that import all
the Java files and class files when you specify the URL of the web service to
the tool. Is there something like this for Python? How do we work with Python
to use any web service? There should be some service endpoint interface for
Python, right? If there is one, how do we use it? Please help, thanks in
advance.
Answer: [Jurko's port of suds](https://bitbucket.org/jurko/suds) will do the job. It's
easy to use, and it exposes a web service's methods and object types.
Example:
>>> import suds
>>> from suds.client import Client
>>> client = Client('http://wsf.cdyne.com/WeatherWS/Weather.asmx?WSDL')
>>> print(client)
Service ( Weather ) tns="http://ws.cdyne.com/WeatherWS/"
Prefixes (1)
ns0 = "http://ws.cdyne.com/WeatherWS/"
Ports (2):
(WeatherSoap)
Methods (3):
GetCityForecastByZIP(xs:string ZIP)
GetCityWeatherByZIP(xs:string ZIP)
GetWeatherInformation()
Types (8):
ArrayOfForecast
ArrayOfWeatherDescription
Forecast
ForecastReturn
POP
WeatherDescription
WeatherReturn
temp
...
As you can see, the `Client` class just requires a URL in order to print out
info for the web service.
>>> weather = client.service.GetCityWeatherByZIP('02118')
>>> print(weather)
(WeatherReturn){
Success = True
ResponseText = "City Found"
State = "MA"
City = "Boston"
WeatherStationCity = "Boston"
WeatherID = 14
Description = "Cloudy"
Temperature = "64"
RelativeHumidity = "80"
Wind = "S9"
Pressure = "30.19F"
Visibility = None
WindChill = None
Remarks = None
}
The `client` object has a `service` attribute that lets you run methods. When
I ran `GetCityWeatherByZIP`, `suds` converted the XML response into a Python
object (in this case, a `WeatherReturn` object). You can access its attributes
like any other Python object.
>>> weather.Description
Cloudy
>>> weather.Wind
S9
You can also create XML-associated objects of your own with `Client.factory`.
>>> forecast = client.factory.create('Forecast')
>>> forecast.WeatherID = 6
>>> print(forecast)
(Forecast){
Date = None
WeatherID = 6
Desciption = None
Temperatures =
(temp){
MorningLow = None
DaytimeHigh = None
}
ProbabilityOfPrecipiation =
(POP){
Nighttime = None
Daytime = None
}
}
Although it's a little dated, [this doc
page](https://fedorahosted.org/suds/wiki/Documentation) should help you get
started.
|
Regex to match all new line characters outside of some tag
Question: I need to match all new line characters outside of a particular html tag or
pseudotag.
Here is an example. I want to match all `"\n"`s ouside of `[code] [/code]`
tags (in order to replace them with `<br>` tags) in this text fragment:
These concepts are represented by simple Python classes.
Edit the polls/models.py file so it looks like this:
[code]
from django.db import models
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
[/code]
I know that I should use negative lookaheads, but I'm struggling to figure the
whole thing out.
Specifically, I need a PCRE expression, I will use it with PHP and perhaps
Python.
Answer: To me, this situation seems to be straight out of [Match (or replace) a
pattern except in situations s1, s2, s3
etc](http://stackoverflow.com/questions/23589174/match-or-replace-a-pattern-
except-in-situations-s1-s2-s3-etc/23589204#23589204). Please visit that link
for full discussion of the solution.
I will give you answers for both PHP and Python (since the example mentioned
django).
**PHP**
(?s)\[code\].*?\[/code\](*SKIP)(*F)|\n
The left side of the alternation matches complete [code]...[/code] tags, then
deliberately fails, and skips the part of the string that was just matched.
The right side matches newlines, and we know they are the right newlines
because they were not matched by the expression on the left.
This PHP program shows how to use the regex (see the results at the bottom of
the [online demo](http://ideone.com/iXMIj8)):
<?php
$regex = '~(?s)\[code\].*?\[/code\](*SKIP)(*F)|\n~';
$subject = "These concepts are represented by simple Python classes.
Edit the polls/models.py file so it looks like this:
[code]
from django.db import models
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
[/code]";
$replaced = preg_replace($regex,"<br />",$subject);
echo $replaced."<br />\n";
?>
**Python**
For Python, here's our simple regex:
(?s)\[code\].*?\[/code\]|(\n)
The left side of the alternation matches complete `[code]...[/code]` tags. We
will ignore these matches. The right side matches and captures newlines to
Group 1, and we know they are the right newlines because they were not matched
by the expression on the left.
This Python program shows how to use the regex (see the results at the bottom
of the [online demo](http://ideone.com/Gg3VCV)):
import re
subject = """These concepts are represented by simple Python classes.
Edit the polls/models.py file so it looks like this:
[code]
from django.db import models
class Question(models.Model):
question_text = models.CharField(max_length=200)
pub_date = models.DateTimeField('date published')
[/code]"""
regex = re.compile(r'(?s)\[code\].*?\[/code\]|(\n)')
def myreplacement(m):
if m.group(1):
return "<br />"
else:
return m.group(0)
replaced = regex.sub(myreplacement, subject)
print(replaced)
|
Call Python from Java code using Jython cause error: ImportError: no module named nltk
Question: I'm calling Python code from Java code using Jython's PythonInterpreter. The
Python code just tags the sentence:
import nltk
import pprint
tokenizer = None
tagger = None
def tag(sentences):
global tokenizer
global tagger
tagged = nltk.sent_tokenize(sentences.strip())
tagged = [nltk.word_tokenize(sent) for sent in tagged]
tagged = [nltk.pos_tag(sent) for sent in tagged]
return tagged
def PrintToText(tagged):
output_file = open('/Users/ha/NetBeansProjects/JythonNLTK/src/jythonnltk/output.txt', 'w')
output_file.writelines( "%s\n" % item for item in tagged )
output_file.close()
def main():
sentences = """What is the salary of Jamie"""
tagged = tag(sentences)
PrintToText(tagged)
pprint.pprint(tagged)
if __name__ == '__main__':
main()
I got this error:
run:
Traceback (innermost last):
(no code object) at line 0
File "/Users/ha/NetBeansProjects/JythonNLTK/src/jythonnltk/Code.py", line 42
output_file.writelines( "%s\n" % item for item in tagged )
^
SyntaxError: invalid syntax
BUILD SUCCESSFUL (total time: 1 second)
This code works fine if I open it in a Python project, but calling it from
Java fires this error. How can I solve it?
Thanks in advance
**UPDATE**: I have edited the line to `output_file.writelines( ["%s\n" % item
for item in tagged] )` as @User suggested, but I received another error
message:
Traceback (innermost last):
File "/Users/ha/NetBeansProjects/JythonNLTK/src/jythonnltk/Code.py", line 5, in ?
ImportError: no module named nltk
BUILD SUCCESSFUL (total time: 1 second)
Answer: Now that the compile-time syntax error is solved, you are getting run-time
errors. What is `nltk`? Where is `nltk`? The `ImportError` implies that `nltk`
is not in your import path.
Try writing a small, simple program and examine `sys.path`; you might need to
append the location of `nltk` before importing it.
### The import fails if nltk is not in the system path:
>>> import nltk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named nltk
### Try inspecting the system path:
>>> import sys
>>> sys.path
['', '/usr/lib/site-python', '/usr/share/jython/Lib', '__classpath__', '__pyclasspath__/', '/usr/share/jython/Lib/site-packages']
### Try appending the location of nltk to the system path:
>>> sys.path.append("/path/to/nltk")
#### Now try the import again.
|
django nested_inlines not shown in admin site
Question: I'm trying to use nested_inlines and read that the bug where the third inline
is not shown was already fixed, but I still have the same problem. I'm using
django 1.6.5 and python 2.7.5. The nested_inlines I downloaded from
<https://pypi.python.org/pypi/django-nested-inlines> .
I tried the examples on the internet and put 'nested_inlines' into
INSTALLED_APPS, but I don't see the third inline on my admin site.
Here my code in models.py:
from django.db import models
class A(models.Model):
name = models.CharField(max_length = 200)
class B(models.Model):
name = models.CharField(max_length = 200)
fk_a = models.ForeignKey('A')
class C(models.Model):
name = models.CharField(max_length = 200)
fk_b = models.ForeignKey('B')
admin.py:
from django.contrib import admin
from .models import A,B,C
from nested_inlines.admin import NestedStackedInline, NestedModelAdmin
class cInline (NestedStackedInline):
model = C
class bInline(NestedStackedInline):
model = B
inlines = [cInline,]
extra = 1
class aAdmin(NestedModelAdmin):
inlines =[bInline,]
admin.site.register(A, aAdmin)
What did I forget? Any advice?
Answer: I believe it is a bug. I'm working on the exact same problem right now. Try
adding an `extra` to `cInline`:
class cInline (NestedStackedInline):
model = C
extra = 1
It just doesn't seem to show up when there are no related models.
edit: also, use this repo instead: <https://github.com/silverfix/django-
nested-inlines>
They recommend it here (at the bottom):
<https://code.djangoproject.com/ticket/9025>
installation: `pip install -e git+git://github.com/silverfix/django-nested-
inlines.git#egg=django-nested-inlines`
|
Python: Tkinter to Shutdown, Restart and Sleep
Question: I am currently developing a small but basic application using tkinter to run
on my Windows startup so I can have a little menu for the different things I
want to open. For example, I currently have buttons to launch a few games I
play and buttons to launch Skype, Steam etc. But I am also adding buttons to
the menu to Shutdown, Restart and make my computer Sleep. So far the code I
have is fairly basic, but still, here it is:
from Tkinter import *
import os, sys, subprocess
win=Tk()
b1 = Button(win, text = "SKYPE")
b2 = Button(win, text = "STEAM", command = lambda: os.startfile("C:\Program Files (x86)\Steam\Steam.exe"))
b3 = Button(win, text = "GOOGLE")
b4 = Button(win, text = "CS:GO")
b5 = Button(win, text = "RUST")
b6 = Button(win, text = "PPIRACY")
b7 = Button(win, text = "TERRARIA")
b8 = Button(win, text = "SHUTDOWN", command = lambda: subprocess.call(["shutdown.exe", "-f", "-s", "-t", "0"]))
b9 = Button(win, text = "SLEEP", command = lambda: subprocess.call(["sleep.exe", "-f", "-s", "-t", "0"]))
b10 = Button(win, text = "RESTART", command = lambda: subprocess.call(["restart.exe", "-f", "-s", "-t", "0"]))
l = Label(win, text = "Apps")
k = Label(win, text = "Games")
j = Label(win, text = "Misc")
l.grid(row = 0, column = 0, padx = 10, pady = 10)
k.grid(row = 0, column = 1, padx = 10, pady = 10)
j.grid(row = 0, column = 2, padx = 10, pady = 10)
b1.grid(row = 1, column = 0, padx = 10, pady = 10)
b2.grid(row = 2, column = 0, padx = 10, pady = 10)
b3.grid(row = 3, column = 0, padx = 10, pady = 10)
b4.grid(row = 1, column = 1, padx = 10, pady = 10)
b5.grid(row = 2, column = 1, padx = 10, pady = 10)
b6.grid(row = 3, column = 1, padx = 10, pady = 10)
b7.grid(row = 4, column = 1, padx = 10, pady = 10)
b8.grid(row = 1, column = 2, padx = 10, pady = 10)
b9.grid(row = 2, column = 2, padx = 10, pady = 10)
b10.grid(row = 3, column = 2, padx = 10, pady = 10)
mainloop()
As you can see, buttons 8, 9 and 10 are for these three things. The shutdown
works fine, so I thought I'd try the same command but with sleep.exe or
restart.exe (thought I'd give it a try). But obviously I am getting the error
Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1470, in __call__
return self.func(*args)
File "C:\Users\NAME\Desktop\test.py", line 17, in <lambda>
b10 = Button(win, text = "RESTART", command = lambda: subprocess.call(["restart.exe", "-f", "-s", "-t", "0"]))
File "C:\Python27\lib\subprocess.py", line 522, in call
return Popen(*popenargs, **kwargs).wait()
File "C:\Python27\lib\subprocess.py", line 709, in __init__
errread, errwrite)
File "C:\Python27\lib\subprocess.py", line 957, in _execute_child
startupinfo)
WindowsError: [Error 2] The system cannot find the file specified
And then for sleep the same again, but obviously for making the system sleep.
What would be the way for me to go about shutdown, sleep and restart then, if
those executables do not exist? I am currently on Windows 8.1 if that makes a
difference. Thanks for any help.
Answer: To get the system to restart on the line of:
b10 = Button(win, text = "RESTART", command = lambda: subprocess.call(["shutdown.exe", "-f", "-s", "-t", "0"]))
You must change the -s to a -r to restart. The only part left to fix is the
computer not sleeping with -h (hibernate); not sure of any other ways to do
this on Windows 8.1, as -hg does not work.
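For the sleep button, one commonly used approach is calling SetSuspendState
through rundll32 (a sketch; note that on many machines this hibernates instead
of sleeping when hibernation is enabled, so the behaviour depends on the power
settings):

    b9 = Button(win, text = "SLEEP", command = lambda: subprocess.call(
        ["rundll32.exe", "powrprof.dll,SetSuspendState", "0,1,0"]))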
|
Weird arithmetic with datetimes and relativedelta
Question: Is it safe to multiply [`relativedelta`](http://labix.org/python-
dateutil#head-ba5ffd4df8111d1b83fc194b97ebecf837add454) objects? I'm seeing
some weird and inconsistent behaviour, and can't find it documented what sorts
of arithmetic are supported by this class (if any)
>>> from datetime import datetime
>>> from dateutil.relativedelta import relativedelta
>>> datetime.now() + relativedelta(days=2)
datetime.datetime(2014, 5, 30, 12, 24, 59, 173941)
>>> datetime.now() + relativedelta(days=1) * 2
# TypeError: integer argument expected, got float
On the other hand:
>>> relativedelta(days=2) == relativedelta(days=1) * 2
True
Full traceback (with `python` 2.7.5 and `dateutil` 1.5):
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/dateutil/relativedelta.py", line 261, in __radd__
day = min(calendar.monthrange(year, month)[1],
File "/usr/lib/python2.7/calendar.py", line 121, in monthrange
day1 = weekday(year, month, 1)
File "/usr/lib/python2.7/calendar.py", line 113, in weekday
return datetime.date(year, month, day).weekday()
TypeError: integer argument expected, got float
Answer: You've run into a [known bug in `relativedelta`'s handling of
multiplication](https://bugs.launchpad.net/dateutil/+bug/737099), since fixed.
It only affects Python 2.7 or newer (call signatures of certain functions were
tightened).
Upgrade your `python-dateutils` package to version 2.1 or newer.
Don't be put off by the 2.0-is-Python-3-only misinformation on the project
documentation; 2.1 and 2.2 are Python 2 and 3 cross-compatible.
|
How to import a module with a dotted path?
Question: I want to import the **paramiko** module located in
**/usr/local/lib/python2.7/dist-packages**. So, I imported it this way:
from usr.local.lib.python2.7.dist-packages import paramiko
I get a syntax error related to python2.7 (it considers 7 as a package located
inside the python2 package).
I have both Python 3.1.3 and Python 2.7 installed; I program with Python 3.1.3
only, however. How can I resolve this problem?
Answer: How about ?
import sys
sys.path.append('/usr/local/lib/python2.7/dist-packages')
import paramiko
**UPDATED**
The best solution is installing `paramiko` in your Python 3 environment; take
a look at @DanielRoseman's answer. Alternatively, `virtualenv` is worth
considering. Here is a good tutorial: <http://simononsoftware.com/virtualenv-tutorial/>
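A typical virtualenv workflow looks like this (a sketch; the interpreter flag
may need a full path on your system):

    $ virtualenv -p python3 venv
    $ . venv/bin/activate
    $ pip install paramiko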
|
Update YAML file programmatically
Question: I've a Python dict that comes from reading a YAML file with the usual
yaml.load(stream)
I'd like to update the YAML file programmatically given a path to be updated
like:
group1,option1,option11,value
and save the resulting dict again as a yaml file. I'm facing the problem of
updating a dictionary, taking into account that the path is dynamic (let's
say a user is able to enter the path through a simple CLI I've created using
Cmd).
Any ideas?
thanks!
**UPDATE** Let me be more specific on the question: The issue is with updating
part of a dictionary where I do not know in advance the structure. I'm working
on a project where all the configuration is stored on YAML files, and I want
to add a CLI to avoid having to edit them by hand. This is a sample YAML file,
loaded into a dictionary (config-dict) using PyYAML:
config:
a-function: enable
b-function: disable
firewall:
NET:
A:
uplink: enable
downlink: enable
B:
uplink: enable
downlink: enable
subscriber-filter:
cancellation-timer: 180
service:
copy:
DS: enable
remark:
header-remark:
DSC: enable
remark-table:
port:
linkup-debounce: 300
p0:
mode: amode
p1:
mode: bmode
p2:
mode: amode
p3:
mode: bmode
I've created the CLI with Cmd, and it's working great even with
autocompletion. The user may provide a line like:
config port p1 mode amode
So, I need to edit:
config-dict['config']['port']['p1']['mode'] and set it to 'amode'. Then, use
yaml.dump() to create the file again. Another possible line would be:
config a-function enable
So config-dict['config']['a-function'] has to be set to 'enable'.
My problem is updating the dictionary. If Python passed values by reference it
would be easy: just iterate through the dict until the right value is found and
save it. Actually this is what I'm doing for the Cmd autocomplete. But I don't
know how to do the update.
Hope I explained myself better now!
Thanks in advance.
Answer: In fact the solution follows a simple pattern: load - modify - dump:
Before playing, be sure you have pyyaml installed:
$ pip install pyyaml
# `testyaml.py`
import yaml
fname = "data.yaml"
dct = {"Jan": {"score": 3, "city": "Karvina"}, "David": {"score": 33, "city": "Brno"}}
with open(fname, "w") as f:
yaml.dump(dct, f)
with open(fname) as f:
newdct = yaml.load(f)
print newdct
newdct["Pipi"] = {"score": 1000000, "city": "Stockholm"}
with open(fname, "w") as f:
yaml.dump(newdct, f)
# Resulting `data.yaml`
$ cat data.yaml
David: {city: Brno, score: 33}
Jan: {city: Karvina, score: 3}
Pipi: {city: Stockholm, score: 1000000}
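For the dynamic-path part of the question: Python dicts are mutable, so you can walk down the nested dict and assign through it. A minimal sketch (assuming the loaded dict is named `config_dict` and the user's input has already been split into a list of keys plus a value):

    def set_by_path(dct, path, value):
        # descend along all keys but the last
        for key in path[:-1]:
            dct = dct[key]
        # assigning to the final key mutates the original nested dict
        dct[path[-1]] = value

    # e.g. for the CLI line "config port p1 mode amode":
    set_by_path(config_dict, ['config', 'port', 'p1', 'mode'], 'amode')

After that, a plain `yaml.dump(config_dict, f)` writes the updated structure back to the file.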
|
Running many python threads and grabbing the returns values of each
Question: So I've had a look at several questions on SO, and also several webpages, and
the Python pages for `thread`, `threading`, and `multiprocessing`, but I can't
seem to find anything that does what I want.
I've seen a few implementations which use queues, but they aren't really what
I want either.
Is there a way I can do something like this:
hosts = "host1 host2 host3 host4 host5".split() #lazy but meh
cpuInfo = {}
threadPool = ThreadPool(processes=len(hosts))
def getCPUUsage(host):
proc = Popen("ssh %s.rest.of.host.name top -bin 1" % host, stdout=PIPE, shell=True)
return proc.stdout.read()
for host in hosts:
cpuInfo[host] = threadPool.newThread(getCPUUsage(host)).getFunctionReturnStuff())
threadPool.waitUntilFinished()
print cpuInfo # Some dictionary with a `top -bin 1` output for each host.
The only example I've seen so far uses a queue to get all the info out of the
various threads by passing a `queue` as an argument to the function, and having
the function manually add its return values to said queue - or they simply do
not return anything and instead just `print` - which is useless to me.
Answer: The following example starts one worker per CPU on the computer. For each
hostname in a list, it tells a worker to ssh to that computer, run a command,
and return the result to original caller. In my case I'm just ssh'ing once
into my laptop, but you get the idea.
import multiprocessing, subprocess
def get_value(args):
hostname = args[0]
return subprocess.check_output(
'ssh {} hostname'.format(hostname),
shell=True,
)
pool = multiprocessing.Pool()
print list( pool.imap_unordered(
get_value,
[ ('localhost',) ],
) )
Example output from my laptop:
['palabras\n']
See: <https://docs.python.org/2/library/multiprocessing.html>
|
Pyocr doesn't recognize get_available_languages
Question: I'm using Python and can't load languages from the pyocr package.
from PIL import Image
import sys
import pyocr
from pyocr import builders
im=Image.open("Img1.gif")
tool=pyocr.get_available_tools()
lang = tool.get_available_languages()[0]
Here I get: "'list' object has no attribute 'get_available_languages'"
Any ideas of how to solve it? I've never used pyocr.
Thanks
Answer: The `tools` is a list, and you need to iterate over them; the items are
individual tools:
for tool in pyocr.get_available_tools():
for lang in tool.get_available_languages():
print("Tool {} supports language {}".format(tool, lang))
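Applied to the snippet from the question, the immediate fix is just to index into the list (assuming at least one OCR tool is actually installed):

    tools = pyocr.get_available_tools()
    tool = tools[0]  # pick the first available tool
    lang = tool.get_available_languages()[0]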
Alternatively, say you know you want to use `tesseract`, you can import it
directly:
from pyocr import tesseract as tool
However the list of tools reflects the programs that are installed on the
system; to enable the `tesseract` tool in `pyocr` you need to install the
separate `tesseract` program. If it is found in the command path, the
`get_available_tools()` will list tesseract. The other tool supported by
`pyocr` is `cuneiform`.
|
How do you read post variables in python-eve?
Question: How do we read post variables in python-eve ?
If I do
curl -d [ {"firstname" : "Barack", "lastname":"Obama"} ] <url>
how would I read post variables[firstname and lastname] inside the program ?
Thanks !
Answer: I post this with the caveat that you are probably not defining and requesting
your resources correctly.
This makes your freshly posted values available in a list named 'items'
from eve import Eve
def read_insert(resource, items):
print 'resource: "%s"' % resource
print 'items: "%s"' % items
pass
if __name__ == '__main__':
app = Eve()
app.on_insert += read_insert
app.run()
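As an aside, the curl invocation in the question won't survive shell quoting as written; something along these lines (with your actual URL) posts valid JSON to Eve:

    curl -H "Content-Type: application/json" -d '{"firstname": "Barack", "lastname": "Obama"}' <url>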
|
Use Enthought Canopy to run a python script without specifying its full path
Question: I want to be able to run a python script at the command line using Enthought
Canopy, but I don't want to specify the full path to the script.
As I see it, there are two options.
Option 1: Make the python script an executable, add `#!/usr/bin/env python` to
the top of the script, and put the directory containing the script on my
`$PATH`. Now I can execute the script like this:
$ run.py
Option 2: As suggested by Andrew Clark in another [SO
post](http://stackoverflow.com/questions/10002291/how-to-run-a-python-script-
portably-without-specifying-its-full-path), just put the directory containing
the script on my `$PYTHONPATH`. Then I can execute the script like this:
$ python -m run.py
The `-m` causes python to search the `$PYTHONPATH`.
I prefer Option 2, and it works fine with the system python on my mac
(v2.7.2), but I cannot get it to work with Enthought Canopy. I can load Canopy
python and import modules in the same directory as `run.py`, so I know that I
have the path correct. I just cannot execute the script from the command line.
Is this a bug or am I doing something wrong?
Answer: BTW, it's probably a typo, but just to make sure you should be using the
module name, not the file name, with the `-m` option. For example, `python -m
run`
If that is not the problem then make sure that the python that is used in your
option 2 is the python located in your Canopy User virtual environment. You
can use the `which` command to verify that. For example:
    $ which python
    /Users/YourUserId/Library/Enthought/Canopy_64bit/User/bin/python
If that is not what you get then you can either add that bin folder to the
beginning of your `PATH` environment variable, or you can activate that
virtual environment like this:
    source /Users/YourUserId/Library/Enthought/Canopy_64bit/User/bin/activate
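Prepending that bin folder to your `PATH` (path taken from the `which` output above) would look like:

    $ export PATH=/Users/YourUserId/Library/Enthought/Canopy_64bit/User/bin:$PATH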
|
grep/zgrep within python using subprocess
Question: I have a set of tsvs that are zipped in *.tsv.gz format and some that are not
zipped, i.e., *.tsv, in a directory.
I want to grep for a string in these files and print each grep result on a
new line.
I have a function that takes the input directory in which the *.tsv and
*.tsv.gz files are stored and the string to be searched for.
import sys, os, traceback,subprocess,gzip,glob
def filter_from_tsvs(input_dir,string):
tsvs = glob.glob(os.path.join(input_dir,'*.tsv*'))
open_cmd=open
for tsvfile in tsvs:
print os.path.splitext
extension = os.path.splitext(tsvfile)[1]
if extension == ".gz":
open_cmd = gzip.open
print open_cmd
try:
print subprocess.check_output('grep string tsvfile', shell=True)
except Exception as e:
print "%s" %e
print "%s" %traceback.format_exc()
return
I have also tried to use:

    try:
        fname = open_cmd(tsvfile,"r")
        print "opened"
        print subprocess.check_output('grep string fname', shell=True)
I get this error:

    gzip: tsvfile.gz: No such file or directory
    Command 'zgrep pbuf tsvfile' returned non-zero exit status 2
    Traceback (most recent call last):
      File "ex.py", line 23, in filter_from_maintsvs
        print subprocess.check_output('zgrep pbuf tsvfile', shell=True)
      File "/datateam/tools/opt/lib/python2.7/subprocess.py", line 544, in check_output
        raise CalledProcessError(retcode, cmd, output=output)
    CalledProcessError: Command 'zgrep pbuf tsvfile' returned non-zero exit status 2
How can I use grep/zgrep within Python?
Thanks in advance for the help.
Answer: I got the following solution after going through a
[blog](https://blog.nelhage.com/2010/02/a-very-subtle-bug/) and it worked for
me :)
import subprocess
import signal
output = subprocess.check_output('grep string tsvfile', shell=True, preexec_fn=lambda: signal.signal(signal.SIGPIPE, signal.SIG_DFL))
print output
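One more thing worth pointing out: in the original code, `'grep string tsvfile'` searches for the literal word `string` in a file literally named `tsvfile`; the Python variables are never substituted in. A sketch that passes the actual values, using the list form to sidestep shell quoting (`zgrep` also handles plain, uncompressed files):

    import subprocess
    # pass the variable values, not their names
    output = subprocess.check_output(['zgrep', string, tsvfile])

Also note that `check_output` raises `CalledProcessError` when grep finds no match (exit status 1), so you may want to catch that case.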
|
Python Lex-Yacc(PLY): Not recognizing start of line or start of string
Question: I am very new to [PLY](http://www.dabeaz.com/ply/) and a bit more than a
beginner to Python. I am trying to play around with
[PLY-3.4](http://www.dabeaz.com/ply/ply-3.4.tar.gz) and python 2.7 to learn
it. Please see the code below. I am trying to create a token QTAG which is a
string made of zero or more whitespaces followed by 'Q' or 'q', followed by
'.' and a positive integer and one or more whitespaces. For example VALID
QTAGs are
"Q.11 "
" Q.12 "
"q.13 "
'''
Q.14
'''
INVALID ones are
"asdf Q.15 "
"Q. 15 "
Here is my code:
import ply.lex as lex
class LqbLexer:
# List of token names. This is always required
tokens = [
'QTAG',
'INT'
]
# Regular expression rules for simple tokens
def t_QTAG(self,t):
r'^[ \t]*[Qq]\.[0-9]+\s+'
t.value = int(t.value.strip()[2:])
return t
# A regular expression rule with some action code
# Note addition of self parameter since we're in a class
def t_INT(self,t):
r'\d+'
t.value = int(t.value)
return t
# Define a rule so we can track line numbers
def t_newline(self,t):
r'\n+'
print "Newline found"
t.lexer.lineno += len(t.value)
# A string containing ignored characters (spaces and tabs)
t_ignore = ' \t'
# Error handling rule
def t_error(self,t):
print "Illegal character '%s'" % t.value[0]
t.lexer.skip(1)
# Build the lexer
def build(self,**kwargs):
self.lexer = lex.lex(debug=1,module=self, **kwargs)
# Test its output
def test(self,data):
self.lexer.input(data)
while True:
tok = self.lexer.token()
if not tok: break
print tok
# test it
q = LqbLexer()
q.build()
#VALID inputs
q.test("Q.11 ")
q.test(" Q.12 ")
q.test("q.13 ")
q.test('''
Q.14
''')
# INVALID ones are
q.test("asdf Q.15 ")
q.test("Q. 15 ")
The output I get is as follows:
LexToken(QTAG,11,1,0)
Illegal character 'Q'
Illegal character '.'
LexToken(INT,12,1,4)
LexToken(QTAG,13,1,0)
Newline found
Illegal character 'Q'
Illegal character '.'
LexToken(INT,14,2,6)
Newline found
Illegal character 'a'
Illegal character 's'
Illegal character 'd'
Illegal character 'f'
Illegal character 'Q'
Illegal character '.'
LexToken(INT,15,3,7)
Illegal character 'Q'
Illegal character '.'
LexToken(INT,15,3,4)
Notice that only the first and third of the valid inputs are correctly
tokenized. I am not able to figure out why my other valid inputs are not being
tokenized properly. In the doc string for t_QTAG:
1. Replacing `'^'` with `'\A'` did not work.
2. I tried by removing `'^'` . Then all the valid inputs get tokenized, but then second invalid input also gets tokenized.
Any help is appreciated in advance!
Thanks
PS: I joined the google-group ply-hack and tried posting there, but I could
not post either directly in the forum or through email. I am not sure if the
group is active anymore. Prof. Beazley is not responding either. Any ideas?
Answer: Finally I found the answer myself. Posting it so that others may find it
useful.
As @Tadgh rightly pointed out `t_ignore = ' \t'` consumes the spaces and tabs
and hence I will not be able to match as per above regex for `t_QTAG` and the
consequence is that the second valid input is not tokenized. By reading the
PLY documentation carefully, I learned that if the ordering of the regex for
tokens is to be maintained then they have to be defined in a function rather
than strings as was done for `t_ignore`. If strings are used then PLY
automatically orders them by longest to shortest length and appends them
_after_ the functions. Here `t_ignore` is special, I guess, that it is somehow
executed before anything else. This part is not clearly documented. The work
around for this to define a function with a new token, eg, `t_SPACETAB`,
_after_ `t_QTAG` and just do not return anything. With this, all the _valid_
inputs are correctly tokenized now, except the one with triple quotes (the
multi-line string containing `"Q.14"`). Also, the invalid ones are, as per
specification, not tokenized.
Multi-line string problem: It turns out that internally PLY uses the `re` module.
In that module, `^` is by default interpreted only at the beginning of a _string_
and NOT at the beginning of _every line_. To change that behavior, I need to
turn on the multi-line flag, which can be done within the regex using `(?m)`.
So, to process all the valid and invalid strings in my test properly, the
correct regex is:
`r'(?m)^\s*[Qq]\.[0-9]+\s+'`
Here is the corrected code with some more tests added:
import ply.lex as lex
class LqbLexer:
# List of token names. This is always required
tokens = [
'QTAG',
'INT',
'SPACETAB'
]
# Regular expression rules for simple tokens
def t_QTAG(self,t):
# corrected regex
r'(?m)^\s*[Qq]\.[0-9]+\s+'
t.value = int(t.value.strip()[2:])
return t
# A regular expression rule with some action code
# Note addition of self parameter since we're in a class
def t_INT(self,t):
r'\d+'
t.value = int(t.value)
return t
# Define a rule so we can track line numbers
def t_newline(self,t):
r'\n+'
print "Newline found"
t.lexer.lineno += len(t.value)
# A string containing ignored characters (spaces and tabs)
# Instead of t_ignore = ' \t'
def t_SPACETAB(self,t):
r'[ \t]+'
print "Space(s) and/or tab(s)"
# Error handling rule
def t_error(self,t):
print "Illegal character '%s'" % t.value[0]
t.lexer.skip(1)
# Build the lexer
def build(self,**kwargs):
self.lexer = lex.lex(debug=1,module=self, **kwargs)
# Test its output
def test(self,data):
self.lexer.input(data)
while True:
tok = self.lexer.token()
if not tok: break
print tok
# test it
q = LqbLexer()
q.build()
print "-============Testing some VALID inputs===========-"
q.test("Q.11 ")
q.test(" Q.12 ")
q.test("q.13 ")
q.test("""
Q.14
""")
q.test("""
qewr
dhdhg
dfhg
Q.15 asda
""")
# INVALID ones are
print "-============Testing some INVALID inputs===========-"
q.test("asdf Q.16 ")
q.test("Q. 17 ")
Here is the output:
-============Testing some VALID inputs===========-
LexToken(QTAG,11,1,0)
LexToken(QTAG,12,1,0)
LexToken(QTAG,13,1,0)
LexToken(QTAG,14,1,0)
Newline found
Illegal character 'q'
Illegal character 'e'
Illegal character 'w'
Illegal character 'r'
Newline found
Illegal character 'd'
Illegal character 'h'
Illegal character 'd'
Illegal character 'h'
Illegal character 'g'
Newline found
Illegal character 'd'
Illegal character 'f'
Illegal character 'h'
Illegal character 'g'
Newline found
LexToken(QTAG,15,6,18)
Illegal character 'a'
Illegal character 's'
Illegal character 'd'
Illegal character 'a'
Newline found
-============Testing some INVALID inputs===========-
Illegal character 'a'
Illegal character 's'
Illegal character 'd'
Illegal character 'f'
Space(s) and/or tab(s)
Illegal character 'Q'
Illegal character '.'
LexToken(INT,16,8,7)
Space(s) and/or tab(s)
Illegal character 'Q'
Illegal character '.'
Space(s) and/or tab(s)
LexToken(INT,17,8,4)
Space(s) and/or tab(s)
|
Why does multiprocessing work on Django runserver and not on ngnix uwsgi?
Question: I have a Django 1.6 using python3.3 application which receives an http
request, does short term work, starts a new process and returns in a matter of
2 seconds. The process typically takes 50-60 seconds, at which time it writes
to the database that the data is ready, where a timed ajax call can retrieve
the data and update the client web page.
This works perfectly on the Django development runserver.
When I deploy the application on ngnix uwsgi the quick response is delayed for
50-60 seconds and appears in tandem with the completion of the process.
I logged the time between entering the view and just before sending the
response and it was 1-2 seconds.
I checked the packets with wireshark and all communication ceases for the
duration of the process i.e 50-60 seconds.
in the uwsgi .ini file I used processes=3
I would be grateful for a solution or a line of investigation.
the .ini file:
    [uwsgi]
    chdir=/nnnhn/
    module=wsgi
    #master=True
    pidfile=/tmp/project-master.pid
    vacuum=True
    max-requests=5000
    daemonize=/var/log/uwsgi.log
    socket=/tmp/uwsgi.sock
    uid=hhm
    processes=3
    pythonpath=/csghgfh/doo
the process spawning code:
process=multiprocessing.Process(target=Util.processImage1, args=(processUtil, img_full_path, snapshot.pk, process_pk, db_name, process_table_name))
process.start()
the relevant spawned code:
def processImage1(self, img_full_path, snapshot_pk, process_pk, db_name, process_table_name):
connection.close()
print('')
print('I think I just closed th db connection connection')
print('')
print('processImage1:', datetime.now())
print('process_pk: ', process_pk)
sys.stdout.flush()
try:
con = psycopg2.connect(host = 'a', database=db_name, user='a')
cur = con.cursor()
Answer: This is explained in the (probably) most important page of the uWSGI docs:
<http://uwsgi-docs.readthedocs.org/en/latest/ThingsToKnow.html>
When you fork() the process, the new one will inherit the connection
descriptor. You have to close it in the new forked process, or you can add the
--close-on-exec option to force it automatically
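In the .ini from the question that would be one extra line in the `[uwsgi]` section (option name as given in the quoted docs):

    close-on-exec = true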
|
Python Threading timer not working across suspend
Question: I'm trying to create a program that syncs with a server every 60 seconds. The
code I'm using to do that looks like:
threading.Timer(60, self.sync, [""]).start()
Pretty simple, and works great. The issue is if I decide to suspend the
machine for a period of time and then come back, it doesn't work anymore. It's
as if the timer stopped working. I suspect this has to do with the gap in
real-world time as perceived by the timer, but I'm not sure how to make it
work.
The app I'm making is targeted at OSX, so maybe there is a system-level timer
that I could have access to?
Answer: I don't know if this would work any better, but you could give it a try:
import time
def call_sync(freq, meth):
cur_time = time.time()
while True:
while (time.time() - cur_time) < freq:
yield None
meth()
cur_time = time.time()
def sync():
print("synced")
def main():
timer = call_sync(60, sync)
while True:
time.sleep(1)
next(timer)
if __name__ == '__main__':
main()
|
Trouble using add_row with prettytable
Question: I am trying to format an array using the `prettytable` library. Here is my
code:
from prettytable import PrettyTable
arrayHR = [1,2,3,4,5,6,7,8,9,10]
print ("arrayHR:", arrayHR)
x = PrettyTable(["Heart Rate"])
for row in arrayHR:
x.add_row(row)
This results in the following error:
arrayHR: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
Traceback (most recent call last):
File "C:\Users\aag\Documents\Python\test.py", line 7, in <module>
x.add_row(row)
File "C:\Python33\lib\site-packages\prettytable.py", line 817, in add_row
if self._field_names and len(row) != len(self._field_names):
TypeError: object of type 'int' has no len()
I am sure that I am doing something unspeakably stupid, but I do not know what
that is. Might a kind soul point me in the right direction?
Answer: According to [the
documentation](http://code.google.com/p/prettytable/wiki/Tutorial), `add_row`
is expecting a `list`, _not an`int`_, as an argument. Assuming that you want
the values in `arrayHR` to be the first value in each row, you could do:
x = PrettyTable(["Heart Rate"])
for row in arrayHR:
x.add_row([row])
or adopt the `add_column` example, also from the documentation:
x = PrettyTable()
x.add_column("Heart Rate", arrayHR)
|
Python : copy directory in another directory
Question: I have a directory "D:/INPUT/test1" that I'd like to copy in another directory
"D:/OUTPUT".
I tried many methods but none of them have worked.
For example I tried the method explained at
<https://stackoverflow.com/questions/15034151/copy-directory-contents-into-a-directory-with-python>,
that is to say:
import distutils.core
# copy subdirectory example
fromDirectory = "D:/INPUT/test1"
toDirectory = "D:/OUTPUT"
distutils.dir_util.copy_tree(fromDirectory, toDirectory)
-> The directory "D:/OUTPUT" is created, but there is nothing inside.
Then I tried the shutil.copytree method but I get the same result.
Would you have any suggestions ? Thanks !
Answer:
import shutil, errno
def copyanything(src, dst):
try:
shutil.copytree(src, dst)
except OSError as exc: # python >2.5
if exc.errno == errno.ENOTDIR:
shutil.copy(src, dst)
else: raise
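One caveat worth noting: `shutil.copytree` requires that the destination itself does not exist yet, so copy into a new subdirectory rather than into an existing `D:/OUTPUT`:

    # hypothetical usage for the paths in the question
    copyanything("D:/INPUT/test1", "D:/OUTPUT/test1")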
|
Converting an RGB image to grayscale and manipulating the pixel data in python
Question: I have an RGB image which I want to convert to a grayscale image, so that I
can have one number (maybe between 0 and 1) for each pixel. This gives me a
matrix which has the dimensions equal to that of the pixels of the image. Then
I want to do some manipulations on this matrix and generate a new grayscale
image from this manipulated matrix. How can I do this?
Answer: I frequently work with images as NumPy arrays - I do it like so:
import numpy as np
from PIL import Image
x=Image.open('im1.jpg','r')
x=x.convert('L') #makes it greyscale
y=np.asarray(x.getdata(),dtype=np.float64).reshape((x.size[1],x.size[0]))
<manipulate matrix y...>
y=np.asarray(y,dtype=np.uint8) #if values still in range 0-255!
w=Image.fromarray(y,mode='L')
w.save('out.jpg')
If your array values y are no longer in the range 0-255 after the
manipulations, you could step up to 16-bit TIFFs or simply rescale.
-Aldo
|
Learn Python the Hard Way exercise 36 while loop
Question: The while loop in the dragon_room is not running and I'm not sure why. I get
the '>>' prompt over and over again and the program never exits or brings me
to another room.
from sys import exit
def food_room():
print "This room is full of candy bars. How many do you take?"
next = raw_input("gold room> ")
if "0" in next or "1" in next:
how_much = int(next)
else:
dead("type a number.")
if how_much < 5:
dead("You've starved, sorry")
else:
print "You've survived! congratulations."
exit(0)
def bear_room():
print "There is a bear here."
print "The bear has a bunch of honey."
print "The fat bear is in front of another door."
print "How are you going to move the bear?"
bear_moved = False
while True:
next = raw_input("bear room >")
if next == "take honey":
dead("Apparently the bear is pretty protective of his honey")
elif next == "taunt bear" and not bear_moved:
print "The bear has moved from the door. You can go through it now."
bear_moved = True
elif next == "taunt bear" and bear_moved:
dead("The bear gets pissed off and chews your face off.")
elif next == "open door" and bear_moved:
food_room()
else:
"no idea what that means."
def dragon_room():
print "You've opened the door on a giant, fire-breathing dragon."
print "Do you flee or fight?"
dragon_moved = False
while True:
next = raw_input(">>")
if "flee" in next:
start()
elif next == "fight" and not dragon_moved:
print "You have slayed the dragon! There is a door behind him! you should open the door."
dragon_moved = True
elif next == "fight" and dragon_moved:
dead("killed")
elif next == "open door" and dragon_moved:
food_room()
else:
dead("you're not very good at following instructions.")
def dead(why):
print why, "good job!"
exit(0)
def start():
print "You are starving and in a dark room."
print "There is a door to your right and left."
print "Which one do you take?"
next = raw_input(">")
if next == "left":
bear_room()
elif next == "right":
dragon_room()
else:
dead("you starve.")
start()
Answer: It looks like you've got an indentation error:
while True:
next = raw_input(">>")
## the loop above runs forever -- you probably want to indent all of the
## below code to be inside the loop.
if "flee" in next:
start()
elif next == "fight" and not dragon_moved:
print "You have slayed the dragon! There is a door behind him! you should open the door."
dragon_moved = True
elif next == "fight" and dragon_moved:
dead("killed")
elif next == "open door" and dragon_moved:
food_room()
else:
dead("you're not very good at following instructions.")
|
Removing lines from a text file using python and regular expressions
Question: I have some text files, and I want to remove all lines that begin with the
asterisk (“*”).
Made-up example:
words
*remove me
words
words
*remove me
My current code fails. It follows below:
import re
program = open(program_path, "r")
program_contents = program.readlines()
program.close()
new_contents = []
pattern = r"[^*.]"
for line in program_contents:
match = re.findall(pattern, line, re.DOTALL)
if match.group(0):
new_contents.append(re.sub(pattern, "", line, re.DOTALL))
else:
new_contents.append(line)
print new_contents
This produces ['', '', '', '', '', '', '_', '', '_ ', '', '*', ''], which is
no good.
I’m very much a python novice, but I’m eager to learn. And I’ll eventually
bundle this into a function (right now I’m just trying to figure it out in an
ipython notebook).
Thanks for the help!
Answer: You _don't_ want to use a `[^...]` negative character class; you are matching
_all_ characters except for the `*` or `.` characters now.
`*` is a meta character, you want to escape that to `\*`. The `.` 'match any
character' syntax needs a multiplier to match more than one. Don't use
`re.DOTALL` here; you are operating on a line-by-line basis but don't want to
erase the newline.
There is no need to test first; if there is nothing to replace the original
line is returned.
pattern = r"^\*.*"
for line in program_contents:
new_contents.append(re.sub(pattern, "", line))
Demo:
>>> import re
>>> program_contents = '''\
... words
... *remove me
... words
... words
... *remove me
... '''.splitlines(True)
>>> new_contents = []
>>> pattern = r"^\*.*"
>>> for line in program_contents:
... new_contents.append(re.sub(pattern, "", line))
...
>>> new_contents
['words\n', '\n', 'words\n', 'words\n', '\n']
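Note that this substitution leaves empty lines behind in place of the removed ones. If you want those lines dropped entirely, a plain filter does it without any regex at all:

    new_contents = [line for line in program_contents if not line.startswith('*')]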
|
Python Numerical Integration for Volume of Region
Question: For a program, I need an algorithm to very quickly compute the volume of a
solid. This shape is specified by a function that, given a point P(x,y,z),
returns 1 if P is a point of the solid and 0 if P is not a point of the solid.
I have tried using numpy using the following test:
import numpy
from scipy.integrate import *
def integrand(x,y,z):
if x**2. + y**2. + z**2. <=1.:
return 1.
else:
return 0.
g=lambda x: -2.
f=lambda x: 2.
q=lambda x,y: -2.
r=lambda x,y: 2.
I=tplquad(integrand,-2.,2.,g,f,q,r)
print I
but it fails giving me the following errors:
> Warning (from warnings module): File "C:\Python27\lib\site-
> packages\scipy\integrate\quadpack.py", line 321 warnings.warn(msg,
> IntegrationWarning) IntegrationWarning: The maximum number of subdivisions
> (50) has been achieved. If increasing the limit yields no improvement it is
> advised to analyze the integrand in order to determine the difficulties. If
> the position of a local difficulty can be determined (singularity,
> discontinuity) one will probably gain from splitting up the interval and
> calling the integrator on the subranges. Perhaps a special-purpose
> integrator should be used.
>
> Warning (from warnings module): File "C:\Python27\lib\site-
> packages\scipy\integrate\quadpack.py", line 321 warnings.warn(msg,
> IntegrationWarning) IntegrationWarning: The algorithm does not converge.
> Roundoff error is detected in the extrapolation table. It is assumed that
> the requested tolerance cannot be achieved, and that the returned result (if
> full_output = 1) is the best which can be obtained.
>
> Warning (from warnings module): File "C:\Python27\lib\site-
> packages\scipy\integrate\quadpack.py", line 321 warnings.warn(msg,
> IntegrationWarning) IntegrationWarning: The occurrence of roundoff error is
> detected, which prevents the requested tolerance from being achieved. The
> error may be underestimated.
>
> Warning (from warnings module): File "C:\Python27\lib\site-
> packages\scipy\integrate\quadpack.py", line 321 warnings.warn(msg,
> IntegrationWarning) IntegrationWarning: The integral is probably divergent,
> or slowly convergent.
So, naturally, I looked for "special-purpose integrators", but could not find
any that would do what I needed.
Then, I tried writing my own integration using the Monte Carlo method and
tested it with the same shape:
import random
# Monte Carlo Method
def get_volume(f,(x0,x1),(y0,y1),(z0,z1),prec=0.001,init_sample=5000):
xr=(x0,x1)
yr=(y0,y1)
zr=(z0,z1)
vdomain=(x1-x0)*(y1-y0)*(z1-z0)
def rand((p0,p1)):
return p0+random.random()*(p1-p0)
vol=0.
points=0.
s=0. # sum part of variance of f
err=0.
percent=0
while err>prec or points<init_sample:
p=(rand(xr),rand(yr),rand(zr))
rpoint=f(p)
vol+=rpoint
points+=1
s+=(rpoint-vol/points)**2
if points>1:
err=vdomain*(((1./(points-1.))*s)**0.5)/(points**0.5)
if err>0:
if int(100.*prec/err)>=percent+1:
percent=int(100.*prec/err)
print percent,'% complete\n error:',err
print int(points),'points used.'
return vdomain*vol/points
f=lambda (x,y,z): ((x**2)+(y**2)<=4.) and ((z**2)<=9.) and ((x**2)+(y**2)>=0.25)
print get_volume(f,(-2.,2.),(-2.,2.),(-2.,2.))
but this works too slowly. For this program I will be using this numerical
integration about 100 times or so, and I will also be doing it on larger
shapes, which will take minutes if not an hour or two at the rate it goes now,
not to mention that I want a better precision than 2 decimal places.
I have tried implementing a MISER Monte Carlo method, but was having some
difficulties and I'm still unsure how much faster it would be.
So, I am asking if there are any libraries that can do what I am asking, or if
there are any better algorithms which work several times faster (for the same
accuracy). Any suggestions are welcome, as I've been working on this for quite
a while now.
EDIT:
If I cannot get this working in Python, I am open to switching to any other
language that is both compilable and has relatively easy GUI functionality.
Any suggestions are welcome.
Answer: Your function is not a continuous function, so I think it's difficult to do the
integration with adaptive quadrature.
How about:
import numpy as np
def sphere(x,y,z):
return x**2 + y**2 + z**2 <= 1
x, y, z = np.random.uniform(-2, 2, (3, 2000000))
sphere(x, y, z).mean() * (4**3), 4/3.0*np.pi
output:
(4.1930560000000003, 4.1887902047863905)
Or VTK:
from tvtk.api import tvtk
n = 151
r = 2.0
x0, x1 = -r, r
y0, y1 = -r, r
z0, z1 = -r, r
X,Y,Z = np.mgrid[x0:x1:n*1j, y0:y1:n*1j, z0:z1:n*1j]
s = sphere(X, Y, Z)
img = tvtk.ImageData(spacing=((x1-x0)/(n-1), (y1-y0)/(n-1), (z1-z0)/(n-1)),
origin=(x0, y0, z0), dimensions=(n, n, n))
img.point_data.scalars = s.astype(float).ravel()
blur = tvtk.ImageGaussianSmooth(input=img)
blur.set_standard_deviation(1)
contours = tvtk.ContourFilter(input = blur.output)
contours.set_value(0, 0.5)
mp = tvtk.MassProperties(input = contours.output)
mp.volume, mp.surface_area
output:
4.186006622559839, 12.621690438955586
|
Change default options in pandas
Question: I'm wondering if there's any way to change the default display options for
pandas. I'd like to change the display formatting as well as the display width
each time I run python, eg:
pandas.options.display.width = 150
I see the defaults are hard-coded in `pandas.core.config_init`. Is there some
way in pandas to do this properly? Or if not, is there some way to set up
ipython at least to change the config each time I import pandas? Only thing I
can think of is making my own mypandas library that wraps pandas with some
extra commands issued each time it's loaded. Any better ideas?
Answer: As described here, there are [iPython config
files](http://ipython.org/ipython-
doc/rel-0.10.2/html/config/customization.html#ipy-user-conf-py):
# Most of your config files and extensions will probably start
# with this import
import IPython.ipapi
ip = IPython.ipapi.get()
# You probably want to uncomment this if you did %upgrade -nolegacy
# import ipy_defaults
import os
import pandas
def main():
#ip.dbg.debugmode = True
ip.dbg.debug_stack()
# uncomment if you want to get ipython -p sh behaviour
# without having to use command line switches
import ipy_profile_sh
import jobctrl
# Configure your favourite editor?
# Good idea e.g. for %edit os.path.isfile
#import ipy_editors
# Choose one of these:
#ipy_editors.scite()
#ipy_editors.scite('c:/opt/scite/scite.exe')
#ipy_editors.komodo()
#ipy_editors.idle()
# ... or many others, try 'ipy_editors??' after import to see them
# Or roll your own:
#ipy_editors.install_editor("c:/opt/jed +$line $file")
o = ip.options
# An example on how to set options
#o.autocall = 1
o.system_verbose = 0
#import_all("os sys")
#execf('~/_ipython/ns.py')
# -- prompt
# A different, more compact set of prompts from the default ones, that
# always show your current location in the filesystem:
#o.prompt_in1 = r'\C_LightBlue[\C_LightCyan\Y2\C_LightBlue]\C_Normal\n\C_Green|\#>'
#o.prompt_in2 = r'.\D: '
#o.prompt_out = r'[\#] '
# Try one of these color settings if you can't read the text easily
# autoexec is a list of IPython commands to execute on startup
#o.autoexec.append('%colors LightBG')
#o.autoexec.append('%colors NoColor')
o.autoexec.append('%colors Linux')
pandas.options.display.width = 150
# some config helper functions you can use
def import_all(modules):
""" Usage: import_all("os sys") """
for m in modules.split():
ip.ex("from %s import *" % m)
def execf(fname):
""" Execute a file in user namespace """
ip.ex('execfile("%s")' % os.path.expanduser(fname))
main()
Probably better to make [separate Python profiles](http://ipython.org/ipython-
doc/rel-0.10.2/html/config/customization.html#ipython-profiles). (The code is
untested).
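With recent IPython versions a simpler route (a sketch; the path is the default profile location) is a plain Python file in the profile's startup directory that just sets the options:

    # e.g. ~/.ipython/profile_default/startup/00-pandas.py
    import pandas
    pandas.set_option('display.width', 150)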
|
Making Text look like it's being typed in the Python Shell
Question: So far the only method I've come up with is clearing the console then
displaying the string with one more letter. Suggestions?
Current Code:
import os
import time
In=list(input("Text to be displayed: "))
Out=""
Move=[]
print("-")
while len(Move)<len(In):
Move+=["/","|","\\","-"]
for a in range(0,len(In)):
if In[a]==" ":
x=.3
elif In[a] in ",';:!@#$%^&*()_+-=[]{}":
x=.25
elif In[a] in "aeiouzxcvbnm1234567890AEIOUZXCVBNM":
x=.2
else:
x=.15
os.system('cls')
Out+=In[a]
print(Out+Move[a])
time.sleep(x)
os.system('cls')
print(Out)
input()
Answer: Just use `print()` with `end=''` to stay on the same line, and `flush = True`
for the timing:
import time
text = input('Text to be displayed: ')
for i in text:
print(i, end='', flush=True)
time.sleep(0.1)
print('\n')
This runs as:
bash-3.2$ python3.3 test.py
Text to be displayed: This is a test
This is a test
bash-3.2$
## Addressing your comment, use the below code:
import time
Move=["/","|","\\","-",""]
text = input('Text to be displayed: ')
for i in range(len(text)):
print(text[i]+Move[i%5], end='', flush=True)
time.sleep(0.1)
print('\n')
|
CUDA local array initalization modifies program output
Question: I have a program which (for now) calculates values of two functions in random
points on the GPU, sends these values back to the host, and then visualizes
them. This is what I get: some nice semi-random points (scatter plot omitted).
Now, if I modify my kernel code and add the local array initialization code at
the very end,
__global__ void optymalize(curandState * state, float* testPoints)
{
int ind=blockDim.x*blockIdx.x+threadIdx.x;
int step=blockDim.x*gridDim.x;
for(int i=ind*2;i<NOF*TEST_POINTS;i+=step*2)
{
float* x=generateX(state);
testPoints[i]=ZDT_f1(x);
testPoints[i+1]=ZDT_f2(x);
}
//works fine with 'new'
//float* test_array=new float[2];
float test_array[2]={1.0f,2.0f};
}
I get something like this every time (scatter plot omitted):
Does anyone know the cause of this behavior? All the drawn points are computed
BEFORE test_array is initialized, yet they are affected by it. It doesn't
happen when I initialize test_array before the 'for' loop.
Host/device code:
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include "curand_kernel.h"
#include "device_functions.h"
#include <random>
#include <iostream>
#include <time.h>
#include <fstream>
using namespace std;
#define XSIZE 5
#define TEST_POINTS 100
#define NOF 2
#define BLOCK_COUNT 64
#define THR_COUNT 128
#define POINTS_PER_THREAD (NOF*TEST_POINTS+THR_COUNT*BLOCK_COUNT-1)/(THR_COUNT*BLOCK_COUNT)
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, char *file, int line, bool abort=false)
{
if (code != cudaSuccess)
{
fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
if (abort) exit(code);
}
}
__device__ float g(float* x)
{
float tmp=1;
for(int i=1;i<XSIZE;i++)
tmp*=x[i];
return 1+9*(tmp/(XSIZE-1));
}
__device__ float ZDT_f1(float* x)
{
return x[0];
}
__device__ float ZDT_f2(float* x)
{
float gp=g(x);
return gp*(1-sqrtf(x[0]/gp));
}
__device__ bool oneDominatesTwo(float* x1, float* x2)
{
for(int i=0;i<XSIZE;i++)
if(x1[i]>=x2[i])
return false;
return true;
}
__device__ float* generateX(curandState* globalState)
{
int ind = threadIdx.x;
float x[XSIZE];
for(int i=0;i<XSIZE;i++)
x[i]=curand_uniform(&globalState[ind]);
return x;
}
__global__ void setup_kernel ( curandState * state, unsigned long seed )
{
int id = blockDim.x*blockIdx.x+threadIdx.x;
curand_init ( seed, id, 0, &state[id] );
}
__global__ void optymalize(curandState * state, float* testPoints)
{
int ind=blockDim.x*blockIdx.x+threadIdx.x;
int step=blockDim.x*gridDim.x;
for(int i=ind*2;i<NOF*TEST_POINTS;i+=step*2)
{
float* x=generateX(state);
testPoints[i]=ZDT_f1(x);
testPoints[i+1]=ZDT_f2(x);
}
__syncthreads();
//float* test_array=new float[2];
//test_array[0]=1.0f;
//test_array[1]=1.0f;
float test_array[2]={1.0f,1.0f};
}
void saveResultToFile(float* result)
{
ofstream resultFile;
resultFile.open ("result.txt");
for(unsigned int i=0;i<NOF*TEST_POINTS;i+=NOF)
{
resultFile << result[i] << " "<<result[i+1]<<"\n";
}
resultFile.close();
}
int main()
{
float* dev_fPoints;
float* fPoints=new float[NOF*TEST_POINTS];
gpuErrchk(cudaMalloc((void**)&dev_fPoints, NOF * TEST_POINTS * sizeof(float)));
curandState* devStates;
gpuErrchk(cudaMalloc(&devStates,THR_COUNT*sizeof(curandState)));
cudaEvent_t start;
gpuErrchk(cudaEventCreate(&start));
cudaEvent_t stop;
gpuErrchk(cudaEventCreate(&stop));
gpuErrchk(cudaThreadSetLimit(cudaLimitMallocHeapSize, 128*1024*1024));
gpuErrchk(cudaEventRecord(start, NULL));
setup_kernel<<<BLOCK_COUNT, THR_COUNT>>>(devStates,unsigned(time(NULL)));
gpuErrchk(cudaDeviceSynchronize());
gpuErrchk(cudaGetLastError());
optymalize<<<BLOCK_COUNT,THR_COUNT>>>(devStates, dev_fPoints);
gpuErrchk(cudaDeviceSynchronize());
gpuErrchk(cudaGetLastError());
gpuErrchk(cudaMemcpy(fPoints, dev_fPoints, NOF * TEST_POINTS * sizeof(float), cudaMemcpyDeviceToHost));
gpuErrchk(cudaEventRecord(stop, NULL));
gpuErrchk(cudaEventSynchronize(stop));
float msecTotal = 0.0f;
cudaEventElapsedTime(&msecTotal, start, stop);
cout<<"Kernel execution time: "<<msecTotal<< "ms"<<endl;
saveResultToFile(fPoints);
system("start pythonw plot_data.py result.txt");
cudaFree(dev_fPoints);
cudaFree(devStates);
system("pause");
return 0;
}
Plot script code:
import matplotlib.pyplot as plt;
import sys;
if len(sys.argv)<2:
print("Usage: python PlotScript <filename>");
sys.exit(0);
path=sys.argv[1];
x=[]
y=[]
with open(path,"r") as f:
for line in f:
vals=line.strip().split(" ");
x.append(vals[0]);
y.append(vals[1]);
plt.plot(x,y,'ro')
plt.show();
Answer: The basic problem was in code you originally didn't show in your question,
specifically this:
__device__ float* generateX(curandState* globalState)
{
int ind = threadIdx.x;
float x[XSIZE];
for(int i=0;i<XSIZE;i++)
x[i]=curand_uniform(&globalState[ind]);
return x;
}
Returning an address or reference to a local scope variable from a function
results in undefined behaviour. It is only valid to use `x` by reference or
value within `generateX` while it is in scope. There should be no surprise
that adding or moving other local scope variables around within the kernel
changes the kernel behaviour.
Fix this function so it populates an array passed by reference, rather than
returning the address of a local scope array. And pay attention to compiler
warnings - there will have been one for this which should have immediately set
off alarm bells that there was something wrong.
|
Using certificates in urllib3
Question: I'm a Python newbie. I'm using urllib3 to talk to an api. The reason I'm using
this and not requests is that I'd like to host my app on GAE. My app uses
certicates. When I post data, I get the following error:
TypeError: __init__() got an unexpected keyword argument 'cert_reqs'
How can I include certs in my urlopen call? A snippet of code follows
CA_CERTS = ('client-2048.crt', 'client-2048.key')
http = urllib3.PoolManager()
r = http.urlopen('POST', url, body=payload, headers={'X-Application': '???', 'Content-Type': 'application/x-www-form-urlencoded'}, cert_reqs='REQUIRED', ca_certs=CA_CERTS)
print r.status, r.data
Answer: You can drop down to the
[HTTPSConnectionPool](http://urllib3.readthedocs.org/en/latest/pools.html#urllib3.connectionpool.HTTPSConnectionPool)
level which you may do directly:
from urllib3.connectionpool import HTTPSConnectionPool
conn = HTTPSConnectionPool('httpbin.org', ca_certs='/etc/pki/tls/cert.pem', cert_reqs='REQUIRED')
Or, more simply or via the `connection_from_url()` helper function:
conn = urllib3.connection_from_url('https://httpbin.org', ca_certs='/etc/pki/tls/cert.pem', cert_reqs='REQUIRED')
Note that `ca_certs` is the file name of a certificate bundle used to validate
the remote server's certificate. Use `cert_file` and `key_file` to present
your client certificate to the remote server:
conn = urllib3.connection_from_url('https://httpbin.org', cert_file='client-2048.crt', key_file='client-2048.key', ca_certs='/etc/pki/tls/cert.pem', cert_reqs='REQUIRED')
Then issue your request:
response = conn.request('POST', 'https://httpbin.org/post', fields={'field1':1234, 'field2':'blah'})
>>> print response.data
{
"args": {},
"data": "",
"files": {},
"form": {
"field1": "1234",
"field2": "blah"
},
"headers": {
"Accept-Encoding": "identity",
"Connection": "close",
"Content-Length": "220",
"Content-Type": "multipart/form-data; boundary=048b02ad15274fc485c2cb2b6a280034",
"Host": "httpbin.org",
"X-Request-Id": "92fbc1da-d83e-439c-9468-65d27492664f"
},
"json": null,
"origin": "220.233.14.203",
"url": "http://httpbin.org/post"
}
|
How to get Riak 2.0 security working with riak-python-client?
Question: Riak 2.0 is installed on Ubuntu 14.04 with default settings. The Riak Python
client is taken from the dev branch:
<https://github.com/basho/riak-python-client/tree/feature/bch/security>
**Steps I made:**
1.Enable security:
> riak-admin security enable
2.Check status:
> riak-admin security status
> Enabled
3.Add example user, group and apply some basic permissions
4.Overall it looks like following:
**user:**
riak-admin security print-users
+----------+---------------+----------------------------------------+------------------------------+
| username | member of | password | options |
+----------+---------------+----------------------------------------+------------------------------+
| user_sec | group_sec |ce055fe0a2d621a650c293a56996ee504054ea1d| [] |
+----------+---------------+----------------------------------------+------------------------------+
**user's grants:**
riak-admin security print-grants user_sec
Inherited permissions (user/user_sec)
+--------------------+----------+----------+----------------------------------------+
| group | type | bucket | grants |
+--------------------+----------+----------+----------------------------------------+
| group_sec | default | * | riak_kv.get |
| group_sec |bucket_sec| * | riak_kv.get |
+--------------------+----------+----------+----------------------------------------+
Cumulative permissions (user/user_sec)
+----------+----------+----------------------------------------+
| type | bucket | grants |
+----------+----------+----------------------------------------+
| default | * | riak_kv.get |
|bucket_sec| * | riak_kv.get |
+----------+----------+----------------------------------------+
**auth sources:**
riak-admin security print-sources
+--------------------+------------+----------+----------+
| users | cidr | source | options |
+--------------------+------------+----------+----------+
| user_sec | 0.0.0.0/32 | password | [] |
| user_sec |127.0.0.1/32| trust | [] |
+--------------------+------------+----------+----------+
**simple python script I'm trying to run (on the same host where Riak is
running):**
import riak
from riak.security import SecurityCreds
pbc_port = 8002
riak_host = "127.0.0.1"
creds = riak.security.SecurityCreds('user_sec', 'secure_password')
riak_client = riak.RiakClient(pb_port=pbc_port, host=riak_host, protocol='pbc', security_creds=creds)
bucket = riak_client.bucket('test')
data = bucket.get("42")
print data.data
stack trace I'm getting: python riak_test.py
Traceback (most recent call last):
File "riak_test.py", line 8, in <module>
data = bucket.get("42")
File "/usr/local/lib/python2.7/dist-packages/riak/bucket.py", line 214, in get
return obj.reload(r=r, pr=pr, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/riak/riak_object.py", line 307, in reload
self.client.get(self, r=r, pr=pr, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 184, in wrapper
return self._with_retries(pool, thunk)
File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 126, in _with_retries
return fn(transport)
File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 182, in thunk
return fn(self, transport, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py", line 382, in get
return transport.get(robj, r=r, pr=pr, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py", line 148, in get
if self.quorum_controls() and pr:
File "/usr/local/lib/python2.7/dist-packages/riak/transports/feature_detect.py", line 102, in quorum_controls
return self.server_version >= versions[1]
File "/usr/local/lib/python2.7/dist-packages/riak/util.py", line 148, in __get__
value = self.fget(obj)
File "/usr/local/lib/python2.7/dist-packages/riak/transports/feature_detect.py", line 189, in server_version
return LooseVersion(self._server_version())
File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py", line 101, in _server_version
return self.get_server_info()['server_version']
File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py", line 119, in get_server_info
expect=MSG_CODE_GET_SERVER_INFO_RESP)
File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 51, in _request
return self._recv_msg(expect)
File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 137, in _recv_msg
raise RiakError(err.errmsg)
riak.RiakError: 'Security is enabled, please STARTTLS first'
When security is disabled the same script works perfectly fine:
python riak_test.py
{u'question': u"what's the sense of universe?"}
I also tried to generate example certificates using this tool:
<https://github.com/basho-labs/riak-ruby-ca> and set them in riak.conf:
grep ssl /etc/riak/riak.conf
## with the ssl config variable, for example:
ssl.certfile = $(platform_etc_dir)/server.crt
## Default key location for https can be overridden with the ssl
ssl.keyfile = $(platform_etc_dir)/server.key
## with the ssl config variable, for example:
ssl.cacertfile = $(platform_etc_dir)/ca.crt
and use ca.crt in python script:
creds = riak.security.SecurityCreds('user_sec', 'secure_password', 'ca.crt')
It didn't change anything. I'm still getting the same exception. I guess this
problem might be trivial, but I don't have any clue for now.
**Update:**
I was using the wrong parameter name. A few commits ago it was
**security_creds**; now it's called **credentials**. When I fixed this in my
script, the SSL handshake was initiated. Then the next exceptions were caused
by wrong SecurityCreds initialization. The constructor uses named params, so it
should be:
creds = riak.security.SecurityCreds(username='user_sec', password='secure_password', cacert_file='ca.crt')
The handshake is initiated, but it's failing on this call:
ssl_socket.do_handshake()
from riak/transport/pbc/connection.py (line 134)
I'm getting these 2 errors (randomly):
File "/home/gta/riak-python-client/riak/transports/pbc/connection.py", line 77, in _init_security
self._ssl_handshake()
File "/home/gta/riak-python-client/riak/transports/pbc/connection.py", line 145, in _ssl_handshake
raise e
OpenSSL.SSL.SysCallError: (104, 'ECONNRESET')
File "/home/gta/riak-python-client/riak/transports/pbc/connection.py", line 77, in _init_security
self._ssl_handshake()
File "/home/gta/riak-python-client/riak/transports/pbc/connection.py", line 145, in _ssl_handshake
raise e
OpenSSL.SSL.SysCallError: (-1, 'Unexpected EOF')
I'm also observing errors in Riak's logs (/var/log/riak/error.log):
2014-06-02 15:09:33.954 [error] <0.1995.1> gen_fsm <0.1995.1> in state wait_for_tls terminated with reason: {error,{startls_failed,{certfile,badarg}}}
2014-06-02 15:09:33.955 [error] <0.1995.1> CRASH REPORT Process <0.1995.1> with 0 neighbours exited with reason: {error,{startls_failed,{certfile,badarg}}} in gen_fsm:terminate/7 line 622
2014-06-02 15:09:33.955 [error] <0.28750.0> Supervisor riak_api_pb_sup had child undefined started with {riak_api_pb_server,start_link,undefined} at <0.1995.1> exit with reason {error,{startls_failed,{certfile,badarg}}} in context child_terminated
This situation happens with both approaches: cacert (ca.crt) and client cert
(client.crt)/key (client.key). I tried various combinations of keys:
* keys from tests/resource
* keys generated with riak-ruby-ca script
* keys generated with `make` in tests/resource
* keys generated with helper script from pyOpenSSL
* ...none of them work for me
I'm using **riak_2.0.0beta1-1_amd64.deb**
Answer: Thanks for the enthusiastic testing! The branch you pulled is an unreviewed
work in progress and I've added some updates today.
I would try again with both the very latest 2.0.0 beta and the changes made to
this branch. There are some test certs in `riak/tests/resources` which would
be useful to get started testing your configuration.
You'll need to name your **cacert** parameter, now, too since several other
options have been added.
The basic setup looks pretty good. Try the latest and let me know how it works
for you.
|
TypeError while running Django tests
Question: I'm new to Django and Python, but I'm currently working on a project with
both. Right now I'm trying to get my tests to work. I wrote these simple tests
about 3 months ago and I'm 100% sure they worked back then. Also, when I run
the server and try different searches manually I get the expected results, so
I know the view is at least correct (I know it's horrible and slow, I will work
on fixing that). I have searched for this error but the only related thing I
found was that Ubuntu was my problem, but I have tried both on Ubuntu and
Windows 7. I have no idea what happened between then and now, but they give me
the following error:
EDIT: I have no idea why all my indentation is being ignored :| Oh well, after
some suggestions I changed a couple of things and now I get a failure like
this:
enrique@enrique-XPS-L521X:~/Documents/Reeduq$ python manage.py test Search
Creating test database for alias 'default'...
FF
======================================================================
FAIL: test_private_courses_search (Search.tests.SearchTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/enrique/Documents/Reeduq/Search/tests.py", line 18, in test_private_courses_search
self.assertEqual(response.context['found_entries'],[])
AssertionError: [] != []
======================================================================
FAIL: test_public_course_search (Search.tests.SearchTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/enrique/Documents/Reeduq/Search/tests.py", line 29, in test_public_course_search
self.assertEqual(response.context['found_entries'],['<Course: test>'])
AssertionError: [<Course: test>] != ['<Course: test>']
----------------------------------------------------------------------
Ran 2 tests in 0.018s
FAILED (failures=2)
Destroying test database for alias 'default'...
I read that this means that I don't have a `__unicode__` function or something
like that, but my Course model and User model have one each, so I'm not sure
what to make of it.
This is the test code:
from django.test import TestCase
from django.shortcuts import render, get_object_or_404, redirect, render_to_response, redirect
from django.core.urlresolvers import reverse
from Search.views import search
from Course.models import *
from Reeduq_Home.models import *
class SearchTests(TestCase):
def test_private_courses_search(self):
"""
a search should not return private courses
"""
new_user=EndUser.objects.create(username="test", first_name="test", last_name="test", email="[email protected]", password="test", account_type="E")
Course.objects.create(name="test", instructor=new_user, description="test", tags="test", start_date="2014-3-9", end_date="2014-3-10", public=False)
response=self.client.get(reverse('Search:search', args=("test",)))
self.assertEqual(response.status_code, 200)
self.assertQuerysetEqual(response.context['found_entries'],[])
def test_public_course_search(self):
"""
a search should return public courses
"""
new_user=EndUser.objects.create(username="test", first_name="test", last_name="test", email="[email protected]", password="test", account_type="E")
Course.objects.create(name="test", instructor=new_user, description="test", tags="test, wat, wait", start_date="2014-3-9", end_date="2014-3-10", public=True)
response=self.client.get(reverse('Search:search', args=("wat",)))
self.assertEqual(response.status_code, 200)
self.assertQuerysetEqual(response.context['found_entries'],['<Course: test>'])
This is the view code:
def search(request, query):
query=query.replace('_', ' ')
found_entries = []
objects = Course.objects.all()
for p in objects:
a=[x.strip() for x in p.tags.split(',')]
for i in a:
if i == query:
if p.public:
found_entries.append(p.id)
results = Course.objects.all().filter(pk__in=found_entries)
return render_to_response('search.html',
{ 'query': query, 'found_entries': results,},)
Thank you for your help.
Answer: Not sure where you're getting the unicode comment.
`response.context['found_entries']` is, behind the scenes, doing
`response.context.__getitem__('found_entries')`, except response.context is
None. As @AlexShkop points out, it sounds like your response isn't what you
expect, probably because you're actually getting a 302 redirect (or 401/403).
You could try @alecxe's suggestion, or use the built in client.login
capability (after creating a dummy user)
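As for the two failures after your edit: `assertEqual` compares `Course` objects against their repr strings, which can never match. `assertQuerysetEqual` (which the pasted tests already use; the traceback suggests the file on disk still has older `assertEqual` calls) applies `repr` to each object before comparing:

    self.assertQuerysetEqual(response.context['found_entries'], [])
    self.assertQuerysetEqual(response.context['found_entries'], ['<Course: test>'])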
|
Pyalgotrade Tutorial Attribute Error
Question: I have been googling for a while now, but am still unable to find a solution,
or even determine the problem, honestly.
My installation of Python and Pyalgotrade is correct, as verified by the
successful imports.
Nonetheless, I can't manage to run the example code in the tutorial, it always
throws:
AttributeError: MyStrategy instance has no attribute 'info'
Here's the example code:
from pyalgotrade import strategy
from pyalgotrade.barfeed import yahoofeed
class MyStrategy(strategy.BacktestingStrategy):
    def __init__(self, feed, instrument):
        strategy.BacktestingStrategy.__init__(self, feed)
        self.__instrument = instrument

    def onBars(self, bars):
        bar = bars[self.__instrument]
        self.info(bar.getClose())

# Load the yahoo feed from the CSV file
feed = yahoofeed.Feed()
feed.addBarsFromCSV("orcl", "orcl-2000.csv")

# Evaluate the strategy with the feed's bars.
myStrategy = MyStrategy(feed, "orcl")
myStrategy.run()
And the iPython Notebook output:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-f786d1b471f7> in <module>()
18 # Evaluate the strategy with the feed's bars.
19 myStrategy = MyStrategy(feed, "orcl")
---> 20 myStrategy.run()
/usr/local/lib/python2.7/site-packages/pyalgotrade/strategy/__init__.pyc in run(self)
398 self.onStart()
399
--> 400 self.__dispatcher.run()
401
402 if self.__feed.getCurrentBars() != None:
/usr/local/lib/python2.7/site-packages/pyalgotrade/observer.pyc in run(self)
139 subject.start()
140
--> 141 while not self.__stopped and self.__dispatch():
142 pass
143 finally:
/usr/local/lib/python2.7/site-packages/pyalgotrade/observer.pyc in __dispatch(self)
131 nextDateTime = subject.peekDateTime()
132 if nextDateTime == None or nextDateTime == smallestDateTime:
--> 133 subject.dispatch()
134 return ret
135
/usr/local/lib/python2.7/site-packages/pyalgotrade/feed/__init__.pyc in dispatch(self)
95 dateTime, values = self.getNextValuesAndUpdateDS()
96 if dateTime != None:
---> 97 self.__event.emit(dateTime, values)
98
99 def getKeys(self):
/usr/local/lib/python2.7/site-packages/pyalgotrade/observer.pyc in emit(self, *parameters)
51 self.__emitting = True
52 for handler in self.__handlers:
---> 53 handler(*parameters)
54 self.__emitting = False
55 self.__applyChanges()
/usr/local/lib/python2.7/site-packages/pyalgotrade/strategy/__init__.pyc in __onBars(self, dateTime, bars)
386
387 # 1: Let the strategy process current bars and place orders.
--> 388 self.onBars(bars)
389
390 # 2: Place the necessary orders for positions marked to exit on session close.
<ipython-input-1-f786d1b471f7> in onBars(self, bars)
10 def onBars(self, bars):
11 bar = bars[self.__instrument]
---> 12 self.info(bar.getClose())
13
14 # Load the yahoo feed from the CSV file
AttributeError: MyStrategy instance has no attribute 'info'
Has anyone at least a hint on what the problem could be?
Answer: Which version of PyAlgoTrade are you using?
import pyalgotrade
print pyalgotrade.__version__
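If it turns out that your installed version predates the strategy logging
helpers, upgrading PyAlgoTrade is the usual fix. Failing that, a hedged
workaround (it uses only the standard library, not PyAlgoTrade's API) is to
log through Python's logging module:

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("MyStrategy")

class MyStrategy(strategy.BacktestingStrategy):
    def __init__(self, feed, instrument):
        strategy.BacktestingStrategy.__init__(self, feed)
        self.__instrument = instrument

    def onBars(self, bars):
        bar = bars[self.__instrument]
        # stdlib logger instead of the newer self.info() helper
        logger.info(bar.getClose())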
|
Ipython 2.0 notebook, matplotlib, --pylab
Question: In the past I have run IPython with the --pylab option. The only way
I have found to get the notebook to work without getting the message about the
ill effects of --pylab is to open the notebooks and then
> %matplotlib
> import matplotlib.pylab as pl
and then do
pl.plot(x,y)
I would like to put the two commands above in my 00-import.py rather than
typing them at the beginning of each notebook. Is there a better way to do
this?
Answer: Write in a terminal
ipython locate
this command will return a directory. Inside that directory you will find the
folder "profile_default/startup/"; any Python script located in that directory
will be run when IPython starts. If you write a script with the following
lines
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
you will get numpy, matplotlib and inline figures.
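One caveat (my reading of how IPython handles startup files, so treat it as an
assumption): `%matplotlib inline` is magic syntax, not plain Python, so it only
works in startup files with the `.ipy` extension. In a regular `.py` startup
script you can invoke the magic through the shell object instead:

import numpy as np
import matplotlib.pyplot as plt

# plain-Python equivalent of the %matplotlib inline magic; get_ipython()
# is available because startup scripts run inside the IPython process
get_ipython().magic('matplotlib inline')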
Anyway, probably it is a good idea to read
[this](http://carreau.github.io/posts/10-No-PyLab-Thanks.ipynb.html).
|
RichIPythonWidget import side effect on pkgutil module
Question: Apparently, importing RichIPythonWidget has an impact on module pkgutil.
# Test environment:
IPython version: 2.1.0 python versions: 2.7 & 2.7.6
# Code showing issue:
import os
import pkgutil
print 'Before call 1 ... '
pkgutil.find_loader('os') # call 1
print 'After call 1 ... '
from IPython.qt.console.rich_ipython_widget import RichIPythonWidget
print 'Before call 2 ... '
pkgutil.find_loader('os') # call 2
print 'After call 2 ... '
# Output:
Before call 1 ...
After call 1 ...
Before call 2 ...
Traceback (most recent call last):
File "issue.py", line 11, in <module>
pkgutil.find_loader('os') # call 2
File "/u/bl/Dev/work/inkqt/opt.SuSE-11.4/lib64/python2.7/pkgutil.py", line 467, in find_loader
loader = importer.find_module(fullname)
TypeError: find_module() takes exactly 3 arguments (2 given)
As far as I understand, my issue seems to be related to objects added to
**sys.meta_path** by **IPython** but used with the wrong interface by the
**pkgutil** module. But it's hard for me to decide which one is at fault...
Any workaround would be greatly appreciated.
Thanks,
-benoit
Answer: This is most likely due to the Qt ImportDenier installed by IPython in
IPython/external/qt_loaders.py. The find_module method does not respect the
PEP302 signature of `finder.find_module(fullname, path=None)`, where the path
is optional.
I submitted an issue on IPython:
<https://github.com/ipython/ipython/issues/5932>.
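Until a fixed IPython ships, one possible workaround (a sketch; it assumes the
offending finder is the `ImportDenier` class named above) is to wrap it in a
PEP 302-compliant shim before calling pkgutil:

import sys

class PathTolerantFinder(object):
    # Forward find_module() with an explicit path, so finders that require
    # the third argument still work when pkgutil omits it.
    def __init__(self, finder):
        self._finder = finder

    def find_module(self, fullname, path=None):
        return self._finder.find_module(fullname, path)

sys.meta_path = [PathTolerantFinder(f) if type(f).__name__ == 'ImportDenier' else f
                 for f in sys.meta_path]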
|
Python/wxPython: How to get text display in a second frame from the main frame
Question: I am new to Python/wxPython. I created two frames using
`wxFormBuilder`. The purpose is to add two numbers and display the result on
both frames on the `OnAdd` button click.
I have done all I could, but with no success.
My problem is how to get the Final_Result to display on the second frame,
which gets called when the add button is pressed, as per the code below.
**Note:** the code is in 3 separate files (q1.py, q2.py, and q3.py). q2.py is
the main running file, while q1.py and q3.py create the frames respectively,
as generated from wxFormBuilder.
**q1.py**
import wx
import wx.xrc
class MyFrame1 ( wx.Frame ):

    def __init__( self, parent ):
        wx.Frame.__init__ ( self, parent, id = wx.ID_ANY, title = wx.EmptyString, pos = wx.DefaultPosition, size = wx.Size( 500,300 ), style = wx.CAPTION|wx.CLOSE_BOX|wx.MINIMIZE_BOX|wx.SYSTEM_MENU|wx.TAB_TRAVERSAL )
        self.SetSizeHintsSz( wx.DefaultSize, wx.DefaultSize )
        Sizer1 = wx.BoxSizer( wx.VERTICAL )
        Sizer2 = wx.GridSizer( 0, 2, 0, 0 )
        self.val1 = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
        self.val1.SetFont( wx.Font( 30, 70, 90, 90, False, wx.EmptyString ) )
        Sizer2.Add( self.val1, 1, wx.ALL|wx.EXPAND, 5 )
        self.val2 = wx.TextCtrl( self, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
        self.val2.SetFont( wx.Font( 30, 70, 90, 90, False, wx.EmptyString ) )
        Sizer2.Add( self.val2, 1, wx.ALL|wx.EXPAND, 5 )
        self.Calc = wx.Button( self, wx.ID_ANY, u"Add", wx.DefaultPosition, wx.DefaultSize, 0 )
        self.Calc.SetFont( wx.Font( 30, 70, 90, 90, False, wx.EmptyString ) )
        Sizer2.Add( self.Calc, 1, wx.ALL|wx.EXPAND, 5 )
        self.result = wx.StaticText( self, wx.ID_ANY, u"Result", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_CENTRE )
        self.result.Wrap( -1 )
        self.result.SetFont( wx.Font( 30, 70, 90, 90, False, wx.EmptyString ) )
        Sizer2.Add( self.result, 1, wx.ALL|wx.EXPAND, 5 )
        Sizer1.Add( Sizer2, 1, wx.EXPAND, 5 )
        self.SetSizer( Sizer1 )
        self.Layout()
        self.Centre( wx.BOTH )

        # Connect Events
        self.Calc.Bind( wx.EVT_BUTTON, self.addFunc )

    def __del__( self ):
        pass
#===================================================
**q2.py**
#!/usr/bin/python
# -*- coding: utf-8 -*-
import wx
from q1 import MyFrame1
from q3 import MyFrame3
class MyFrame2(MyFrame1):
    def __init__(self, parent):
        MyFrame1.__init__ (self, parent)

    def addFunc( self, event ):
        val1 = float(self.val1.GetValue())
        val2 = float(self.val2.GetValue())
        add = val1 + val2
        self.result.SetLabel(str(add))
        self.result = MyFrame4(self)
        self.result.Show()
        self.Final_Result.SetLabel(str(add))

class MyFrame4(MyFrame3):
    """docstring for my_temp_Frame"""
    def __init__(self, parent):
        MyFrame3.__init__ (self, parent)

if __name__ == "__main__":
    app = wx.App(0)
    MyFrame2(None).Show()
    app.MainLoop()
#===================================================
**q3.py**
import wx
import wx.xrc
class MyFrame3 ( wx.Frame ):

    def __init__( self, parent ):
        wx.Frame.__init__ ( self, parent, id = wx.ID_ANY, title = wx.EmptyString, pos = wx.DefaultPosition, size = wx.Size( 500,100 ), style = wx.CAPTION|wx.CLOSE_BOX|wx.MINIMIZE_BOX|wx.SYSTEM_MENU|wx.TAB_TRAVERSAL )
        self.SetSizeHintsSz( wx.DefaultSize, wx.DefaultSize )
        Sizer1 = wx.BoxSizer( wx.VERTICAL )
        Sizer2 = wx.GridSizer( 0, 2, 0, 0 )
        self.Text = wx.TextCtrl( self, wx.ID_ANY, u"You result is:", wx.DefaultPosition, wx.DefaultSize, wx.TE_READONLY )
        self.Text.SetFont( wx.Font( 20, 70, 90, 90, False, wx.EmptyString ) )
        Sizer2.Add( self.Text, 1, wx.ALL|wx.EXPAND, 5 )
        self.Final_Result = wx.StaticText( self, wx.ID_ANY, u"Final_Result", wx.DefaultPosition, wx.DefaultSize, wx.ALIGN_CENTRE )
        self.Final_Result.Wrap( -1 )
        self.Final_Result.SetFont( wx.Font( 30, 70, 90, 90, False, wx.EmptyString ) )
        self.Final_Result.SetForegroundColour( wx.Colour( 255, 255, 255 ) )
        self.Final_Result.SetBackgroundColour( wx.Colour( 255, 0, 0 ) )
        Sizer2.Add( self.Final_Result, 1, wx.ALL|wx.EXPAND, 5 )
        Sizer1.Add( Sizer2, 1, wx.EXPAND, 5 )
        self.SetSizer( Sizer1 )
        self.Layout()
        self.Centre( wx.BOTH )

    def __del__( self ):
        pass
pass
#===================================================
Thanks in advance.
Answer: I have rarely seen a more convoluted way to simply have two frames in wxPython
and transferring data from one into the other. But maybe you have your good
reasons. If not, have a look at
[this](http://wiki.wxpython.org/ModelViewController). It opened the eyes at
least for me how to cleanly separate logic and GUI.
**Answer**: On calling:

self.result = MyFrame4(self)
# ...
self.Final_Result.SetLabel(str(add))

you should have gotten the error message:

AttributeError: 'MyFrame2' object has no attribute 'Final_Result'

which simply means that there is no `Final_Result` in `MyFrame2`. After some
looking, `Final_Result` can be found in `MyFrame3`, which is the base class for
`MyFrame4`. We also learn that `MyFrame4` is set as the object attribute
`self.result` in `MyFrame2`.
So simply change the offending line to:
self.result.Final_Result.SetLabel(str(add))
and you are done.
|
Searching items of large list in large python dictionary quickly
Question: I am currently working to make a dictionary with a tuple of names as keys and
a float as the value of the form {(nameA, nameB) : datavalue, (nameB, nameC) :
datavalue ,...}
The data values come from a matrix I have made into a pandas DataFrame with the
names as both the index and column labels. I have created an ordered list of
the keys for my final dictionary, called `keys`, with the function
`createDictionaryKeys()`. The issue I have is that not all the names from this
list appear in my data matrix. I want my final dictionary to include only the
names that do appear in the data matrix.
How can I do this search while avoiding the slow linear for loop? I have also
created a dictionary that has the name as key and a value of 1 if it should be
included and 0 otherwise. It has the form `{nameA : 1, nameB: 0, ... }` and is
called `allow_dict`. I was hoping to use this to do some sort of hash search.
def createDictionary( keynamefile, seperator, datamatrix, matrixsep):
    import pandas as pd
    keys = createDictionaryKeys(keynamefile, seperator)
    final_dict = {}

    data_df = pd.read_csv(open(datamatrix), sep = matrixsep)
    pd.set_option("display.max_rows", len(data_df))
    df_indices = list(data_df.index.values)
    df_cols = list(data_df.columns.values)[1:]
    for i in df_indices:
        data_df = data_df.rename(index = {i:df_cols[i]})
    data_df = data_df.drop("Unnamed: 0", 1)

    allow_dict = descriminatePromoters( HARDCODEDFILENAME, SEP, THRESHOLD )
    #print ( item for item in df_cols if allow_dict[item] == 0 ).next()

    present = [ x for x in keys if x[0] in df_cols and x[1] in df_cols]
    for i in present:
        final_dict[i] = final_df.loc[i[0],i[1]]
    return final_dict
Answer: Testing membership in Python sets is O(1), so simply:

colset = set(df_cols)
present = [ x for x in keys if x[0] in colset and x[1] in colset]

...should give you some speedup (build the set once up front; writing
`set(df_cols)` inside the comprehension would rebuild it on every membership
test). Since you're iterating through in O(n) anyway (and have to, to
construct your final_dict), something like:

colset = set(df_cols)
final_dict = {k: final_df.loc[k[0], k[1]]
              for k in keys
              if (k[0] in colset) and (k[1] in colset)}

would be nice, I would think.
|
Shared variable in Python Process subclass
Question: I was wondering if it would be possible to create some sort of static set in a
Python Process subclass to keep track of the types of processes that are
currently running asynchronously.
class showError(Process):
    # Define some form of shared set that is shared by all Processes
    displayed_errors = set()

    def __init__(self, file_name, error_type):
        super(showError, self).__init__()
        self.error_type = error_type

    def run(self):
        if error_type not in set:
            displayed_errors.add(error_type)
            message = 'Please try again. ' + str(self.error_type)
            winsound.MessageBeep(-1)
            result = win32api.MessageBox(0, message, 'Error', 0x00001000)
            if result == 0:
                displayed_errors.discard(error_type)
That way, when I create/start multiple showError processes with the same
error_type, subsequent error windows will not be created. So how can we define
this shared set?
Answer: You can use a
[`multiprocessing.Manager.dict`](https://docs.python.org/2/library/multiprocessing.html#multiprocessing.managers.SyncManager.dict)
(there's no `set` object available, but you can use a `dict` in the same way)
and share that between all your subprocesses.
import multiprocessing as mp

if __name__ == "__main__":
    m = mp.Manager()
    displayed_errors = m.dict()
    subp = showError("some filename", "some error type", displayed_errors)
Then change `showError.__init__` to accept the shared dict:
def __init__(self, file_name, error_type, displayed_errors):
    super(showError, self).__init__()
    self.error_type = error_type
    self.displayed_errors = displayed_errors
Then this:
displayed_errors.add(error_type)
Becomes:
self.displayed_errors[error_type] = 1
And this:
displayed_errors.discard(error_type)
Becomes:
try:
    del self.displayed_errors[error_type]
except KeyError:
    pass
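Putting the pieces together, a minimal self-contained sketch of the pattern
(the message-box and sound calls are replaced with a print so it runs anywhere;
note that the check-then-set on the shared dict is not atomic):

import multiprocessing as mp

class ShowError(mp.Process):
    def __init__(self, error_type, displayed_errors):
        super(ShowError, self).__init__()
        self.error_type = error_type
        self.displayed_errors = displayed_errors

    def run(self):
        if self.error_type not in self.displayed_errors:
            self.displayed_errors[self.error_type] = 1
            print 'Showing error window for:', self.error_type
            # ... display the message box here, then allow this type again:
            del self.displayed_errors[self.error_type]

if __name__ == "__main__":
    manager = mp.Manager()
    displayed = manager.dict()
    procs = [ShowError("TypeA", displayed) for _ in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()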
|
Getting data iterating over wtform fields
Question: I've got a form, and number of dynamically adding fields,
class EditBook(Form):
    title = TextField('title', validators = [Required()])
    authors = FieldList(TextField())
that's how I append them
$('form').append('<input type="text" class="span4" id="authors-' + FieldCount +'" name="authors-' + FieldCount +'" placeholder="author ' + FieldCount +'">');
I want to get data from these inputs. How can I iterate over them using
Python?
(Or how can I send the collected data to the server with jQuery? I'm new to JS
and jQuery.)
I guess my inputs aren't connected with the authors FieldList.
**UPDATE**
My inputs aren't connected with EditBook even though I append them to it.
form.data will solve the rest of the problem once I attach my inputs to the
form (right now I just get a KeyError when trying to access
form.data['authors-1']).
Now I am trying to add just a single authors field to copy later, but it
renders invisible for an unknown reason. In the blank space there should be an
input similar to "author-1":

{{form.authors(class="span4", placeholder="author 1")}}

What should I add to the code to render this field correctly?

Answer: The **WTForms** `process_formdata` method will pull these out of the form
submission and store them in the `data` attribute. Below is what your access
code will look like. Your `authors` list should be stored in an iterable that
can be accessed with the `authors` key.
from collections import namedtuple
from wtforms.validators import Required
from wtforms import Form
from wtforms import TextField
from wtforms import FieldList
from webob.multidict import MultiDict
class EditBook(Form):
    title = TextField('title', validators = [Required()])
    authors = FieldList(TextField())
Book = namedtuple('Book', ['title', 'authors'])
b1 = Book('Title1', ['author1', 'author2'])
# drop them in a dictionary
data_in={'title':b1.title, 'authors':b1.authors}
# Build form and print
form = EditBook(data=MultiDict(data_in))
# lets get some data
print form.data
print form.data['title']
print form.data['authors']
|
Kivy Canvas redrawing after touch event
Question:
I wanna make a small game, but I need some help...
I'm a pretty newbie in both Python and Kivy. I'm using Python 3.4 and Kivy
1.8.0.
The game will have some drawn elements which will be draggable and/or
disappearing:

- if you click on a point you can drag it
- if you click anywhere, one point will disappear

I've tried to make the disappearing part of it, but I got stuck. I made some
dummy code with points on the canvas, where you can see how I wanted to
approach the problem:

--> draw some points
--> remove / reposition one of the points
--> clear canvas
--> redraw

Somehow I cannot redraw it, but I managed to clear the canvas...
Could you help me?
Also, I would like to get help/ideas on how to make it draggable...
Thank you, here's my code:
from kivy.app import App
from kivy.graphics import Ellipse
from kivy.uix.boxlayout import BoxLayout
import random
class CustomLayout(BoxLayout):
    def __init__(self, **kwargs):
        super(CustomLayout, self).__init__(**kwargs)
        self.a = []
        self.points_pos()
        with self.canvas.after:
            self.draw_points()

    def points_pos(self):
        i=0
        x = random.sample(range(800), 10)
        y = random.sample(range(640), 10)
        for i in range(10):
            pos = [0,0]
            pos[0]=x[i]
            pos[1]=y[i]
            self.a.append(pos)
        print(self.a)

    def draw_points(self):
        i = 0
        for i in range(len(self.a)):
            self.circle = Ellipse(
                size = (25,25),
                pos = (self.a[i][0],self.a[i][1])
            )

    def random_remove(self):
        point = self.a[random.randint(0,len(self.a)-1)]
        self.a.remove(point)

    def update(self):
        self.parent.canvas.clear()
        with self.canvas:
            self.draw_points()

    def on_touch_down(self, touch):
        self.random_remove()
        self.update()

class MainApp(App):
    def build(self):
        root = CustomLayout()
        return root

if __name__ == '__main__':
    MainApp().run()
Answer:
def update(self):
    self.parent.canvas.clear()
    with self.canvas:
        self.draw_points()
You clear the _parent's_ canvas, which includes the canvas of your BoxLayout.
After that it doesn't matter how many things you draw on the BoxLayout, they
won't ever be displayed.
To fix this you probably want to simply do `self.canvas.clear()` instead, but
actually this is very inefficient and isn't a good way to work with kivy's
canvases (though it will work fine for small numbers of instructions). It is
much better to keep references to all your canvas instructions and only remove
the specific one(s) you no longer want, with `self.canvas.remove(...)`.
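A minimal sketch of that reference-keeping approach, adapted to the code above
(the `self.circles` list is my own addition, not part of the original code):

def draw_points(self):
    self.circles = []
    with self.canvas:
        for pos in self.a:
            # keep a reference to every instruction added to the canvas
            self.circles.append(Ellipse(size=(25, 25), pos=pos))

def random_remove(self):
    if self.circles:
        circle = self.circles.pop(random.randint(0, len(self.circles) - 1))
        # drop just that one instruction instead of clearing the whole canvas
        self.canvas.remove(circle)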
|
How to stop Python program compiled in py2exe from displaying ImportError: No module named 'ctypes'
Question: I was wondering if this might be a compilation error or if there is something
I can do to stop it from displaying. I have made an argparse program for cmd.
I compiled it with py2exe, and when I run it, it executes the program properly
but always gives this error before running the code:
Traceback (most recent call last):
File "boot_common.py", line 46, in <module>
ImportError: No module named 'ctypes'
If it is something in my code, here is my script:
import argparse
import zipfile
import os
from contextlib import closing
def parse_args():
    parser = argparse.ArgumentParser('ziputil '+\
        '-m <mode> -f <file> -p <output>')
    parser.add_argument('-f', action="store", dest='files', type=str,
        help='-f <file> : Specify the files to be zipped, or the .zip to be unzipped.')
    parser.add_argument('-m', action="store", dest='mode', type=str,
        help='-m <mode> : Zip to zip files, UnZip, to unzip files, or ZipDir to zip entire directories.')
    parser.add_argument('-p', action="store", dest='path', type=str, nargs='?', const=os.getcwd(),
        help='-p <path> : specify the path to unpack/pack to.')
    return vars(parser.parse_args())

def unzipPackage(path, files):
    with zipfile.ZipFile(files, "r") as z:
        z.extractall(path)

def zipPackage(path, files):
    files = files.split(', ')
    zf = zipfile.ZipFile(path, mode='w')
    try:
        for file in files:
            zf.write(file)
    finally:
        zf.close()

def zipdir(path, zip):
    for root, dirs, files in os.walk(path):
        for file in files:
            zip.write(os.path.join(root, file))

dict = parse_args()
files = dict['files']
path = dict['path']
mode = dict['mode']
if mode == 'Zip':
    zipPackage(path, files)
elif mode == 'UnZip':
    unzipPackage(path, files)
elif mode == 'ZipDir':
    zipf = zipfile.ZipFile(path, 'w')
    zipdir(files, zipf)
    zipf.close()
Answer: This is caused by a bug in py2exe; it'll be fixed in the next release. [More
Info](http://permalink.gmane.org/gmane.comp.python.py2exe/4706)
The solution is to add `ctypes` to `bootstrap_modules` in
`C:\Python34\Lib\site-packages\py2exe\runtime.py` file (line 117).
...
# modules which are always needed
bootstrap_modules = {
    # Needed for Python itself:
    "ctypes",
    "codecs",
    "io",
    "encodings.*",
}
...
|