BigQuery API: Able to insert CSV data, but the data appears in BigQuery unpredictably
Question: I am using a Python script on `Google App Engine` to upload `CSV data` into
`BigQuery`, coded in the `PyDev` perspective of `Eclipse on Windows 7`.
The `insert` call succeeds, but inside BigQuery the data sometimes appears
immediately and sometimes takes hours to show up.
j = {
    'kind': 'bigquery#insertRequest',
    'jobReference': {'projectId': '#######'},
    'configuration': {
        'load': {
            'sourceFormat': 'CSV',
            'destinationTable': {
                'projectId': '############',
                'tableId': '###########',
                'datasetId': '##########'
            },
            'allowJaggedRows': True,
            'sourceUris': ['gs://bucket_name/file_name'],
            'skipLeadingRows': 1,
            'schema': {
                'fields': [
                    {'type': 'Data_Type', 'name': 'Col1_name'},
                    {'type': 'Data_Type', 'name': 'Col2_name'}
                ]
            },
        },
    },
}
response = service.jobs().insert(projectId="##########", body=j).execute()
Answer: A BigQuery load job is asynchronous, which is why insert() returns immediately.
The response from the insert() call includes a job id. You can then use
that job id to look up the state of your job. Once that job completes
successfully, your data should be immediately available.
If an import takes hours, that is unexpected (unless you are importing a
massive amount of data); if that happens, please provide a job id and we
(BigQuery engineers) can look up what happened in the logs.
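A minimal polling sketch of the "look up the state of your job" step (the `wait_for_job` helper and the poll interval are illustrative, not part of the BigQuery client library):

```python
import time

def wait_for_job(service, project_id, job_id, poll_seconds=5):
    # Poll jobs().get() until the job reports state DONE, then return it;
    # raise if the finished job carries an errorResult.
    while True:
        job = service.jobs().get(projectId=project_id, jobId=job_id).execute()
        status = job['status']
        if status['state'] == 'DONE':
            if 'errorResult' in status:
                raise RuntimeError(status['errorResult'])
            return job
        time.sleep(poll_seconds)
```

The insert response contains the job reference, so `wait_for_job(service, project_id, response['jobReference']['jobId'])` blocks until the load finishes and the data is queryable.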
|
Python deepcopy, dictionary value in object changes
Question: I have a python code snippet here:
import copy

class Foo(object):
    bar = dict()

    def __init__(self, bar):
        self.bar = bar

    def loop(self):
        backup = copy.deepcopy(self)
        backup.bar[1] = "1"
        for i in range(0, 10):
            print "BACKUP BAR IS: ", backup.bar
            self.bar[1] = "42"
            self.bar = backup.bar

a = Foo({1:"0"})
a.loop()
This prints out `BACKUP BAR IS: {1: '1'}` the first two times, but then it
prints `BACKUP BAR IS: {1: '42'}` the next eight times. Can anyone tell me
how and why this is happening? Doesn't `deepcopy` completely create a new
instance of `self`? How does the value of `backup.bar` change when we only
change the value of "self", which is "a" in this case?
Edit: some have noted that the line `self.bar = backup.bar` causes the two
dicts to point to each other. But, if we do:
a = {1:1}
b = {1:2}
c = {1:3}
a = b
a = c
If a and b indeed point to the same dictionary, then all three values of a, b,
c should be equal to c. But instead, `a = c = {1:3}, b = {1:2}`. So changing
the reference of the left hand side doesn't change the reference of the right
hand side. Similarly, how would `backup.bar` be changed if we're only changing
`self.bar`?
Answer: Setting `self.bar = backup.bar` does not mutate the dictionary that `self.bar`
is pointing to, nor does it create a copy of `backup.bar` to assign to
`self.bar`. Rather, it changes the pointer of `self.bar` to refer to the
dictionary at `backup.bar`, so that they now refer to the same object. Take a
look at this demonstration:
In [48]: a = Foo({1: "0"})
In [49]: backup = copy.deepcopy(a)
In [50]: a.bar = backup.bar
In [51]: id(backup.bar)
Out[51]: 140428501511816
In [52]: id(a.bar) # Same object! Alternatively, `a.bar is backup.bar` -> True
Out[52]: 140428501511816
In [53]: backup.bar[1] = "42"
In [54]: a.bar
Out[54]: {1: '42'}
The discrepancy with iteration is explained by @alfasin.
With regard to your question edit, you never change the dictionary that `b`
points to. You're simply changing where `a` is pointing to twice. `a = c` does
not mean _change b = c_. It simply means _a was pointing to b, now it's
pointing to c_.
**Edit:** drawings, because why not.
a ---> {1: 1}
b ---> {1: 2}
c ---> {1: 3}
After `a = b`:
a ----------  # So long, {1: 1}
           |
           v
b ---> {1: 2}
c ---> {1: 3}
After `a = c`, _i.e., a points to what c points to_ :
a ---------------
                |
                |
b ---> {1: 2}   |
c ---> {1: 3} <-|
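The same rebinding-versus-mutation distinction as a runnable sketch (Python 3 syntax):

```python
import copy

class Foo(object):
    def __init__(self, bar):
        self.bar = bar

a = Foo({1: "0"})
backup = copy.deepcopy(a)
assert a.bar is not backup.bar   # deepcopy really made a separate dict

a.bar = backup.bar               # rebinding: both attributes now name ONE dict
assert a.bar is backup.bar

a.bar[1] = "42"                  # mutation through either name is visible
assert backup.bar == {1: "42"}   # through the other name too
```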
|
Flask (using watchdog) and uWSGI - no events from file system
Question: I am using [**watchdog**](https://pypi.python.org/pypi/watchdog) to reload
Python modules while my **Flask server** runs. Everything works when I run my **debug
Flask server**. But when I start the Flask server under **uWSGI**, no notifications
from my Linux file system reach **watchdog**, so the modules are not
reloaded. _MasterService_ is initialized when the first request is accepted.
* * *
Note: I have tried waitress as well, and there everything works fine, but I
would prefer to use uWSGI. Thanks for any advice.
* * *
'''
Created on 10 Oct 2014

@author: ttrval
'''
import os
import datetime
import pkgutil
import logging
from threading import BoundedSemaphore

from watchdog.observers import Observer
from watchdog.events import (FileSystemEventHandler, EVENT_TYPE_MOVED,
    EVENT_TYPE_MODIFIED, EVENT_TYPE_CREATED, EVENT_TYPE_DELETED)


class Context(object):
    '''Holds parameters passed into math services by ServiceManager
    '''
    logger = None
    serviceManager = None


class Service(object):
    '''Container for python module imported by math_server on run.
    '''
    __slots__ = 'module', 'modifyDate', "name"

    def __init__(self, name, module, modifyDate):
        self.module = module
        self.modifyDate = modifyDate
        self.name = name

    def update(self, otherService):
        self.module = otherService.module
        self.modifyDate = otherService.modifyDate

    def __repr__(self):
        return "<{typ}|{name}:{module}({date})>".format(
            typ=type(self), module=self.module, date=self.modifyDate, name=self.name)

    def __str__(self):
        return "Service {name}:{module} was last updated {date}".format(
            module=self.module, date=self.modifyDate, name=self.name)


class ServicesFilesEventHandler(FileSystemEventHandler):
    '''Handles changes in the file system of services loaded by math_server
    '''

    def __init__(self, master, logger=logging.getLogger('werkzeug'), supported_types=(".py")):
        self.logger = logger
        self.supported_types = supported_types
        self.master = master

    def dispatch(self, event):
        '''Dispatches events to the appropriate methods.

        :param event:
            The event object representing the file system event.
        :type event:
            :class:`FileSystemEvent`
        '''
        print "event caught {}".format(str(event))
        if event.is_directory:
            return
        path = event.src_path
        if EVENT_TYPE_MOVED is event.event_type:
            path = event.dest_path
        if path[-3:] in self.supported_types:
            _method_map = {
                EVENT_TYPE_MODIFIED: self.on_modified,
                EVENT_TYPE_MOVED: self.on_moved,
                EVENT_TYPE_CREATED: self.on_created,
                EVENT_TYPE_DELETED: self.on_deleted,
            }
            event_type = event.event_type
            _method_map[event_type](event)

    def on_moved(self, event):
        """Called when a file or a directory is moved or renamed.

        :param event:
            Event representing file/directory movement.
        :type event:
            :class:`DirMovedEvent` or :class:`FileMovedEvent`
        """
        path = event.dest_path
        self.logger.info("File moved: {}".format(path))
        self.master.sync_modify_service(path)
        self.master.sync_modify_service(event.src_path, unload=True)

    def on_created(self, event):
        """Called when a file or directory is created.

        :param event:
            Event representing file/directory creation.
        :type event:
            :class:`DirCreatedEvent` or :class:`FileCreatedEvent`
        """
        path = event.src_path
        logging.getLogger('werkzeug').info("File created: {}".format(path))
        self.master.sync_modify_service(path)

    def on_deleted(self, event):
        """Called when a file or directory is deleted.

        :param event:
            Event representing file/directory deletion.
        :type event:
            :class:`DirDeletedEvent` or :class:`FileDeletedEvent`
        """
        path = event.src_path
        self.logger.info("File deleted: {}".format(path))
        self.master.sync_modify_service(path, unload=True)

    def on_modified(self, event):
        """Called when a file or directory is modified.

        :param event:
            Event representing file/directory modification.
        :type event:
            :class:`DirModifiedEvent` or :class:`FileModifiedEvent`
        """
        path = event.src_path
        self.logger.info("File modified: {}".format(path))
        self.master.semaphore.acquire()
        try:
            self.master.unloadService(path)
            self.master.loadService(path)
        finally:
            self.master.semaphore.release()


class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):  # @NoSelf
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]
class ServicesMaster(object):
    '''Singleton class, provides access to Services. It also handles services loading and unloading.
    @uses :class: ServicesFilesEventHandler
    @uses :package: watchdog'''
    # __metaclass__ = Singleton
    services = None
    dirname = None
    observer = None
    logger = None
    semaphore = BoundedSemaphore(1)

    def __init__(self, logger=logging.getLogger('werkzeug'), dirname="./services"):
        Context.logger = logger
        Context.serviceManager = self
        self.__class__.dirname = os.path.abspath(dirname)
        self.__class__.logger = logger
        self._closeObserver()
        self.loadServices()
        self._initObserver()

    def __del__(self):
        self.dirname = None
        self._closeObserver()
        del self.services
        del self.observer

    @classmethod
    def _initObserver(cls):
        '''Creates observer of module folder (not recursive)
        '''
        event_handler = ServicesFilesEventHandler(cls, cls.logger)
        print "event_handler init {}".format(str(event_handler))
        if cls.observer is None:
            cls.observer = Observer()
            cls.observer.schedule(event_handler, cls.dirname, recursive=False)
            cls.observer.start()

    @classmethod
    def _closeObserver(cls):
        '''Deactivates observer of module folder (not recursive)'''
        if cls.observer is not None:
            cls.observer.stop()
            cls.observer.join()
            cls.observer = None

    @classmethod
    def sync_modify_service(cls, path, unload=False):
        '''Synchronized modification of a service.
        if unload == True: unloads service
        else: loads service
        '''
        cls.semaphore.acquire()
        try:
            if unload:
                cls.unloadService(path)
            else:
                cls.loadService(path)
        finally:
            cls.semaphore.release()

    @classmethod
    def loadServices(cls):
        '''Loads services from the module directory. Consider using
        'sync_modify_service' so that only one method (loadServices xor
        unloadServices) can be executed at one time.
        '''
        if cls.services is None:
            cls.services = {}
        # remove current directory and translate the file system address into python dot convention
        importer = pkgutil.ImpImporter(path=cls.dirname)
        cls.semaphore.acquire()
        for name, ispkg in importer.iter_modules():
            if not ispkg:
                loader = importer.find_module(name)
                if '.py' == loader.etc[0]:
                    new_service = Service(
                        name=name,
                        module=loader.load_module(loader.fullname),
                        modifyDate=cls.modification_date(loader.filename)
                    )
                    cls.services[name] = new_service
                    new_service.module.activate(Context)
        cls.semaphore.release()
        cls.logger.info("Loaded Services: {}".format(cls.services.keys()))
        print "check after services loaded"

    @classmethod
    def loadService(cls, path):
        fullpath = os.path.abspath(path)
        directory = os.path.dirname(fullpath)
        if directory != cls.dirname:
            raise Exception("Directory '{}' of new service is not module directory('{}')".
                            format(directory, cls.dirname))
        new_service = Service(
            name=os.path.basename(fullpath).split('.')[0],
            module=cls._loadModule(fullpath),
            modifyDate=cls.modification_date(fullpath)
        )
        if new_service.name in cls.services:  # older version of new service is loaded already
            # deactivate old module instance
            cls.services[new_service.name].module.deactivate(Context)
            # swap in new module instance
            cls.services[new_service.name].update(new_service)
        else:
            cls.services[new_service.name] = new_service
        # activate new service
        cls.services[new_service.name].module.activate(Context)
        cls.logger.info("Loaded Service: {}\nLoaded Services: {}"
                        .format(new_service.name, cls.services.keys()))

    @classmethod
    def unloadService(cls, path):
        fullpath = os.path.abspath(path)
        directory = os.path.dirname(fullpath)
        # check if file is (was) in directory of services
        if directory != cls.dirname:
            return
        # file is (was) in observed directory of services
        name = os.path.basename(fullpath).split('.')[0]
        if name in cls.services:
            # first deactivate old module
            cls.services[name].module.deactivate(Context)
            # remove old module
            del cls.services[name]
            # remove old compiled module
            try:
                os.remove(fullpath.split('.')[0] + ".pyc")
            except Exception:
                # file does not exist any more
                cls.logger.info("Found that file {} was removed already.".format(fullpath.split('.')[0] + ".pyc"))
        else:
            raise KeyError("Service {} not found in loadedServices", name)
        cls.logger.info("Unloaded Service: {}\nLoaded Services: {}"
                        .format(name, cls.services.keys()))
        return

    @classmethod
    def _loadModule(cls, path):
        '''Loads a single python module from a file path.
        @param path: path to the module file
        @type path: String, e.g. './services/game_math.py'
        '''
        fullpath = os.path.abspath(path)
        name = os.path.basename(fullpath).split('.')[0]  # file name without extension
        folder = path[:-(len(os.path.basename(path)))]  # path to folder
        importer = pkgutil.ImpImporter(path=folder)
        loader = importer.find_module(name)
        return loader.load_module(loader.fullname)

    @staticmethod
    def runService(name, args):
        '''Returns result from math service for given arguments.
        @raise exception: Exception("Service '{}' not found on MathServer".format(name))
        '''
        if name in ServicesMaster.services:
            return ServicesMaster.services[name].module.run(args)
        else:
            raise Exception("Service '{}' not found on MathServer".format(name))

    @staticmethod
    def modification_date(filename):
        '''Returns the modification date of a file as a datetime.'''
        t = os.path.getmtime(filename)
        return datetime.datetime.fromtimestamp(t)
Answer: The solution is to **enable threads** in the uWSGI configuration and set the
number of threads to **2 or more**.
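For example, in an ini-style uWSGI configuration (the option names are as documented by uWSGI; the module and socket values are illustrative):

```ini
[uwsgi]
module = myapp:app
http-socket = :9090
enable-threads = true
threads = 2
```

The same options can be passed on the command line as `--enable-threads --threads 2`. Without `enable-threads`, uWSGI does not start the Python GIL machinery for background threads, so watchdog's observer thread never runs.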
|
cast a structure in python
Question: I am using ctypes to read some data from an external database.
The data is written as a struct. The problem is that the received data can
have different layouts. For better understanding, I have created two
structures:
class BEAM(Structure):
    _fields_ = [
        ('NR', c_ulong),
        ("NODE1", c_ulong),
        ("NODE2", c_ulong),
        ("NP", c_ulong),
        ("DL", c_float),
        ("foo", c_ulong),
        ("foobar", c_ulong),
        ("bar", c_ulong),
        ("barfoo", c_ulong)
    ]

class DUMMY(Structure):
    _fields_ = [
        ('ID', c_ulong),
        ("NODE1", c_ulong),
        ("NODE2", c_ulong),
        ("NP", c_ulong),
        ("DL", c_ulong),
        ("foo", c_ulong),
        ("foobar", c_ulong),
        ("bar", c_ulong),
        ("barfoo", c_ulong)
    ]
The difference between these structs is the type of "DL": in DUMMY it
is c_ulong, in BEAM it is c_float.
After reading the database I get DL = 1056964624, but as a float it should be
0.5.
My question is how I can cast the DUMMY into a BEAM.
I have tried `BEAMRecord = cast(Record, POINTER(BEAMRecord))`, but that raises
TypeError: must be a ctypes type
Here is my code:
'''
Structure for DataLength
'''
class Len(Structure):
    _fields_ = [
        ('buffer', c_int)
    ]

SLNRecord = element.SLN()
BEAMRecord = element.BEAM()
Record = element.DUMMY()
RecLen = Len()

sofistik = cdll.LoadLibrary("cdb_w30_x64.dll")
py_sof_cdb_init = sofistik.sof_cdb_init
py_sof_cdb_close = sofistik.sof_cdb_close
py_sof_cdb_get = sofistik.sof_cdb_get
py_sof_cdb_get.restype = c_int

Index = py_sof_cdb_init("system.cdb", 99)
pos = c_int(0)
while True:
    RecLen.buffer = sizeof(Record)
    ie = py_sof_cdb_get(Index, 100, 0, byref(Record), byref(RecLen), pos)
    pos.value += 1
    if ie > 1:
        break
    if Record.ID > 0:
        BEAMRecord = cast(Record, POINTER(BEAMRecord))
        print BEAMRecord
py_sof_cdb_close(0)
exit()
Thanks for helping.
* * *
**Solution:**
By reading [this thread](http://stackoverflow.com/questions/1825715/how-to-
pack-and-unpack-using-ctypes-structure-str?rq=1) I modified @Mr Temp's approach.
I created a `BEAMRecordPointer = POINTER(element.BEAM)` and rewrote `BEAMRecord =
cast(Record, POINTER(BEAMRecord))` to `BAR = cast(byref(Record),
BEAMRecordPointer).contents`, so the solution looks like this:
    if Record.ID > 0:
        BAR = cast(byref(Record), BEAMRecordPointer).contents
        print BAR
Am I doing it wrong?
* * *
**Update 1**
@eryksun has a really good shorthand for the cast() function. Thank you.
Answer: You could just load the structure into a `Union`, then access it how you need
to:
from ctypes import *

class BEAM(Structure):
    _fields_ = [('NR', c_ulong),
                ("NODE1", c_ulong),
                ("NODE2", c_ulong),
                ("NP", c_ulong),
                ("DL", c_float),
                ("foo", c_ulong),
                ("foobar", c_ulong),
                ("bar", c_ulong),
                ("barfoo", c_ulong)]

class DUMMY(Structure):
    _fields_ = [('ID', c_ulong),
                ("NODE1", c_ulong),
                ("NODE2", c_ulong),
                ("NP", c_ulong),
                ("DL", c_ulong),
                ("foo", c_ulong),
                ("foobar", c_ulong),
                ("bar", c_ulong),
                ("barfoo", c_ulong)]

class Both(Union):
    _fields_ = [('Beam', BEAM), ('Dummy', DUMMY)]

x = Both()
x.Dummy.DL = 1056964624
print(x.Beam.DL)
Output:
0.5000009536743164
Or more simply:
from ctypes import *

class DL(Union):
    _fields_ = [('DUMMY', c_ulong), ('BEAM', c_float)]

class Hybrid(Structure):
    _fields_ = [('NR', c_ulong),
                ("NODE1", c_ulong),
                ("NODE2", c_ulong),
                ("NP", c_ulong),
                ("DL", DL),
                ("foo", c_ulong),
                ("foobar", c_ulong),
                ("bar", c_ulong),
                ("barfoo", c_ulong)]

x = Hybrid()
x.DL.DUMMY = 1056964624
print(x.DL.BEAM)
(same output)
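The `cast` route from the question also works once the argument is a pointer. A minimal sketch (field lists trimmed to two members for brevity; assumes a little-endian machine):

```python
from ctypes import Structure, POINTER, byref, cast, c_ulong, c_float

class BEAM(Structure):
    _fields_ = [('NR', c_ulong), ('DL', c_float)]

class DUMMY(Structure):
    _fields_ = [('ID', c_ulong), ('DL', c_ulong)]

record = DUMMY(ID=1, DL=1056964624)
# POINTER() must receive the ctypes *class* (BEAM), not an instance;
# calling POINTER() on an instance is what raised
# "TypeError: must be a ctypes type" in the question.
beam = cast(byref(record), POINTER(BEAM)).contents
value = beam.DL   # ~0.5, same bytes reinterpreted as a float
```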
|
Matching filename and saving it to a variable in python
Question: Does anyone know how to match a filename and then save it to a variable?
For instance, I've got multiple files being saved in one folder. All
of them start with the same name, "AmountFile", but then they differ: the
rest of the filename is a date.
So all of them look like AmountFile at the beginning, but then become
AmountFile20141020 and so on.
I want to match the filename and save it in a variable so I can use it in an
os.rename call.
File1 = AmountFile123
File2 = AmountFile321
File3 = AmountFile456
File4 = AmountFile789
I want to match them so I can store each name in a variable.
def main():
    d = 'D:\Applications\IMP\Something'
    for filename in os.listdir(d):
        title = 'Import_ABC'
        oldFile = 'AmountFile'
        if filename.endswith(".csv"):
            os.rename(oldFile, title + ".csv")

main()
WindowsError: [Error 2] The system cannot find the file specified
My system will, once the files in the folder are renamed, automatically move
them to another folder from where they will be imported into our database.
The problem is that, as in the example, the files start the same but end
differently, which makes it hard to run the function. For the oldFile variable
I need to match an actual file name, which can be saved instead of the literal
'AmountFile'.
Answer:
import glob
filenames = glob.glob("AmountFile*")
`filenames` will then be a list of all files in the current directory whose
names start with `AmountFile`.
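A sketch of how that glob result can feed `os.rename` (the prefix and extension come from the question; wrapping it in a helper and handling the directory are my additions):

```python
import glob
import os

def matching_files(directory, prefix="AmountFile", ext=".csv"):
    # Full paths of files in `directory` whose names start with `prefix`
    # and end with `ext`, in sorted order.
    pattern = os.path.join(directory, prefix + "*" + ext)
    return sorted(glob.glob(pattern))
```

Each returned path can then be passed to `os.rename`; note that renaming several matches to the single name `Import_ABC.csv` would collide, so the new names need to differ (e.g. keep the date suffix).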
|
Why is my program returning the same permutations multiple times?
Question: I have a simple piece of code which I am using to try to get all combinations
of the 4 nucleotide bases, but only as sets of 3, because 3 nucleotides make up
a codon. I basically need to generate all possible permutations that can be
made from the 4 bases a, c, t and g, in chunks of three. At first the program
seemed to work, but when I looked at the result I realized it was repeating the
permutations 5 times.
I only need each permutation once and I am not sure how to change my code to
get this result. I am also very new to Python, so I would really appreciate
plain language and minimal jargon in any answers, thank you!
This is the code:
import itertools

bases = ['a','c','t','g']
for L in range(0, len(bases)+1):
    for subset in itertools.permutations(bases, 3):
        print(subset)
And the result I get looks right, but I just don't want it repeated 5 times:
('a', 'c', 't')
('a', 'c', 'g')
('a', 't', 'c').....
Answer: You're telling it to do everything five times with that first `for L in
range(0, len(bases)+1):` line, which iterates over L = 0, 1, 2, 3, 4.
Without that line, it works fine:
import itertools

bases = ['a','c','t','g']
for subset in itertools.permutations(bases, 3):
    print(subset)
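One caveat worth noting (an addition, not part of the original answer): `permutations` never repeats a base within a triple, but real codons can repeat bases ('aaa', 'ggc', ...). If those are wanted too, `itertools.product` generates all 64:

```python
import itertools

bases = ['a', 'c', 't', 'g']

# Ordered triples of *distinct* bases: 4 * 3 * 2 = 24 results
perms = list(itertools.permutations(bases, 3))
assert len(perms) == 24

# All codons, repeats allowed: 4 ** 3 = 64 results
codons = list(itertools.product(bases, repeat=3))
assert len(codons) == 64
assert ('a', 'a', 'a') in codons and ('a', 'a', 'a') not in perms
```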
|
exceptions/traceback not showing after importing qgis-utils
Question: Following on from a
[question](http://stackoverflow.com/questions/26239144/python-qgis-version-
information) I asked about getting the version information from `python-qgis`,
with a brilliant solution provided by @falsetru, I am running into a problem
whereby importing `qgis.utils` seems to hide all exceptions. Running the
following code in the interpreter, I get no traceback or anything useful
after a raised exception; see below.
>>> import qgis.utils
>>> qgis.utils.QGis.QGIS_VERSION
'2.4.0-Chugiak'
>>> raise Exception('boof!')
>>>
Could someone tell me how I can turn tracebacks back on after importing
`qgis.utils`, or another way of getting the version information from
`python-qgis` without needing to import `utils`?
Many thanks!
Answer: I have found a solution, which allows me to avoid using the `utils` module to
get the version information, and uses the `core` module instead.
>>> import qgis.core
>>> qgis.core.Qgis.QGIS_VERSION
'2.4.0-Chugiak'
>>> raise Exception('boof!')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Exception: boof!
This doesn't provide a full answer to my question, but does provide a work-
around to get the version information for `python-qgis`.
|
Yosemite and python matplotlib issue
Question: Can someone help me sort out the issue causing the error below? My Python
code was working well until I upgraded to Yosemite. Here is the error:
Traceback (most recent call last):
  File "/Users/will/Downloads/legend_demo4.py", line 1, in <module>
    import matplotlib.pyplot as plt
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/pyplot.py", line 26, in <module>
    from matplotlib.figure import Figure, figaspect
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/figure.py", line 32, in <module>
    from matplotlib.image import FigureImage
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/image.py", line 22, in <module>
    import matplotlib._png as _png
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/_png.so, 2): Library not loaded: /usr/X11/lib/libpng12.0.dylib
  Referenced from: /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/_png.so
  Reason: image not found
Answer: I had the same problem. I fixed it by uninstalling matplotlib (via
`easy_install -m matplotlib`), reinstalling it from the
[sources](https://github.com/matplotlib/matplotlib) (via `python setup.py
build; python setup.py install`), and afterwards installing the
[wheel](https://pypi.python.org/pypi?name=matplotlib&version=1.4.1&:action=display)
package (via `pip install packagename.whl`).
|
Speeding up the code using numpy
Question: I'm new to Python, and I have this code for calculating the potential inside a
1x1 box using a Fourier series, but part of it is running far too slowly (marked
in the code below).
I suspect numpy could help here, but I'm not that familiar with it, so any help
would be appreciated.
import sys

import numpy as np                   # missing in the original: the code uses np below
import matplotlib.pyplot as plt
import pylab
from matplotlib import rc
from scipy.integrate import quad     # missing in the original: the code calls quad below

rc('text', usetex=False)
rc('font', family='serif')

# One of the boundary conditions for the potential.
def func1(x, n):
    V_c = 1
    V_0 = V_c * np.sin(n*np.pi*x)
    return V_0*np.sin(n*np.pi*x)

# To calculate the potential inside a box:
def v(x, y):
    n = 1
    sum = 0
    nmax = 20
    while n < nmax:
        [C_n, err] = quad(func1, 0, 1, args=(n,))
        sum = sum + 2*(C_n/np.sinh(np.pi*n)*np.sin(n*np.pi*x)*np.sinh(n*np.pi*y))
        n = n + 1
    return sum
def main(argv):
    x_axis = np.linspace(0,1,100)
    y_axis = np.linspace(0,1,100)
    V_0 = np.zeros(100)
    V_1 = np.zeros(100)
    n = 4

    # Plotter for V0 = v_c * sin() x
    # (V_0_1 and V_0_2 are boundary-condition helpers not shown in the question)
    for i in range(100):
        V_0[i] = V_0_1(i/100, n)
    plt.plot(x_axis, V_0)
    plt.xlabel('x/L')
    plt.ylabel('V_0')
    plt.title('V_0(x) = sin(m*pi*x/L), n = 4')
    plt.show()

    # Plot for V_0 = V_c(1-(x-1/2)^4)
    for i in range(100):
        V_1[i] = V_0_2(i/100)
    plt.figure()
    plt.plot(x_axis, V_1)
    plt.xlabel('x/L')
    plt.ylabel('V_0')
    plt.title('V_0(x) = 1- (x/L - 1/2)^4)')
    #plt.legend()
    plt.show()

    # Plot V(x/L,y/L) on the boundary:
    V_0_Y = np.zeros(100)
    V_1_Y = np.zeros(100)
    V_X_0 = np.zeros(100)
    V_X_1 = np.zeros(100)
    for i in range(100):
        V_0_Y[i] = v(0, i/100)
        V_1_Y[i] = v(1, i/100)
        V_X_0[i] = v(i/100, 0)
        V_X_1[i] = v(i/100, 1)

    # V(x/L = 0, y/L):
    plt.figure()
    plt.plot(x_axis, V_0_Y)
    plt.title('V(x/L = 0, y/L)')
    plt.show()

    # V(x/L = 1, y/L):
    plt.figure()
    plt.plot(x_axis, V_1_Y)
    plt.title('V(x/L = 1, y/L)')
    plt.show()

    # V(x/L, y/L = 0):
    plt.figure()
    plt.plot(x_axis, V_X_0)
    plt.title('V(x/L, y/L = 0)')
    plt.show()

    # V(x/L, y/L = 1):
    plt.figure()
    plt.plot(x_axis, V_X_1)
    plt.title('V(x/L, y/L = 1)')
    plt.show()

    # Plot V(x,y)
    #######
    # This is where the code is way too slow; it takes about 10 minutes
    # when nmax in v(x,y) is 20.
    #######
    V = np.zeros(10000).reshape((100,100))
    for i in range(100):
        for j in range(100):
            V[i,j] = v(j/100, i/100)
    plt.figure()
    plt.contour(x_axis, y_axis, V, 50)
    plt.savefig('V_1')
    plt.show()

if __name__ == "__main__":
    main(sys.argv[1:])
Answer: You can find how to use FFT/DFT in this document :
[Discretized continuous Fourier transform with
numpy](http://stackoverflow.com/questions/24077913/discretized-continuous-
fourier-transform-with-numpy)
Also, regarding your V matrix, there are many ways to improve the execution
speed. One is to make sure you use Python 3, or `xrange()` instead of
`range()` if you are still on Python 2.*. I usually put these lines in my
Python code, to let it run identically under Python 3.* and 2.*:
    # Don't want to generate huge lists in memory... use standard range for Python 3.*
    range = xrange if isinstance(range(2), list) else range
Then, instead of re-computing `j/100` and `i/100`, you can precompute these
values and put them in an array, knowing that a division is much more costly
than a multiplication. Something like:
    ratios = np.arange(100) / 100.0  # float divisor, so Python 2 doesn't truncate
    V = np.zeros(10000).reshape((100,100))
    j = 0
    while j < 100:
        i = 0
        while i < 100:
            V[i,j] = v(ratios[j], ratios[i])
            i += 1
        j += 1
Well, anyway, this is rather cosmetic and will not save your life; you
still need to call the function `v()`.
Then, you can use weave:
<http://docs.scipy.org/doc/scipy-0.14.0/reference/tutorial/weave.html>
Or write all your pure computation/loop code in C, compile it, and generate a
module which you can call from Python.
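The bigger win, though, is vectorizing the series itself with numpy so the per-point double loop disappears. A sketch (the function name is mine; it also uses the fact that the question's coefficient integral, the integral of sin(n*pi*x)^2 from 0 to 1, is exactly 1/2 for integer n, so quad() can be dropped):

```python
import numpy as np

def potential_grid(nx=100, ny=100, nmax=20):
    """Vectorized form of the question's v(x, y) double loop.

    C_n = integral_0^1 sin(n*pi*x)^2 dx = 1/2 exactly for integer n >= 1,
    so the quad() call in the original func1/v pair is unnecessary.
    """
    x = np.linspace(0, 1, nx)
    y = np.linspace(0, 1, ny)
    V = np.zeros((ny, nx))
    for n in range(1, nmax):
        C_n = 0.5
        # outer(): sinh(n*pi*y) varies down the rows, sin(n*pi*x) across columns,
        # so each term is added to the whole grid at once.
        V += 2 * C_n / np.sinh(np.pi * n) * np.outer(np.sinh(n * np.pi * y),
                                                     np.sin(n * np.pi * x))
    return V
```

This evaluates all 100x100 points in a handful of array operations per term instead of 10,000 calls to `v()`, each of which redid the integral 19 times.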
|
Decorator with configurable attributes got an unexpected keyword argument
Question: I am attempting to combine two decorator tutorials into a single decorator
that will log function arguments at a specified log level.
The first tutorial is from [here](http://juandebravo.com/2012/07/24/why-
python-rocks_and_two/) and looks like this (and works as expected):
import logging

logging.basicConfig(level=logging.DEBUG)

def dump_args(func):
    # get function argument names
    argnames = func.func_code.co_varnames[:func.func_code.co_argcount]
    # get function name
    fname = func.func_name
    logger = logging.getLogger(fname)

    def echo_func(*args, **kwargs):
        """
        Log arguments, including name, type and value
        """
        def format_arg(arg):
            return '%s=%s<%s>' % (arg[0], arg[1].__class__.__name__, arg[1])
        logger.debug(" args => {0}".format(', '.join(
            format_arg(entry) for entry in zip(argnames, args) + kwargs.items())))
        return func(*args, **kwargs)
    return echo_func
The second tutorial is from
[here](http://chimera.labs.oreilly.com/books/1230000000393/ch09.html#_solution_148).
My combined code looks like this and produces an error.
#decorators.py
from functools import wraps
import logging

logging.basicConfig(level=logging.DEBUG)

def logged(level=logging.INFO, name=None, message=None):
    '''
    Dump function arguments to log file.
    Optionally, change the logging level of the call, the name of the logger to
    use and the specific message to log as well.
    '''
    def decorate(func):
        # get function argument names
        argnames = func.func_code.co_varnames[:func.func_code.co_argcount]
        # get function name
        fname = name if name else func.__module__
        logger = logging.getLogger(fname)
        logmsg = message if message else None

        @wraps(func)
        def wrapper(*args, **kwargs):
            """
            Log arguments, including name, type and value
            """
            def format_arg(arg):
                return '%s=%s<%s>' % (arg[0], arg[1].__class__.__name__, arg[1])
            logger.log(level, " args => {0}".format(', '.join(
                format_arg(entry) for entry in zip(argnames, args) + kwargs.items())))
            if logmsg:
                logger.log(level, logmsg)
            return func(*args, **kwargs)
        return wrapper
    return decorate
It is being called from my flask application like this:
@app.route("/hello/")
@app.route("/hello/<name>")
@api_decorators.logged
def hello(name=None):
    s = "Hello"
    if name:
        s = "%s %s!" % (s, name)
    else:
        s = "%s %s!" % (s, "World!")
    return s
The error that is produced is
TypeError: decorate() got an unexpected keyword argument 'name'
The entire stack trace is
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\flask\app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "C:\Python27\lib\site-packages\flask\app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "C:\Python27\lib\site-packages\flask\app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Python27\lib\site-packages\flask\app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "C:\Python27\lib\site-packages\flask\app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "C:\Python27\lib\site-packages\flask\app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "C:\Python27\lib\site-packages\flask\app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "C:\Python27\lib\site-packages\flask\app.py", line 1461, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
TypeError: decorate() got an unexpected keyword argument 'name'
How can I fix the combined code to eliminate this error?
Answer: You need to actually call the `logged` decorator, even if you don't have a
parameter to pass.
@api_decorators.logged()
def hello(name=None):
Your code has three functions:
1. The `logged` function, which creates a decorator that is configured with the arguments passed to it. It returns:
2. The inner `decorator` function, which takes a function to decorate. It returns:
3. The wrapper function, which wraps the decorated function.
So your code should be called like this:
logged()(hello)(name='something')
#Call 1 2 3, which calls hello inside it
But in your code, it is called like this:
logged(hello)(name='something')
#Call 1 2
The `decorator` function doesn't expect the `name` argument, which is what
causes the error.
You can use a hack allow the decorator to be used without calling it first.
You need to detect when the decorator is used without being called. I think it
would be something like this:
def logged(level=logging.INFO, name=None, message=None):
    ...
    # At the bottom, replace the `return decorate` with this.
    # If `level` is callable, `logged` itself was used directly as a decorator.
    if callable(level):
        f = level
        level = logging.INFO
        return decorate(f)
    else:
        return decorate
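Putting the hack together, a complete Python 3 compatible mini version (the argument-dumping body is trimmed to a one-line log call so the sketch stays short):

```python
import logging
from functools import wraps

def logged(level=logging.INFO, name=None, message=None):
    def decorate(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Trimmed body: just log the message (or function name) at `level`.
            logging.getLogger(name or func.__module__).log(
                level, message or func.__name__)
            return func(*args, **kwargs)
        return wrapper
    if callable(level):            # used as @logged with no parentheses
        func, level = level, logging.INFO
        return decorate(func)
    return decorate                # used as @logged(...) with a call

@logged                            # works without parentheses...
def a(x):
    return x + 1

@logged(level=logging.DEBUG)       # ...and with them
def b(x):
    return x * 2

assert a(1) == 2 and b(2) == 4
```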
|
LiveScore BeautifulSoup Python
Question: I am using BeautifulSoup to parse this site:
<http://www.livescore.com/soccer/champions-league/>
I'm looking to get the links for the rows with numbers, e.g.:
    FT Zenit St. Petersburg 3 - 0 Standard Liege
The "3 - 0" is a link; what I want to do is find every link with numbers
(so not results like
    15:45 APOEL Nicosia ? - ? Paris Saint Germain
), so I can load these links and parse out the minute data (`<td
class="min">`).
**Edit:** Now I'm able to get the links, like this:
import urllib2, re, bs4

sitioweb = urllib2.urlopen('http://www.livescore.com/soccer/champions-league/').read()
soup = bs4.BeautifulSoup(sitioweb)

href_tags = soup.find_all('a', {'class': "scorelink"})
links = []
for x in xrange(1, len(href_tags)):
    insert = href_tags[x].get("href")
    links.append(insert)
print links
Now my problem is the following: I want to write all this into a DB (such as
sqlite), together with the minute in which each goal was made (information I
can get from the linked page), but only for matches where the score is not
? - ?, since in that case no goal has been made.
I hope you can understand me...
Best regards and thanks a lot for your help,
Marco
Answer: The following search matches only your links:
import re

links = soup.find_all('a', class_='scorelink', href=True,
                      text=re.compile('\d+ - \d+'))
The search is limited to:
* `<a>` tags
* with the class `scorelink`
* a non-empty `href` attribute
* and the link text containing two digits separated by a dash.
Extracting just the links is then trivial:
score_urls = [link['href'] for link in soup.find_all(
'a', class_='scorelink', href=True, text=re.compile('\d+ - \d+'))]
Demo:
>>> from bs4 import BeautifulSoup
>>> import requests
>>> from pprint import pprint
>>> soup = BeautifulSoup(requests.get('http://www.livescore.com/soccer/champions-league/').content)
>>> [link['href'] for link in soup.find_all('a', class_='scorelink', href=True, text=re.compile('\d+ - \d+'))]
['/soccer/champions-league/group-e/cska-moscow-vs-manchester-city/1-1821202/', '/soccer/champions-league/qualifying-round/zenit-st-petersburg-vs-standard-liege/1-1801440/', '/soccer/champions-league/qualifying-round/apoel-nicosia-vs-aab/1-1801432/', '/soccer/champions-league/qualifying-round/bate-borisov-vs-slovan-bratislava/1-1801436/', '/soccer/champions-league/qualifying-round/celtic-vs-maribor/1-1801428/', '/soccer/champions-league/qualifying-round/fc-porto-vs-lille/1-1801444/', '/soccer/champions-league/qualifying-round/arsenal-vs-besiktas/1-1801438/', '/soccer/champions-league/qualifying-round/athletic-bilbao-vs-ssc-napoli/1-1801446/', '/soccer/champions-league/qualifying-round/bayer-leverkusen-vs-fc-koebenhavn/1-1801442/', '/soccer/champions-league/qualifying-round/malmo-ff-vs-salzburg/1-1801430/', '/soccer/champions-league/qualifying-round/pfc-ludogorets-razgrad-vs-steaua-bucuresti/1-1801434/']
>>> pprint(_)
['/soccer/champions-league/group-e/cska-moscow-vs-manchester-city/1-1821202/',
'/soccer/champions-league/qualifying-round/zenit-st-petersburg-vs-standard-liege/1-1801440/',
'/soccer/champions-league/qualifying-round/apoel-nicosia-vs-aab/1-1801432/',
'/soccer/champions-league/qualifying-round/bate-borisov-vs-slovan-bratislava/1-1801436/',
'/soccer/champions-league/qualifying-round/celtic-vs-maribor/1-1801428/',
'/soccer/champions-league/qualifying-round/fc-porto-vs-lille/1-1801444/',
'/soccer/champions-league/qualifying-round/arsenal-vs-besiktas/1-1801438/',
'/soccer/champions-league/qualifying-round/athletic-bilbao-vs-ssc-napoli/1-1801446/',
'/soccer/champions-league/qualifying-round/bayer-leverkusen-vs-fc-koebenhavn/1-1801442/',
'/soccer/champions-league/qualifying-round/malmo-ff-vs-salzburg/1-1801430/',
'/soccer/champions-league/qualifying-round/pfc-ludogorets-razgrad-vs-steaua-bucuresti/1-1801434/']
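The database side of the question was not covered above; a minimal sketch with the standard-library `sqlite3` module (the table layout and column names are my own) could look like this:

```python
import sqlite3

# A couple of scraped score links, as produced by the snippet above.
score_urls = [
    '/soccer/champions-league/group-e/cska-moscow-vs-manchester-city/1-1821202/',
    '/soccer/champions-league/qualifying-round/zenit-st-petersburg-vs-standard-liege/1-1801440/',
]

conn = sqlite3.connect(':memory:')   # use a file path for a persistent DB
conn.execute('CREATE TABLE goals (url TEXT PRIMARY KEY, minute INTEGER)')
# Store the links now; the goal minutes can be filled in after parsing each page.
conn.executemany('INSERT OR IGNORE INTO goals (url) VALUES (?)',
                 [(u,) for u in score_urls])
conn.commit()
print(conn.execute('SELECT COUNT(*) FROM goals').fetchone()[0])  # -> 2
```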
|
Pick subset from list at random and maintain equal number of picks overall in python
Question: Given a list of strings like so (in reality I have a much longer list but I'll
keep it short for here):
items=['fish','headphones','wineglass','bowtie','cheese','hammer','socks']
I would like to pick a subset, say 3, of this list randomly so that items can
only get picked once. This is easy enough using the following:
import itertools
import random
def random_combination(iterable, r):
"Random selection from itertools.combinations(iterable, r)"
pool = tuple(iterable)
n = len(pool)
indices = sorted(random.sample(xrange(n), r))
return tuple(pool[i] for i in indices)
items=['fish','headphones','wineglass','bowtie','cheese','hammer','socks']
randomPick=random_combination(items,3)
Next, to be a pain, I don't want to do this just once, but several times say
10 times. The final product would be 10 lists of randomly-only-picked-once
items, with the constraint that over those 10 lists items are presented an
equal number of times. I'd like to avoid "socks" being picked up 10 times and
"hammer" only once, for example.
This is the step that I'm stuck with, I simply don't know enough programming
or enough about the available functions in python to perform such a thing.
Can anyone help?
Answer: The following code might help. It pops a random element until (a copy of)
`iterable` is empty, then starts over from the entire list. The downside is
every item is picked once before a single item can be picked a second time.
However, as you can see from the output, the distribution of items ends up
about equal.
import random
def equal_distribution_combinations(iterable, n, csize):
"""
Yield 'n' lists of size 'csize' containing distinct random elements
from 'iterable.' Elements of 'iterable' are approximately evenly
distributed across all yielded combinations.
"""
i_copy = list(iterable)
if csize > len(i_copy):
raise ValueError(
                "csize cannot exceed len(iterable), as elements could not be distinct."
)
for i in range(n):
comb = []
for j in range(csize):
if not i_copy:
i_copy = list(iterable)
randi = random.randint(0, len(i_copy) - 1)
                # If i_copy was reinstantiated it would be possible to have
# duplicate elements in comb without this check.
while i_copy[randi] in comb:
randi = random.randint(0, len(i_copy) - 1)
comb.append(i_copy.pop(randi))
yield comb
## Edit
Apologies for Python 3. The only change to the function for Python 2 should be
`range` -> `xrange`.
## Edit 2 (answering comment question)
`equal_distribution_combinations` should result in an even distribution for
any `n`, `csize`, and length of `iterable`, as long as `csize` does not exceed
`len(iterable)` (otherwise the combination elements could not be distinct).
Here's a test using the specific numbers in your comment:
items = range(30)
item_counts = {k: 0 for k in items}
for comb in equal_distribution_combinations(items, 10, 10):
print(comb)
for e in comb:
item_counts[e] += 1
print('')
for k, v in item_counts.items():
print('Item: {0} Count: {1}'.format(k, v))
Output:
[19, 28, 3, 20, 2, 9, 0, 25, 27, 12]
[29, 5, 22, 10, 1, 8, 17, 21, 14, 4]
[16, 13, 26, 6, 23, 11, 15, 18, 7, 24]
[26, 14, 18, 20, 16, 0, 1, 11, 10, 2]
[27, 21, 28, 24, 25, 12, 13, 19, 22, 6]
[23, 3, 8, 4, 15, 5, 29, 9, 7, 17]
[11, 1, 8, 28, 3, 13, 7, 26, 16, 23]
[9, 29, 14, 15, 17, 21, 18, 24, 12, 10]
[19, 20, 0, 2, 25, 5, 22, 4, 27, 6]
[12, 13, 24, 28, 6, 7, 26, 17, 25, 23]
Item: 0 Count: 3
Item: 1 Count: 3
Item: 2 Count: 3
Item: 3 Count: 3
Item: 4 Count: 3
Item: 5 Count: 3
Item: 6 Count: 4
Item: 7 Count: 4
Item: 8 Count: 3
Item: 9 Count: 3
Item: 10 Count: 3
Item: 11 Count: 3
Item: 12 Count: 4
Item: 13 Count: 4
Item: 14 Count: 3
Item: 15 Count: 3
Item: 16 Count: 3
Item: 17 Count: 4
Item: 18 Count: 3
Item: 19 Count: 3
Item: 20 Count: 3
Item: 21 Count: 3
Item: 22 Count: 3
Item: 23 Count: 4
Item: 24 Count: 4
Item: 25 Count: 4
Item: 26 Count: 4
Item: 27 Count: 3
Item: 28 Count: 4
Item: 29 Count: 3
As can be seen, the items are evenly distributed.
|
How to replace the words that appear once in a sentence in python
Question: I want to replace the words that appear once in a sentence with `'<unk>'`.
Like for a sentence: `hello hello world my world`, I want the output to be
`hello hello world <unk> world`, how to do that?
Right now I'm doing like this:
wordlist1 = trainfiles.split(None)
wordlist2 = []
for word1 in wordlist1:
lastchar = word1[-1:]
if lastchar in [",",".","!","?",";"]:
word2 = word1.rstrip(lastchar)
else:
word2 = word1
wordlist2.append(word2)
freq = {}
for word2 in wordlist2:
freq[word2] = freq.get(word2,0)+1
keylist = freq.keys()
keylist.sort()
for key2 in keylist:
if freq[key2] == 1:
print "%-10s %d" % ('<unk>', freq[key2])
else:
print "%-10s %d" % (key2, freq[key2])
Which gives me an output like:
hello 2
<unk> 1
world 2
But, I need the output like :
hello hello world <unk> world
How to do that?
Answer: Use `collections.Counter` to count the frequency of the words in your sentence:
from collections import Counter
s = 'hello hello world my world'
counts = Counter(s.split())
Then use a generator expression to replace any word that has a count of 1, and
join the result with a space character.
replaced = ' '.join(i if counts[i] > 1 else '<unk>' for i in s.split())
Result
'hello hello world <unk> world'
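Putting the two steps together as one runnable function (the function name is my own):

```python
from collections import Counter

def replace_singletons(sentence, placeholder='<unk>'):
    # Count every word, then swap out those that occur exactly once.
    words = sentence.split()
    counts = Counter(words)
    return ' '.join(w if counts[w] > 1 else placeholder for w in words)

print(replace_singletons('hello hello world my world'))
# -> hello hello world <unk> world
```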
|
How to Define Google Endpoints API File Download Message Endpoint
Question: All the examples I can find on google endpoint api (e.g., tic-tac-toe sample)
show strings, integers, enums, etc fields. None of the examples say anything
about how to specify document (e.g., image or zip files) uploads or downloads
using the API. Is this not possible?
If this is possible, can anyone share a code snippet on how to define google
endpoint api on the server to allow downloads and uploads of files? For
example, is there a way to set HTTPResponse headers to specify that an
endpoint response will serve a zip file? How do we include the zip file in the
response?
An example with python or php would be appreciated. If anyone from the
endpoints-proto-datastore team is watching this discussion, please say whether
or not file downloads are supported in endpoints at the moment. We hate to
waste our time trying to figure this out if it is simply impossible. Thanks.
We are seeking a complete example for upload and download. We need to store
the key for the uploaded file in our database during upload and retrieve it
for download. The client app sends a token that the API needs to use to figure
out what file to download. Hence, we would need to store the blob key
generated during the upload process in our database. Our database would have
the mapping between the token and the blob file's key.
class BlobDataFile(models.Model):
data_code = models.CharField(max_length=10) # Key used by client app to request file
blob_key = models.CharField()
By the way, our app is written in Django 1.7 with a mysql (modeled with
models.Model) database. It is infuriating that all the examples for Google App
Engine upload I can find are written for standalone webapp handlers (no
urls.py/views.py solutions could be found anywhere). Hence, building a
standalone uploader is as much of a challenge as writing the API code. If your
solution has a full urls.py/views.py example for uploading files and saving the
blob_key in our BlobDataFile, it would be good enough for us.
Answer: If you use the blobstore, use the
[get_serving_url](https://cloud.google.com/appengine/docs/python/images/functions#Image_get_serving_url)
function to read the images from a URL in the client, or use a
[messages.BytesField](https://cloud.google.com/appengine/docs/python/tools/protorpc/messages/fieldclasses)
in the response message and decode the payload in the client with base64.b64decode:
    #the returned class
    class Img(messages.Message):
        message = messages.BytesField(1)

    #The api class
    @endpoints.api(name='helloImg', version='v1')
    class HelloImgApi(remote.Service):
        ID_RESOURCE = endpoints.ResourceContainer(
            message_types.VoidMessage,
            id=messages.StringField(1, variant=messages.Variant.STRING))

        @endpoints.method(ID_RESOURCE, Img,
                          path='serveimage/{id}', http_method='GET',
                          name='greetings.getImage')
        def image_get(self, request):
            try:
                # request.id is the blobstore key passed in the URL
                blob_reader = blobstore.BlobReader(request.id)
                value = blob_reader.read()
                return Img(message=value)
            except:
                raise endpoints.NotFoundException('image %s not found.' %
                                                  (request.id,))

    APPLICATION = endpoints.api_server([HelloImgApi])
And this is the response (save it in the client with the proper format)
{
"message": "/9j/4AAQSkZJRgABAQAAAQABAAD//gA+Q1JFQVRPUjogZ2QtanBlZyB2MS4wICh1c2luZyBJSkcgSlBFRyB2NjIpLCBkZWZhdWx0IHF1YWxpdHkK/9sAQwAIBgYHBgUIBwcHCQkICgwUDQwLCwwZEhMPFB0aHx4dGhwcICQuJyAiLCMcHCg3KSwwMTQ0NB8nOT04MjwuMzQy/9sAQwEJCQkMCwwYDQ0YMiEcITIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIy/8AAEQgBZwKAAwEiAAIRAQMRAf/EAB8AAAEFAQEBAQEBAAAAAAAAAAABAgMEBQYHCAkKC//EALUQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5+v/EAB8BAAMBAQEBAQEBAQEAAAAAAAABAgMEBQYHCAkKC//EALURAAIBAgQEAwQHBQQEAAECdwABAgMRBAUhMQYSQVEHYXETIjKBCBRCkaGxwQkjM1LwFWJy0QoWJDThJfEXGBkaJicoKSo1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoKDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uLj5OXm5+jp6vLz9PX29/j5+v/aAAwDAQACEQMRAD8A9/ooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACkpaKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAoopKACkZgilmIAHUntWTr3iTTfD1qZr2YBiPkjHLN9BXjniTxzqniSRreItb2ZOBDGeWH+0e/wBK5a+KhRWu/Y48TjadBa6vsei6z8TNF0udreDzL2VeD5ONgPpuP9M1zF18U9VuVZrWzt7OH/npKS5/DpmvP9kVqMyYkl/uDoPrUEs0kzbnbPoOwryZ46tPZ2PDqZlXm9HZHp/g74g3V54gFlqU2+Kf5Y3YBcN+HTNeqCvlqORopFkRiroQykdiK+k/D+oDVNBsrwHJkiUn645rvwFdzTjLdHpZZiZVIuEndo06KKK9E9YKKKKACikpaACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACkpaKAEoopaACkpaKACkpaKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigApKWoLu7gsrZ7i5lSKJBlnc4AFF7CbS1ZKSAMk1wPi74j22lb7PSytxeDhn6pH/ia5bxh8RrjVTJY6SzwWf3Wl6PJ/gK4iOAbfOnJVP1b6V5WJx9vdp/eeLjMzt7lL7ya4uLzWLt7q7naRzy0jngVE9wsSmO34z1c9TTZpzLhQNkY6KKhryG23dnhtuTuxOvWilPSu68DeApdbdNR1JGj04HK
J0M3+C+/etaVKVWXLE2oUZ1pcsEUvB/ge68STC4nDQ6ep5fHMnsv+Ne4afYW2mWMVnaRiOGJcKoqWCCK2hWGGNY40GFVRgAVLXv4fDxoxstz6fC4WGHjZbhRRRXQdQUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUlFAC0UUlAC0UUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFJRmsLxN4psfDViZbhg07D91CDyx/wqZSUVdkznGEeaT0Les65Y6FYvdXsoRR91e7H0Arw3xT4wvvE9zhyYrNT+7gU8fVvU1Q1zXr/wAQ37XN3IWOcJGPuoPQCqyqtoMsA03Ydl/+vXh4rGOp7sdj5vG5hKr7sNI/mIkSQKJJhlj91P8AGopZWlfcx+ntTWYuxZjkmkrhPNCg0V3PgPwO+tzLqN+hWwQ5RT/y1P8AhWlKlKrLlibUKEq0+WJJ4E8Btq8ialqcZWxU5SM8GU+/t/OvZo41iRURQqqMAAYAFEcSRRrHGoVFGAoHAFPr6GhQjRjZH1WGw0KEOWIUUUVudAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRSUALRRRQAlFLRQAUUUUAFJS0UAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUlFcn4y8Z23hq1MUZWW/kHyR5+77n2qJzjCPNIipUjTjzSehL4u8Y2nhizxxNfSD91CD+p9BXheo6lea1qD3V5K0s8h/AewHYUy8vLrVL6S5uZGmuJTksf5fSnZW2XauDKerf3fpXgYrFSrO3Q+YxmNlXlboHy2gwMGY9T2X/wCvVckkkk5JoPNFchwBSUtb/hLwvceJtTES5S0jOZpfQeg9zV04SnLliaUqcqklGO5f8D+DZfEd4Lm5Vk02Fvnb/nof7o/rXuUEEVtCkMKKkaDaqqMACo7Cxt9OsorS1jWOGJdqqKsV9FhsPGjGy3Pq8JhY4eFlv1FoooroOoKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigBKWiigApKWkoAWiiigAopKWgAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigApM0ZrlvGXjC38M2O1dsl9KD5UWen+0faonNQjzSIqVI04uUthnjTxlb+G7QxRFZL+Qfu4/7v+0a8Muru51K9e4uJGlnlbJJ5JNF5eXOpXsl1dStLPK2WY08AWy4HMp6n+7XgYnEyrS8j5fGYyVeXkHFspVcGU/eb+77CoOtFFchwCUUtSWtrNe3UdtbxtJNIwVVHUmhJt2RSTbsi5omjXWvapFY2q/MxyzdkXuTX0BoWiWug6ZFZWq4VR8zd2Pcms7wd4Wg8NaWqEK15KA00nv6D2FdJ3r6DB4VUo3e7PqMBg1QjzS+JgKWiiu09AKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAEopaKAEopaKAEopaKAEopaSgBaKKKAEpaKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKSsjxH4htfDulveXLAt0jjB5dvQUpSUVdkykormlsVfFnim28NaaZXIe5cYhizyx9T7V4HqOoXWrX8t5dyGSeQ5JPb2HtU2s6zd67qUl7dyFnY8L2UegqCNBAokcfOfuqe3vXz+LxTqystj5jHYx15WXwoVVFuuT/
AK0/+O1ETk5pSSxJPU0lcR51wpKWg0AJ1r2T4deDhpdqNVvo/wDTZl/dow/1Sn+prm/hz4Q/tK5XV76P/RYW/cow/wBYw7/QV7GABwK9nAYW372XyPfyzB2/fT+QUUUYr1T2xaKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAopKWgAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAopKiuLiK1t3nncJFGpZmY4AFFxN2K+q6pa6Np8t7dyBIoxnnqT6D3r5+8S+I7rxLqjXU5KxDiGLPCL/jV/xt4ul8S6iViZl0+EkRJ/eP8AeNc5BEGy7/cH614eNxXtHyx2PnMwxvtXyR+FfiPhjCL5sgz/AHV9TSMxdixOSaV3Ltk8DsPSm15tzyb3EopaSgQVveE/Dc3iTV1gAK2yfNM/oPT6msmxsp9RvYbO2QvNK21QK+gfDHh+Dw7pEdpEAZD80sn95q7cFhvayu9kell+E9vPml8KNO1tYbK1itreMJFGoVVHYCpqMUV9ClY+oSsrIWiiigYUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAJS0UUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFJRQAhOASTwK8a+IvjM6ncNpFhJ/okTfvXU/6xh2+grofiR4y/s+3bR7CT/SpV/eup/1ant9TXjyqXYKOSa8rHYq37uPzPEzLGW/dQfqOiiMr46KOSfQVM7g4VRhF6ChsRp5aHj+I+ppleM2eA3cKKKKQgpDS123w88K/2zqI1C7TNlbNkAjiR/T6CtaNJ1ZqKNqFGVaahE634ceE/wCy7Iarex4vLhfkVhzGn+JrvhQAAMDpS19NSpRpwUYn2FGjGjBQj0CiiitDUKKKKACiiigAooooAKSlooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKAEpaSigBaSiigBaKYzqv3mA+tNE8RbAlQn0zSuhXRLRSUUxi0UlLQAUUUUAFFJRQAVzfjPxRD4Z0hpAQ13KCsEfqfX6CtnU9St9J0+a9u3CQxLlj6+w96+d/EevXHiPWJb64JCn5Yo88IvYVx4vEeyjZbs4Mdi1QhZfEzPubia8uZLidzJNIxZmPUmpUXyU5++36Co4EAHmN/wEe9PJJ5718/KVz5acrsSiiioMxKKWgKWIABJPAAosNK5oaHo8+u6tDY24OXOXbsq9zX0HpWm2+k6dDZWyhY41AHv71zvgLwuNB0oT3Cf6dcANJkcoOy119fQ4LDeyhzPdn1OXYT2NPml8TFoooruPSCiiigAooooAKKKKAEpaKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAopKKACiiuJ8Y+P7bQVezsys+oYwR/DH9ff2rOpUjTjzSZnVqwpR5ps6PWde07QrYz31wqD+FOrN9BXmOt/FPULxjDpEIto+0jjc5/DoK4m6vL3Wbt7q9naRicl3PA9hTfMWIbYRg93PU14uIzCcnaGiPncVmlSbtT0Rcu7/U79vM1DUZj6B3P8hVdZkhkDpLcGQchw+0iqxJJyTk0lcDqSbu2eY6k27tnZaR8RdU00qkrNdQjtMcsPxr0bw9430zX3ECMYbnH+rk7/Q968Hp8UjwyrJG7
I6nKspwQa6qGOq03q7o7cPmVak7N3R9OClrhfAnjP+2YRp984F9GPlY/8tB/jXc179KrGpHmifT0a0K0FOAtFJS1oaiUhIUEk4ApTXA/ErxZ/ZGnf2baSYvLlfmI6onr+NZ1KipxcmZVqsaUHORxvxG8WnWdROnWkn+hW7ckHiR/X6CuJij8x/8AZHJNRjLN6k1bACJtH4185WqucnJnyWIryqTc31FY54HAHApKKK5zlCkpaSgA+ld78NfC/wDaN9/a93Hm1t2xEpHDv6/Qfzrk9D0efXdXgsIAQZD87f3V7mvoTTdPg0vT4LK2QJFEoVRXpZfhueXtJbI9fK8J7SftJbL8y3RS0V7p9KFFFFABRRRQAUUUUAFFFFABRRRQAlLRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUlFV72CS5spoIpmhd0KrIo5U+tJiZwfj3x8NKV9L0qQNesMSyjkQj0H+1/KvI0Rp3aaZ2bJyzE5LGtjxD4W1LQb9xfK0kTNlbgch8+/rWUTkY6AdBXzuLrVJztLQ+Tx9erOpaenkK7lgABhR0AptFFcR54UUUUAFFFFAEttczWdzHc27lJY2DIw7GvffCniGLxFo0d0uFmX5Jk/ut/h3r59rpfBPiFtA11DIx+yXGI5h2Ho34V3YHEOlOz2Z6WW4t0anLLZnvNLTVYOoZTkEZBFBIAya+iPqzO17WbfQdIn1C5b5I1+Ve7N2A+pr5w1TVLjWNTnv7pt0srbj6AdgPYV1HxJ8V/wBuax9htpM2NoxAweJH7t/QVxkKGR/bvXiY2vzy5Vsj53McT7SXKtkWIEwN569qlo4xxRXlt3Z4zd2FFFFIQUdeBRXYfD7w3/bWsC6nTNpakMc9GbsK1pUnUmoo2oUZVqihE7v4e+GRouk/a7hP9MugGbI5RewrtKAAAAOlFfT06apxUUfZUaUaUFCPQWikpa0NAooooAKKKKACiiigAooooAKKSloAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooASloooAKSlooAr3dnBfW7QXMSyxMMFWGa8y8RfDF0L3GjtuXr5LdR9DXqtJWFbD06ytJHNiMJSrq00fM9zaz2czQ3EbRyKcFWGMVDXt/jTwjHrdm91axgX0YyB08wen19DXis0LQuykEYJBBGCD6H3rwMThZUJeR8vjMHPDSs9V3IqKWiuU4xKKKKACiiigD2v4da/wD2roQtZnzc2mEOTyy9j/SoPiZ4rGgaKbO3fF7dgquDyidz/SvNvCmv/wDCO65HeOW+zkFZlHda5bxT4lm8SeILi/lJCs22JP7qDoK9qlinOhbrsfRUca54bl+1sVlYu/qTWnDH5ceO561S02LcPNYfStGvLqy1seLXlryoSiiisTAKKKKAJrO0mvryG1t0LzTMEUD1NfQnh7RYdB0aCxiAyoy7f3m7muF+F3hwBX125Tk5jtgR27t/T869Pr3svw/JD2j3Z9NleF9nD2kt3+QUtFFekesFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABSUtFABRRRQAlLRRQAlFLRQAlFLRQAleX/Evwx5W7X7OLKHAvI1HbtIPcd69QqOeGO4geGVA8bqVZSOCDWValGrBxZjiKEa1NwkfM7LtIwcqeQfUUlbHiXQn8O69NprZNu+ZLVz/AHT/AA/hWMRivmKtN05OLPja1J0puEgooorMyCiiigA61zlzZFdV8sfdc7hX
R0xokaVZCPmXoa1pVeRs3oVvZthEgijVB0Ap9FFZN3MW7u4lFFFABWn4f0eXXdagsIwdrnMjD+FB1NZley/DXw9/ZukHUJ0xc3YyMjlU7CuvB0PbVEuiO3AYb29VJ7Lc7O0tYrK0itoECRRKFUDsBU9FLX0iVtEfXJWVkFFFFMYUUUUAFFFFABRRRQAUUUlAC0UUUAFFFFABRRSUALRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUlAC0UUUAFFFFABRRSUALSUtJQBx/xE8Of274eeWFP9MtP3sRHU+orxFX86MSdG6MPQ19PMAykHoa8B8b6J/wAI94qlVVxZ3v7yP0BPUfnXlZjQuvaI8TNsNzL2qOfooIIJB60V4h86FFFFABRRSUDFpKKKACiijNOwG/4P0Jtf8QQwMD5EZ8yY/wCyO34178iLHGqIAFUYAHYVyPw90D+x9AWeVMXN1iR8jkL2FdhX0eCoeyp67s+sy7DexpK+7ClpM0V2HoC0UlFAC0UlFAC0UlFAC0UmaKAClpKKAFopKM0ALRSZozQAtFJmigBaKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiiigBKWiigAooooAKKKKACiiigAooooAKKKKACiiigBK474keH/AO2/DEskSZubTMseOpA6j8v5V2OaawDKQRkHqDUTipxcWRUgpxcX1PmGKTzoFc/eHyt9adWr4s0Y+HPF11ZgYtbj95Ce20nj8jkVknivmK1Nwm0z4zEUnTqOLFpKKM1iYhRSZpKYDs0maTNJmgY7NdF4J0M694khidc20H72Y9sDoPxP9a5vNe3/AA60P+yPDi3Eq4ubzEr56hf4R+XP412YKj7Sqr7I78vw/tqyvsjsRhQABgClzTM0Zr6I+sH5ozTM0ZoAfmjNMzRmgB9JTc0ZoAdmjNNzRmmA/NGaZmlzQA7NGabmjNADs0U3NGaAHZopM0ZoAWikzRmgB1FNpc0ALRSUUALRmkzRmgBc0uabS0AFLSUUALRSUUALRSUUALRSZpaACiiigApKWigAopKWgAooooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACiikoAWikzRmgBaKbmjNAC0ZpM0maQDqSm5o3UAOzSZpu6k3UAcD8WdB/tHw6upQrm4sG3HHUxn735cH8DXkCSiaJJO54P1r6YuIo7m3kglUNHIpVlPcEV816jp76F4gvtIlziNz5ZPdeqn8q8rMKN/fR4ea0L2qIZmjNMzRurx7Hg2HZpM03dSbqdh2HZozTN1a+geHNQ8R3YhtI8RA/vJm+6g/x9quEHJ2ii4U5TfLFalzwfoEmv65FGUJtYmDzt2wO34174uEUKowAMAVi+HtBtPDumraW3zMeZJD1c1rb69/CYf2MLPdn1GBwvsKdnu9yXNGai30b66jtJc0ZqLdRuoAlzRmo91G6gZJmjNR7qXdTAkzRmo80uaAH5ozTM0uaAH5ozTM0uaAHZpc0zNLmgB2aM03NGaAHZpc03NGaAHZopKM0AOopuaWgBc0ZpM0uaAClpKKAFopM0tABS0lFAC0UmaWgAooooAKKKKAFooooAKKSloAKKKKACiiigAooooAKKKKACiiigAopKKAFpKKSgBc0UmaTNAC5ozTc0maQDs0hNNzSFqAHk0m6oy1IWoAk3Um6oi1NL0CJS1IXqEyUwyUATl6QyVWMvvUZmHrQBbMleSfGDSvLmstchXn/Uykfmp/mK9NM/vWH4rsl1rw3fWRALNGWT2Ycisq0OeDRhXh7Sm4nh/mBwHHRhmk3VQsZiY3hfh4z0/nVkvXz86fLKx8tUpcsmiXdSFqiDFmCgEknAA716B4U8FoDHf6uv8AtR25
/m3+FXSoSqOyNaGFnWlaJT8J+CLnW3W6vd0FiDnOMNJ9P8a9fsbW1020S1s4VhhQYCr/ADNU1uFVQq4VQMADoKcLkHvXt0MPGitNz6HDYWFBabmn5lHmVni4HrThN71udRe30u+qYl96eJPegC1vpd1Vw9ODUAT7qN1RBqUGmMlzS5qPNKDQBJmlzUeaXNAEmaM0zNLmgB+aXNMzS0AOzS5plLQA7NLTaWgBaXNNzS0ALS5ptLQAtLSUUALmlpKKAFzS0lFAC0UlLQAUtJRQAtFJS0AFFFFAC0UlLQAtFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFJRRQAUlFFABSZopKACkzRSGgAJpM0E000ABNNJpTTTSAQmmlqDTTQAhamF6GqJjQKwM9RNJQxqu5NAWHNNUD3GO9RSM1UZnftQFi292B3qpLqKL1cD8azLkzHOM1iXkFy+fvUCscZ4s0iHTNZOoWThrW4Y+YmeY2P9KxtzFtoGTXU6jo9xcIylGYHtXM3HhvVzN8ofb2rjq4RTlc4a2BjUlc6rw6unaawurh1luv4R1Cf/XrrU8Swt0avObLw3qPG8NXRWfh+4XG4muqFOMFaJ2U6UaceWKOsj11W6Grceqbqw7bR3XGTWrBp+3FWaGlHfFqspck1TitdtXI4cUAWUlJqdXNQJHU6rQBKrGpQaiUVKKAJAacDTBTxQA4GnA00U4UAOpRTRS0AOpaQUtAC0tJS0ALS0lLQAUtIKWgBaBRQKAFooooAWlpKBQAtFFFAC0UUUALRSUtABS0lLQAUUUUALRSUtABRRRQA6iiigBKWiigAooooAKKKKACiiigApM0UUAFJS0lABSGlpDQAUlFFACUlLSUANNIadSUANppp+KaRQAwimEVIRSEUAQsKjK1YK0wrQBVZKiaOrpSmGOgDPaEHtUL2wPatQx0wxe1IRkNZqe1RNYqf4RW0Yfak8n2pCMM2Cf3R+VNOnIf4B+VbvkD0pPIHpQBg/2Yn90U9dPA/hrc8gelHke1AGQtmB2qVbUDtWn5A9KcIfagDPW39qlWH2q6IvanCL2oAqLFipBHVkR04R0wIAlPC1KEpwSgZEFpwFSbKXbQAwClAp+2l20ANApcU7FLigY3FLilxS4oATFLilxRigBKXFLilxTASilxS4oASloxRQAUUtFABS0UUAFFFLQAUUUUAApaKKAClpKWgAooooAKWkpaACiiigB1FFFABRRRQAUUUUAFFFFABSUtFACUUUUAFJS0UAJRRRQAlJTqTFACUlOpMUANpMU6jFADMUmKfikxSAZikxT8UYoAj20m2pMUYoEQ7aQrU2KTbQBDspNlT7aTbQBBspPLqxto20AV/Lo8urG2jbQBX8ul8up9tG2gCDZS7Km20baAsRbKNlTbaNtAEWyl21JtpcUAR7aXbT8UYoAZtpcU/FGKAGYpcU7FGKAG4pcU7FGKAG4oxTsUYoATFGKdijFAxMUYpcUtADcUtLijFACYpaMUtMBKKWigBKWjFFABRS0UAFFFGKACjFLRQAUUUUAFFFLQAUUUUAFFFFADqKKKACiiigAooooAKKKKACiiigAooooATFFLRQAlFFFACUUtFACUlOpKQCYpMU7FFADcUmKdijFADcUmKfijFAhmKMU7FGKAG4pMU/FGKAGYoxT8UmKAG4pMU/FGKAGYoxT8UYoAZilxTsUYoAbijFOxS4oAZijFPxSYoAbijFPxRigBmKMU/FGKBjcUYp2KMUANxRinYoxQIbijFOxRigY3FGKdRigBMUYpaKAExRS0UAJRilooASlpaKAEopaKYCUYpaKAExS0UYoAKKKWgBMUUtFABRRRQAUUUUAFFLRQAmKWiigBaKKKAEooooAWiiigAooooAKKKKACiiigAooooAKKKKACiiigBKKKKADFFFFABRRRSATFGKKKADFFFFABijFFFABijFFFABiiiigAxRiiigAxRiiigAxRiiigAxRi
iimAYoxRRQAUUUUAFFFFACUuKKKQCUtFFACYoxRRQAYoxRRQAUUUUALSUUUAFLRRQAUUUUAFFFFABRRRQAUUUUwCiiigAxRRRQAtFFFABRRRQAUUUUAf/9k="
}
in the client you can do this (in python for continuity)
import base64
myFile = open("mock.jpg", "wb")
img = base64.b64decode(value) #value is the returned string
myFile.write(img)
myFile.close()
|
Adding input validation to my Rock Paper Scissor PYTHON
Question: I am trying to add input validation to this so the user can only enter ROCK,
rock, PAPER, paper, SCISSORS or scissors. I am unsure about where to add it,
and really how to do it as an if statement. Any help is greatly appreciated.
Of course I would need to finish the game after they corrected their answer
import random
def main():
x = random.randint(1, 3)
rock = "ROCK, rock"
paper = "PAPER, paper"
scissors = "SCISSORS, scissors"
if x == 1:
x = 'ROCK'
elif x == 2:
x = 'PAPER'
elif x == 3:
x = 'SCISSORS'
guess = raw_input('rock, paper or scissors?: ')
print('CPU: ', x, 'Player: ', guess)
result = winner(x, guess)
if result == 'tie':
print('Its a tie try again!')
main()
else:
print(result, 'Wins')
def winner(x, guess):
if guess == 'scissors' and x == 'ROCK':
win = 'rock'
return win
elif guess == 'paper' and x == 'SCISSORS':
win = 'scissors'
return win
elif guess == 'paper' and x == 'ROCK':
win = 'paper'
return win
elif guess == 'rock' and x == 'PAPER':
win = 'paper'
return win
elif guess == 'rock' and x == 'SCISSORS':
win = 'rock'
return win
else:
win = 'tie'
return win
if __name__ == '__main__':
main()
Answer: Right after `guess = raw_input('rock, paper or scissors?: ')`, you could
check with a simple:
    if guess not in ('ROCK', 'rock', 'PAPER', 'paper', 'SCISSORS', 'scissors'): ...
Note 1: Instead of handling 'ROCK' and 'rock', etc., I suggest you convert the
user input to uppercase and only handle upper-case strings:
guess = raw_input('rock, paper or scissors?: ')
guess = guess.upper()
Note 2: I suggest you have a list of all possible answers:
valid_choices = ('ROCK','PAPER','SCISSORS')
and then just do:
guess = raw_input('rock, paper or scissors?: ')
guess = guess.upper()
    if guess not in valid_choices: ...  # handle the error here
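Those pieces combine into a small re-prompting loop; in the sketch below the input function is passed in (an assumption of mine, to keep it testable), while in the game you would simply pass `raw_input`:

```python
def ask_choice(read_input):
    # Re-prompt until the upper-cased answer is one of the valid choices.
    valid_choices = ('ROCK', 'PAPER', 'SCISSORS')
    while True:
        guess = read_input('rock, paper or scissors?: ').upper()
        if guess in valid_choices:
            return guess
        print('Please enter rock, paper or scissors.')

# Simulated session: the first answer is rejected, the second is accepted.
answers = iter(['banana', 'rock'])
print(ask_choice(lambda prompt: next(answers)))
```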
|
Imports from testing folder in Python
Question: I am writing a simple Pong game and am having some trouble with imports in my
testing. My project structure is as follows:
app/
__init__.py
src/
__init__.py
Side.py
Ball.py
test/
__init__.py
SideTests.py
in Side.py, I have:
from math import sqrt, pow
class Side:
def __init__(self, start, end):
self.start = start
self.end = end
def collision(self, ball):
# returns True if there is a collision
# returns False otherwise
(the details of the algorithm don't matter). In Ball.py, I have:
class Ball:
def __init__(self, position, direction, speed, radius):
self.position = position
self.direction = direction
self.speed = speed
self.radius = radius
in SideTests.py, I have:
import unittest
from src.Side import Side
from src.Ball import Ball
class SideTests(unittest.TestCase):
def setUp(self):
self.side = Side([0, 0], [0, 2])
self.ball_col = Ball([1, 1], [0, 0], 0, 1)
def test_collision(self):
self.assertTrue(self.side.collision(self.ball_col))
When I run:
python test/SideTests.py
from app/, I get:
Traceback (most recent call last):
File "tests/SideTests.py", line 15, in test_collision
self.assertTrue(self.side.collision(self.ball_col))
AttributeError: Side instance has no attribute 'collision'
I know this is probably a very simple import error, but none of the examples
I've looked at have helped solve this issue.
Answer: First, fix the indentation and imports in SideTests.py:
import unittest
from app.Side import Side
from app.Ball import Ball
class SideTests(unittest.TestCase):
def setUp(self):
self.side = Side([0, 0], [0, 2])
self.ball_col = Ball([1, 1], [0, 0], 0, 1)
You also do not need `test/__init__.py`.
Now to run this you either need to install a package called `app` in a
virtualenv or globally, or use a tool that will collect relative imports for
you before running tests, like nosetests which you can pip install.
~/app $ nosetests .
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
|
Reading from stdin in Python 2.7.6. Sys.stdout.flush() and python -u doesn't work
Question: It seems like many people have been struggling with getting buffers and stdin
and stout working across many flavors of Python. I'm writing a script in
Python 2.7.6 to read from stdin, do a regex match, and print a list of
matching strings.
import re, sys
barcodes=["The barcodes are:"]
curr=barcodes[0]
#iterate through stdin
for line in sys.stdin.readlines():
#do regex match in line
match = re.search('(?<=\:)[GATC]{6}', line.rstrip()).group(0)
matched = 0
#see if match has been seen before
if (match == curr):
matched = 1
print "matched curr"
else:
for a , val in enumerate(barcodes):
if (match == val):
print str(a) + " : " + val + " barcodes[a] " + str(barcodes[a])
curr = barcodes[a]
print curr
matched = 1
print "matched iteration"
#if match hasn't been seen before
if (matched == 0):
sys.stdout.write("NEW match")
sys.stdout.flush()
barcodes.append(match)
#print report of barcodes
for i in barcodes:
print i
Like many before me have found, this waits until it reads an EOF block from
stdin to print anything and I can't seem to find any documentation on how to
have the process run/print as it reads from stdin.
To be clear, this happens regardless of whether or not I call Python with the
-u flag.
Thank you for any guidance you can give me.
Answer: Here are some examples that read sys.stdin a line at a time. They do _not_
require the use of the python `-u` option.
#! /usr/bin/env python
import sys
def main():
count = 1
while True:
line = sys.stdin.readline()
if line == '':
break #EOF encountered
print "%3d: [%s]" % (count, line[:-1])
count += 1
if __name__ == '__main__':
main()
If you are using Linux / Unix, this version is better because it gives you
line editing.
#! /usr/bin/env python
import sys
import readline
def main():
count = 1
while True:
try:
line = raw_input()
print "%3d: [%s]" % (count, line)
count += 1
except EOFError:
break
if __name__ == '__main__':
main()
|
Python change locale with arrow
Question: I have a date string: "Viernes 24 de Octubre". I want to change it to arrow
datetime object. also i have installed es locales: `sudo apt-get install
language-pack-es-base` This doesn't work:
print arrow.get('Viernes 24 Octubre', 'dddd D MMMM', locale='es')
Thank you
Answer: [`arrow.parser.DateTimeParser()` uses
`calendar.month_name[1:]`](https://github.com/crsmithdev/arrow/blob/2876bf8582a8/arrow/parser.py#L33)
to parse month names i.e., you need to set locale before calling
`arrow.get()`:
import calendar
import locale
print(calendar.month_name[10])
# -> October
locale.setlocale(locale.LC_TIME, 'es_ES.UTF-8') # system-dependent locale name
print(calendar.month_name[10])
# -> Octubre
Note: changing the locale affects the whole program.
|
Remove duplicate sequences from fasta file based on ID
Question: I wrote a tiny biopython script to extract sequences from a fasta file based
on ID, but it also extracts duplicates, so I am looking to filter out duplicate
sequences (i.e. those with the exact same ID) from my fasta files.
I tried to modify my script but I failed:
from Bio import SeqIO
id = []
for line in open("short.txt","r"):
id.append(line.rstrip().strip('"'))
for rec in SeqIO.parse("out.fa","fasta"):
#print rec.id
if rec.id in id:
if rec.id not in rec.format:
print rec.format("fasta")
Can anyone help?
Answer:
    from Bio import SeqIO

    seen_ids = set()
    for rec in SeqIO.parse("out.fa", "fasta"):
        if rec.id not in seen_ids:
            seen_ids.add(rec.id)
            print rec.format("fasta")
|
Beginner: Adding x number of widgets to layout
Question: Really easy I'm sure but I'm learning Python & kivy (as a hobbyist not
professional).
I have made my first 'complex' kivy layout, and am now attempting to add
python code to it, and _I am fundamentally mis-understanding some things_. I
am keeping it all organised in seperate files where possible.
**1\. Within a GridLayout I have a ScrollView. All I want is to be able to add
'x' number of buttons to the ScrollView in it's python class.**
all relevant files ('...' indicating I have trimmed to only the relevant
parts)
**seatingmanager.py:**
...
Builder.load_file('timescroll.kv')
...
class SeatingManager(AnchorLayout):
pass
class SeatingManagerApp(App):
def build(self):
return SeatingManager()
**seatingmanager.kv:**
<SeatingManager>
...
AnchorLayout:
...
GridLayout:
...
TimeScroll:
                size_hint: None, None
height: 50
width: 500
**2\. This is creating an instance of the TimeScroll class? This is where to
add specific attributes to this instance?**
timescroll.kv:
#:import timescroll timescroll
<TimeScroll>
**3\. This is where I can add attributes to all TimeScroll instances? If I am
not adding any is this file necessary (other than importing timescroll.py)?**
**timescroll.py:**(where I presume my problems lay)
from kivy.uix.togglebutton import ToggleButton
from kivy.uix.scrollview import ScrollView
from kivy.uix.boxlayout import BoxLayout
class TimeScroll(ScrollView):
def build(self):
layout = BoxLayout(orientation='vertical', size_hint_x=None,width=1500)
for i in range(10):
btn = ToggleButton(text=str(i), group='timeHeaders')
layout.add_widget(btn)
self.add_widget(layout)
return layout
**4\. Is the build method automatically called when an instance of this class
is created? If not, why is it called automatically in the first file?**
**5\. This code doesn't work, simply leaves the ScrollView blank, so I presume
I am adding the BoxLayout to the ScrollView incorrectly or the build method
isn't automatically called.**
Answer: > 4. Is the build method automatically called when an instance of this class
> is created? If not, why is it called automatically in the first file?
>
The build method is never called for widgets, unless you do so yourself. You
should use the `__init__` method, as per normal python convention (and don't
forget to call `super`).
The App class has a build method that is called to start the user's own code,
but the App is not a widget, and this is the only place kivy will
automatically run a build method.
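To illustrate the convention without needing Kivy installed, here is a sketch with a stub class standing in for `ScrollView` (illustration only, not real Kivy code): do the setup work in `__init__`, and call `super` before adding children.

```python
class ScrollView(object):
    """Minimal stand-in for kivy.uix.scrollview.ScrollView (illustration only)."""
    def __init__(self, **kwargs):
        self.widgets = []

    def add_widget(self, widget):
        self.widgets.append(widget)


class TimeScroll(ScrollView):
    def __init__(self, **kwargs):
        # Call super first so the base class initializes itself.
        super(TimeScroll, self).__init__(**kwargs)
        # Then do the work the poster had in build():
        layout = ['toggle-%d' % i for i in range(10)]
        self.add_widget(layout)


ts = TimeScroll()  # the widget is fully built as soon as it is constructed
```

With the real Kivy classes the shape is the same: override `__init__`, call `super`, then `add_widget` your children.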
|
Evaluate math equations from unsafe user input in Python
Question: I have a website where the user enters math equations (expressions) and then
those equations are evaluated against data (constants) provided by the
website. The math operations needed include symbols, arithmetic operations,
`min()`, `max()` and some other basic functions. A sample equation could be:
max(a * b + 100, a / b - 200)
One could simply `eval()` this using Python, but as we all know this leads to
compromising the site. What would be a safe approach to evaluating math
equations?
* What math equation parsing and evaluation engines there are for Python
* If one chooses to use Python itself to evaluate the expression are there any Python sandboxes which would limit the Python, so that only user supplier operators and functions are available. Full-fledged Python, like defining functions, should be totally disabled. Subprocesses are ok (see [PyPy sandbox](http://pypy.readthedocs.org/en/latest/sandbox.html)). Specially, for loops and other holes for exploiting memory and CPU usage should be closed.
* Any other approaches e.g. by using a command line binary (bc)?
Answer: There is a relatively easy way of doing this in Python without third party
packages.
* Use `compile()` to turn a single-line Python expression into bytecode, as if preparing it for `eval()`
* Do not run the bytecode through `eval()`; instead, run it in your own opcode loop and implement only the opcodes you really need. E.g. no built-ins and no attribute access, so the sandbox cannot be escaped.
However there are some gotchas, like guarding against CPU exhaustion and memory
exhaustion, which are not specific to this method and are issues with other
approaches too.
[Here is a full blog post about the
topic](http://opensourcehacker.com/2014/10/29/safe-evaluation-of-math-
expressions-in-pure-python/). [Here is a related
gist](https://gist.github.com/miohtama/34a83d870a14aa7e580d). Below is
shortened sample code.
""""
The orignal author: Alexer / #python.fi
"""
import opcode
import dis
import sys
import multiprocessing
import time
# Python 3 required
assert sys.version_info[0] == 3, "No country for old snakes"
class UnknownSymbol(Exception):
""" There was a function or constant in the expression we don't support. """
class BadValue(Exception):
""" The user tried to input dangerously big value. """
MAX_ALLOWED_VALUE = 2**63
class BadCompilingInput(Exception):
""" The user tried to input something which might cause compiler to slow down. """
def disassemble(co):
""" Loop through Python bytecode and match instructions with our internal opcodes.
:param co: Python code object
"""
code = co.co_code
n = len(code)
i = 0
extended_arg = 0
result = []
while i < n:
op = code[i]
curi = i
i = i+1
if op >= dis.HAVE_ARGUMENT:
# Python 2
# oparg = ord(code[i]) + ord(code[i+1])*256 + extended_arg
oparg = code[i] + code[i+1] * 256 + extended_arg
extended_arg = 0
i = i+2
if op == dis.EXTENDED_ARG:
# Python 2
#extended_arg = oparg*65536L
extended_arg = oparg*65536
else:
oparg = None
# print(opcode.opname[op])
opv = globals()[opcode.opname[op].replace('+', '_')](co, curi, i, op, oparg)
result.append(opv)
return result
# For the opcodes see dis.py
# (Copy-paste)
# https://docs.python.org/2/library/dis.html
class Opcode:
""" Base class for out internal opcodes. """
args = 0
pops = 0
pushes = 0
def __init__(self, co, i, nexti, op, oparg):
self.co = co
self.i = i
self.nexti = nexti
self.op = op
self.oparg = oparg
def get_pops(self):
return self.pops
def get_pushes(self):
return self.pushes
def touch_value(self, stack, frame):
assert self.pushes == 0
for i in range(self.pops):
stack.pop()
class OpcodeArg(Opcode):
args = 1
class OpcodeConst(OpcodeArg):
def get_arg(self):
return self.co.co_consts[self.oparg]
class OpcodeName(OpcodeArg):
def get_arg(self):
return self.co.co_names[self.oparg]
class POP_TOP(Opcode):
"""Removes the top-of-stack (TOS) item."""
pops = 1
def touch_value(self, stack, frame):
stack.pop()
class DUP_TOP(Opcode):
"""Duplicates the reference on top of the stack."""
# XXX: +-1
pops = 1
pushes = 2
def touch_value(self, stack, frame):
stack[-1:] = 2 * stack[-1:]
class ROT_TWO(Opcode):
"""Swaps the two top-most stack items."""
pops = 2
pushes = 2
def touch_value(self, stack, frame):
stack[-2:] = stack[-2:][::-1]
class ROT_THREE(Opcode):
"""Lifts second and third stack item one position up, moves top down to position three."""
pops = 3
pushes = 3
direct = True
def touch_value(self, stack, frame):
v3, v2, v1 = stack[-3:]
stack[-3:] = [v1, v3, v2]
class ROT_FOUR(Opcode):
"""Lifts second, third and forth stack item one position up, moves top down to position four."""
pops = 4
pushes = 4
direct = True
def touch_value(self, stack, frame):
v4, v3, v2, v1 = stack[-4:]
stack[-4:] = [v1, v4, v3, v2]
class UNARY(Opcode):
"""Unary Operations take the top of the stack, apply the operation, and push the result back on the stack."""
pops = 1
pushes = 1
class UNARY_POSITIVE(UNARY):
"""Implements TOS = +TOS."""
def touch_value(self, stack, frame):
stack[-1] = +stack[-1]
class UNARY_NEGATIVE(UNARY):
"""Implements TOS = -TOS."""
def touch_value(self, stack, frame):
stack[-1] = -stack[-1]
class BINARY(Opcode):
"""Binary operations remove the top of the stack (TOS) and the second top-most stack item (TOS1) from the stack. They perform the operation, and put the result back on the stack."""
pops = 2
pushes = 1
class BINARY_POWER(BINARY):
"""Implements TOS = TOS1 ** TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
if abs(TOS1) > MAX_ALLOWED_VALUE or abs(TOS) > MAX_ALLOWED_VALUE:
raise BadValue("The value for exponent was too big")
stack[-2:] = [TOS1 ** TOS]
class BINARY_MULTIPLY(BINARY):
"""Implements TOS = TOS1 * TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 * TOS]
class BINARY_DIVIDE(BINARY):
"""Implements TOS = TOS1 / TOS when from __future__ import division is not in effect."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 / TOS]
class BINARY_MODULO(BINARY):
"""Implements TOS = TOS1 % TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 % TOS]
class BINARY_ADD(BINARY):
"""Implements TOS = TOS1 + TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 + TOS]
class BINARY_SUBTRACT(BINARY):
"""Implements TOS = TOS1 - TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 - TOS]
class BINARY_FLOOR_DIVIDE(BINARY):
"""Implements TOS = TOS1 // TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 // TOS]
class BINARY_TRUE_DIVIDE(BINARY):
"""Implements TOS = TOS1 / TOS when from __future__ import division is in effect."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 / TOS]
class BINARY_LSHIFT(BINARY):
"""Implements TOS = TOS1 << TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 << TOS]
class BINARY_RSHIFT(BINARY):
"""Implements TOS = TOS1 >> TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 >> TOS]
class BINARY_AND(BINARY):
"""Implements TOS = TOS1 & TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 & TOS]
class BINARY_XOR(BINARY):
"""Implements TOS = TOS1 ^ TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 ^ TOS]
class BINARY_OR(BINARY):
"""Implements TOS = TOS1 | TOS."""
def touch_value(self, stack, frame):
TOS1, TOS = stack[-2:]
stack[-2:] = [TOS1 | TOS]
class RETURN_VALUE(Opcode):
"""Returns with TOS to the caller of the function."""
pops = 1
final = True
def touch_value(self, stack, frame):
value = stack.pop()
return value
class LOAD_CONST(OpcodeConst):
"""Pushes co_consts[consti] onto the stack.""" # consti
pushes = 1
def touch_value(self, stack, frame):
# XXX moo: Validate type
value = self.get_arg()
assert isinstance(value, (int, float))
stack.append(value)
class LOAD_NAME(OpcodeName):
"""Pushes the value associated with co_names[namei] onto the stack.""" # namei
pushes = 1
def touch_value(self, stack, frame):
# XXX moo: Get name from dict of valid variables/functions
name = self.get_arg()
if name not in frame:
raise UnknownSymbol("Does not know symbol {}".format(name))
stack.append(frame[name])
class CALL_FUNCTION(OpcodeArg):
"""Calls a function. The low byte of argc indicates the number of positional parameters, the high byte the number of keyword parameters. On the stack, the opcode finds the keyword parameters first. For each keyword argument, the value is on top of the key. Below the keyword parameters, the positional parameters are on the stack, with the right-most parameter on top. Below the parameters, the function object to call is on the stack. Pops all function arguments, and the function itself off the stack, and pushes the return value.""" # argc
pops = None
pushes = 1
def get_pops(self):
args = self.oparg & 0xff
kwargs = (self.oparg >> 8) & 0xff
return 1 + args + 2 * kwargs
def touch_value(self, stack, frame):
argc = self.oparg & 0xff
kwargc = (self.oparg >> 8) & 0xff
assert kwargc == 0
if argc > 0:
args = stack[-argc:]
stack[:] = stack[:-argc]
else:
args = []
func = stack.pop()
assert func in frame.values(), "Uh-oh somebody injected bad function. This does not happen."
result = func(*args)
stack.append(result)
def check_for_pow(expr):
""" Python evaluates power operator during the compile time if its on constants.
You can do CPU / memory burning attack with ``2**999999999999999999999**9999999999999``.
We mainly care about memory now, as we catch timeouts in any case.
We just disable pow and do not care about it.
"""
if "**" in expr:
raise BadCompilingInput("Power operation is not allowed")
def _safe_eval(expr, functions_and_constants={}, check_compiling_input=True):
""" Evaluate a Pythonic math expression and return the output as a string.
The expr is limited to 1024 characters / 1024 operations
to prevent CPU burning or memory stealing.
:param functions_and_constants: Supplied "built-in" data for evaluation
"""
# Some safety checks
assert len(expr) < 1024
# Check for potential bad compiler input
if check_compiling_input:
check_for_pow(expr)
# Compile Python source code to Python code for eval()
code = compile(expr, '', 'eval')
# Dissect bytecode back to Python opcodes
ops = disassemble(code)
assert len(ops) < 1024
stack = []
for op in ops:
value = op.touch_value(stack, functions_and_constants)
return value
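For comparison, a much shorter (if less instructive) route is to validate the expression's AST before evaluating it. Below is a sketch using the standard `ast` module (written for Python 3.8+); the whitelist only covers the operations mentioned in the question, so treat it as a starting point, not a complete implementation:

```python
import ast
import operator

# Whitelisted binary operators; anything not listed here is rejected.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}
_FUNCS = {'min': min, 'max': max}

def safe_eval(expr, names):
    """Evaluate a small arithmetic expression against the `names` dict."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value                      # numeric literal
        if isinstance(node, ast.Name):
            return names[node.id]                  # user-supplied constant
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in _FUNCS and not node.keywords):
            return _FUNCS[node.func.id](*[_eval(a) for a in node.args])
        raise ValueError("disallowed expression: %r" % node)
    return _eval(ast.parse(expr, mode='eval'))
```

For example, `safe_eval("max(a * b + 100, a / b - 200)", {'a': 10, 'b': 2})` returns `120`, while anything outside the whitelist (attribute access, `**`, unknown functions) raises `ValueError` instead of executing.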
|
`reduce(add,...)` vs `sum(...)`, why the second fails on a list of objects
Question: I have the expectation that `reduce(add,lst)` and `sum(lst)` should give me
the same result, but
In [18]: class p():
def __init__(self, x, y):
self.x=x ; self.y=y
def __repr__(self):
return "(%r,%r)"%(self.x,self.y)
def __add__(self, P):
return p(self.x+P.x, self.y+P.y)
....:
In [19]: pts=[p(1,0), p(2,1), p(-3,4)]
In [20]: from operator import add
In [21]: print reduce(add,pts)
(0,5)
In [22]: print sum(pts)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-22-98a81789d257> in <module>()
----> 1 print sum(pts)
TypeError: unsupported operand type(s) for +: 'int' and 'instance'
In [23]:
of course I'm not understanding something, possibly obvious; could someone
enlighten me?
Answer: `reduce()` starts with `pts[0]` as the initial value (unless you give it an
explicit starting value), but `sum()` defaults with `0`. From the [`sum()`
function documentation](https://docs.python.org/2/library/functions.html#sum):
> `sum(iterable[, start])`
> Sums _start_ and the items of an iterable from left to right and returns
> the total. _start_ defaults to `0`.
Give `sum()` a better _start_ value; use the first object just like `reduce()`
does:
sum(pts[1:], pts[0])
or a suitable empty value:
sum(pts, p(0, 0))
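Putting it together with the poster's class, a runnable sketch (using `functools.reduce` so it also works on Python 3):

```python
from functools import reduce
from operator import add

class p(object):
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __repr__(self):
        return "(%r,%r)" % (self.x, self.y)
    def __add__(self, P):
        return p(self.x + P.x, self.y + P.y)

pts = [p(1, 0), p(2, 1), p(-3, 4)]

total = sum(pts, p(0, 0))        # explicit start avoids 0 + p(...)
same = reduce(add, pts)          # reduce starts from pts[0]
```

Both produce `(0,5)`.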
|
What is the proper goose import syntax
Question: The goose install places goose in the python-goose directory. When I try to
import goose at the IDLE prompt I get:
>>> from goose import Goose
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
from goose import Goose
ImportError: No module named goose
Because goose is installed in the python-goose directory I believe the import
syntax should be: from python-goose.goose import Goose however when I run this
I get the following syntax error message:
>>> from python-goose.goose import Goose
SyntaxError: invalid syntax
Any suggestions on how to properly import goose would be appreciated.
Answer: Simply copy the **goose** directory (`/goose-extractor/goose`) out of the
`goose-extractor` directory and into your Python packages directory:
/goose-extractor
/goose
And import it
from goose import Goose
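The underlying issue is that the directory *containing* the `goose` package must be on `sys.path`. Here is a self-contained demonstration of that mechanism using a throwaway stub package (the real fix is to append your actual `python-goose` checkout directory instead):

```python
import os
import sys
import tempfile

# Build a stub 'goose' package in a temp dir, standing in for /python-goose/goose.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'goose')
os.mkdir(pkg)
with open(os.path.join(pkg, '__init__.py'), 'w') as f:
    f.write('class Goose(object):\n    pass\n')

# Make the *parent* directory importable -- not the package directory itself.
sys.path.append(root)

from goose import Goose   # the plain import now works
```

Note that `from python-goose.goose import Goose` can never work: a hyphen is not valid in a Python module name, which is why you got a `SyntaxError`.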
|
performance of NumPy with different BLAS implementations
Question: I'm running an algorithm that is implemented in Python and uses NumPy. The
most computationally expensive part of the algorithm involves **solving a set
of linear systems** (i.e. a call to `numpy.linalg.solve()`). I came up with
this small benchmark:
import numpy as np
import time
# Create two large random matrices
a = np.random.randn(5000, 5000)
b = np.random.randn(5000, 5000)
t1 = time.time()
# That's the expensive call:
np.linalg.solve(a, b)
print time.time() - t1
I've been running this on:
1. My laptop, a late 2013 MacBook Pro 15" with 4 cores at 2GHz (`sysctl -n machdep.cpu.brand_string` gives me _Intel(R) Core(TM) i7-4750HQ CPU @ 2.00GHz_)
2. An Amazon EC2 `c3.xlarge` instance, with 4 vCPUs. Amazon advertises them as "High Frequency Intel Xeon E5-2680 v2 (Ivy Bridge) Processors"
Bottom line:
* On the Mac it runs in **~4.5 seconds**
* On the EC2 instance it runs in **~19.5 seconds**
I have tried it also on other OpenBLAS / Intel MKL based setups, and the
runtime is always comparable to what I get on the EC2 instance (modulo the
hardware config.)
**Can anyone explain why the performance on Mac (with the Accelerate
Framework) is > 4x better?** More details about the NumPy / BLAS setup in each
are provided below.
## Laptop setup
`numpy.show_config()` gives me:
atlas_threads_info:
NOT AVAILABLE
blas_opt_info:
extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
extra_compile_args = ['-msse3', '-I/System/Library/Frameworks/vecLib.framework/Headers']
define_macros = [('NO_ATLAS_INFO', 3)]
atlas_blas_threads_info:
NOT AVAILABLE
openblas_info:
NOT AVAILABLE
lapack_opt_info:
extra_link_args = ['-Wl,-framework', '-Wl,Accelerate']
extra_compile_args = ['-msse3']
define_macros = [('NO_ATLAS_INFO', 3)]
atlas_info:
NOT AVAILABLE
lapack_mkl_info:
NOT AVAILABLE
blas_mkl_info:
NOT AVAILABLE
atlas_blas_info:
NOT AVAILABLE
mkl_info:
NOT AVAILABLE
## EC2 instance setup:
On Ubuntu 14.04, I installed OpenBLAS with
sudo apt-get install libopenblas-base libopenblas-dev
When installing NumPy, I created a `site.cfg` with the following contents:
[default]
library_dirs= /usr/lib/openblas-base
[atlas]
atlas_libs = openblas
`numpy.show_config()` gives me:
atlas_threads_info:
libraries = ['lapack', 'openblas']
library_dirs = ['/usr/lib']
define_macros = [('ATLAS_INFO', '"\\"None\\""')]
language = f77
include_dirs = ['/usr/include/atlas']
blas_opt_info:
libraries = ['openblas']
library_dirs = ['/usr/lib']
language = f77
openblas_info:
libraries = ['openblas']
library_dirs = ['/usr/lib']
language = f77
lapack_opt_info:
libraries = ['lapack', 'openblas']
library_dirs = ['/usr/lib']
define_macros = [('ATLAS_INFO', '"\\"None\\""')]
language = f77
include_dirs = ['/usr/include/atlas']
openblas_lapack_info:
NOT AVAILABLE
lapack_mkl_info:
NOT AVAILABLE
blas_mkl_info:
NOT AVAILABLE
mkl_info:
NOT AVAILABLE
Answer: The reason for this behavior could be that Accelerate uses multithreading,
while the others don't.
Most BLAS implementations follow the environment variable `OMP_NUM_THREADS` to
determine how many threads to use. I believe they only use 1 thread if not
told otherwise explicitly. [Accelerate's man
page](https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man7/Accelerate.7.html),
however, sounds like threading is turned on by default; it can be turned off by
setting the environment variable `VECLIB_MAXIMUM_THREADS`.
To determine if this is really what's happening, try
export VECLIB_MAXIMUM_THREADS=1
before calling the Accelerate version, and
export OMP_NUM_THREADS=4
for the other versions.
Independent of whether this is really the reason, it's a good idea to always
set these variables when you use BLAS to be sure you control what is going on.
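The same variables can also be set from inside Python, as long as it happens before NumPy (and hence the BLAS library) is imported -- a sketch:

```python
import os

# Must be set before `import numpy`: BLAS reads these when it is loaded.
os.environ['OMP_NUM_THREADS'] = '4'          # OpenBLAS / ATLAS / MKL
os.environ['VECLIB_MAXIMUM_THREADS'] = '4'   # Accelerate on OS X

import numpy as np

a = np.random.randn(200, 200)
b = np.random.randn(200, 200)
x = np.linalg.solve(a, b)
```

Setting the variables after NumPy is already imported has no effect, which is a common source of confusion when benchmarking.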
|
Sorting a Python list containing dicts
Question: I am looking for the fastest way to re-factor the following list, which
contains dicts as items:
[{u'domain': u'1d663096.bestapp243.biz',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'0000000002264A00',
u'indicator': u'Snd',
u'ip': u'172.30.133.105',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/12/2014 11:20:27 AM 1114 PACKET 0000000002264A00 UDP Snd 172.30.133.105 aba2 R Q [8081 DR NOERROR] A .1d663096.bestapp243.biz.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'1114',
u'timestamp': u'2014-10-12T11:20:27',
u'xid': u'aba2'},
{u'domain': u'1d663096.bestapp243.biz',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'0000000002264A00',
u'indicator': u'Snd',
u'ip': u'172.30.133.105',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/12/2014 11:20:27 AM 1114 PACKET 0000000002264A00 UDP Snd 172.30.133.105 aba2 R Q [8081 DR NOERROR] A .1d663096.bestapp243.biz.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'1114',
u'timestamp': u'2014-10-12T11:20:27',
u'xid': u'aba2'},
{u'domain': u'mgames.cf',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'000000000220ED40',
u'indicator': u'Snd',
u'ip': u'172.30.138.116',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/13/2014 2:31:46 PM 110C PACKET 000000000220ED40 UDP Snd 172.30.138.116 f957 R Q [8081 DR NOERROR] A .mgames.cf.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'110C',
u'timestamp': u'2014-10-13T14:31:46',
u'xid': u'f957'},
{u'domain': u'google.com',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'0000000002264A00',
u'indicator': u'Snd',
u'ip': u'172.30.133.105',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/12/2014 11:20:27 AM 1114 PACKET 0000000002264A00 UDP Snd 172.30.133.105 aba2 R Q [8081 DR NOERROR] A .google.com.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'1114',
u'timestamp': u'2014-10-12T11:20:27',
u'xid': u'aba2'},
{u'domain': u'qwe.domainsworkingsdromms.com',
u'flag_char_code': u'DR',
u'flag_hex': u'8381',
u'identifier': u'030E8D88',
u'indicator': u'Snd',
u'ip': u'172.27.29.77',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/14/2014 10:37:13 AM 17E0 PACKET 030E8D88 UDP Snd 172.27.29.77 80eb R Q [8381 DR NXDOMAIN] A .qwe.domainsworkingsdromms.com.',
u'record': u'A',
u'status': u'NXDOMAIN',
u'thread_id': u'17E0',
u'timestamp': u'2014-10-14T10:37:13',
u'xid': u'80eb'}]
to output something like:
{
'172.30.133.105': {
'1d663096.bestapp243.biz':
[
{u'domain': u'1d663096.bestapp243.biz',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'0000000002264A00',
u'indicator': u'Snd',
u'ip': u'172.30.133.105',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/12/2014 11:20:27 AM 1114 PACKET 0000000002264A00 UDP Snd 172.30.133.105 aba2 R Q [8081 DR NOERROR] A .1d663096.bestapp243.biz.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'1114',
u'timestamp': u'2014-10-12T11:20:27',
u'xid': u'aba2'},
{u'domain': u'1d663096.bestapp243.biz',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'0000000002264A00',
u'indicator': u'Snd',
u'ip': u'172.30.133.105',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/12/2014 11:20:27 AM 1114 PACKET 0000000002264A00 UDP Snd 172.30.133.105 aba2 R Q [8081 DR NOERROR] A .1d663096.bestapp243.biz.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'1114',
u'timestamp': u'2014-10-12T11:20:27',
u'xid': u'aba2'},
],
'google.com':
[
{u'domain': u'google.com',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'0000000002264A00',
u'indicator': u'Snd',
u'ip': u'172.30.133.105',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/12/2014 11:20:27 AM 1114 PACKET 0000000002264A00 UDP Snd 172.30.133.105 aba2 R Q [8081 DR NOERROR] A .google.com.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'1114',
u'timestamp': u'2014-10-12T11:20:27',
u'xid': u'aba2'}
]
},
'172.30.138.116': {
'mgames.cf':
[
{u'domain': u'mgames.cf',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'000000000220ED40',
u'indicator': u'Snd',
u'ip': u'172.30.138.116',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/13/2014 2:31:46 PM 110C PACKET 000000000220ED40 UDP Snd 172.30.138.116 f957 R Q [8081 DR NOERROR] A .mgames.cf.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'110C',
u'timestamp': u'2014-10-13T14:31:46',
u'xid': u'f957'}
]
}
}
Should I do the work in two passes: first build a dict with IPs as keys, then
iterate over it to set the domains as keys of the sub-dicts?
Any ideas would be appreciated. Thanks
Answer: If you know the re-factored structure you need, build it explicitly:
foo = {}
for rec in data:
    foo.setdefault(rec['ip'], {})[rec['domain']] = rec
(The `setdefault` matters: assigning `foo[rec['ip']] = {}` on every record would
wipe out the domains already collected for that IP.)
**Note**, if you are not too orthodox about using plain `dict` and are open to
`defaultdict`, autovivification can work wonders here
>>> from collections import defaultdict
>>> def tree(): return defaultdict(tree)
>>> foo = tree()
>>> for rec in data:
foo[rec['ip']][rec['domain']] = rec
**Example**
>>> import pprint
>>> pprint.pprint(foo)
{u'172.27.29.77': {u'qwe.domainsworkingsdromms.com': {u'domain': u'qwe.domainsworkingsdromms.com',
u'flag_char_code': u'DR',
u'flag_hex': u'8381',
u'identifier': u'030E8D88',
u'indicator': u'Snd',
u'ip': u'172.27.29.77',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/14/2014 10:37:13 AM 17E0 PACKET 030E8D88 UDP Snd 172.27.29.77 80eb R Q [8381 DR NXDOMAIN] A .qwe.domainsworkingsdromms.com.',
u'record': u'A',
u'status': u'NXDOMAIN',
u'thread_id': u'17E0',
u'timestamp': u'2014-10-14T10:37:13',
u'xid': u'80eb'}},
u'172.30.133.105': {u'google.com': {u'domain': u'google.com',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'0000000002264A00',
u'indicator': u'Snd',
u'ip': u'172.30.133.105',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/12/2014 11:20:27 AM 1114 PACKET 0000000002264A00 UDP Snd 172.30.133.105 aba2 R Q [8081 DR NOERROR] A .google.com.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'1114',
u'timestamp': u'2014-10-12T11:20:27',
u'xid': u'aba2'}},
u'172.30.138.116': {u'mgames.cf': {u'domain': u'mgames.cf',
u'flag_char_code': u'DR',
u'flag_hex': u'8081',
u'identifier': u'000000000220ED40',
u'indicator': u'Snd',
u'ip': u'172.30.138.116',
u'proto': u'UDP',
u'r_q': u'R Q',
u'raw': u'10/13/2014 2:31:46 PM 110C PACKET 000000000220ED40 UDP Snd 172.30.138.116 f957 R Q [8081 DR NOERROR] A .mgames.cf.',
u'record': u'A',
u'status': u'NOERROR',
u'thread_id': u'110C',
u'timestamp': u'2014-10-13T14:31:46',
u'xid': u'f957'}}}
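Both snippets above keep a single record per (ip, domain) pair; if you want each domain to map to a *list* of records, as in the target structure in the question, a `defaultdict(list)` sketch:

```python
from collections import defaultdict

def group_records(data):
    """Group records as {ip: {domain: [record, ...]}}."""
    grouped = defaultdict(lambda: defaultdict(list))
    for rec in data:
        grouped[rec['ip']][rec['domain']].append(rec)
    return grouped

# Trimmed-down records for illustration.
data = [
    {'ip': '172.30.133.105', 'domain': '1d663096.bestapp243.biz', 'xid': 'aba2'},
    {'ip': '172.30.133.105', 'domain': '1d663096.bestapp243.biz', 'xid': 'aba2'},
    {'ip': '172.30.133.105', 'domain': 'google.com', 'xid': 'aba2'},
    {'ip': '172.30.138.116', 'domain': 'mgames.cf', 'xid': 'f957'},
]
grouped = group_records(data)
```

This is a single O(n) pass over the input, so there is no need for separate lists or a second iteration.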
|
Dump all network requests and responses in Python
Question: How can I use python to dump all of the network requests and responses? What
I'm looking to do would compare to the following (this example is in nodejs
<https://github.com/ariya/phantomjs/blob/master/examples/netlog.js>)
I have been trying a tonne of different tools, including the following:
Example:
import requests
import logging
logging.basicConfig(level=logging.DEBUG)
r = requests.get('http://www.google.com')
Example:
import urllib2
request = urllib2.Request('http://jigsaw.w3.org/HTTP/300/302.html')
response = urllib2.urlopen(request)
print "Response code was: %d" % response.getcode()
Example:
import urllib2
passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
authhandler = urllib2.HTTPBasicAuthHandler(passman)
handler=urllib2.HTTPHandler(debuglevel=1)
opener = urllib2.build_opener(handler)
opener=urllib2.build_opener(authhandler, urllib2.HTTPHandler(debuglevel=1))
urllib2.install_opener(opener)
response = urllib2.urlopen('http://groupon.com')
print response
...there are more.
An example of the type of information that I would like to capture is the
following (I used fiddler2 to get this information. All of this and more came
from visiting groupon.com):
# Result Protocol Host URL Body Caching Content-Type Process Comments Custom
6 200 HTTP www.groupon.com / 23,236 private, max-age=0, no-cache, no-store, must-revalidate text/html; charset=utf-8 chrome:6080
7 200 HTTP www.groupon.com /homepage-assets/styles-6fca4e9f48.css 6,766 public, max-age=31369910 text/css; charset=UTF-8 chrome:6080
8 200 HTTP Tunnel to img.grouponcdn.com:443 0 chrome:6080
9 200 HTTP img.grouponcdn.com /deal/gsPCLbbqioFVfvjT3qbBZo/The-Omni-Mount-Washington-Resort_01-960x582/v1/c550x332.jpg 94,555 public, max-age=315279127; Expires: Fri, 18 Oct 2024 22:20:20 GMT image/jpeg chrome:6080
10 200 HTTP img.grouponcdn.com /deal/d5YmjhxUBi2mgfCMoriV/pE-700x420/v1/c220x134.jpg 17,832 public, max-age=298601213; Expires: Mon, 08 Apr 2024 21:35:06 GMT image/jpeg chrome:6080
11 200 HTTP www.groupon.com /homepage-assets/main-fcfaf867e3.js 9,604 public, max-age=31369913 application/javascript chrome:6080
12 200 HTTP www.groupon.com /homepage-assets/locale.js?locale=en_US&country=US 1,507 public, max-age=994 application/javascript chrome:6080
13 200 HTTP www.groupon.com /tracky 3 application/octet-stream chrome:6080
14 200 HTTP www.groupon.com /cart/widget?consumerId=b577c9c2-4f07-11e4-8305-0025906127fe 17 private, max-age=0, no-cache, no-store, must-revalidate application/json; charset=utf-8 chrome:6080
15 200 HTTP www.googletagmanager.com /gtm.js?id=GTM-B76Z 39,061 private, max-age=911; Expires: Wed, 22 Oct 2014 20:48:14 GMT text/javascript; charset=UTF-8 chrome:6080
Answer: This isn't exactly it, but it's close enough, and yes, it was urllib2:
from bs4 import BeautifulSoup
import urllib2

data = urllib2.urlopen("http://stackoverflow.com").read()
soup = BeautifulSoup(data)
The `.read()` call returns the full response body, which can then be scraped
for all of the URLs it references.
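Once you have the page body, extracting the URLs it references doesn't strictly require BeautifulSoup -- a Python 3 sketch with the standard-library parser:

```python
from html.parser import HTMLParser

class URLCollector(HTMLParser):
    """Record every href/src attribute value seen while parsing."""
    def __init__(self):
        super(URLCollector, self).__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ('href', 'src'):
                self.urls.append(value)

collector = URLCollector()
collector.feed('<a href="http://example.com/a">a</a>'
               '<img src="http://example.com/img.png"/>')
```

Note this only covers URLs present in the HTML; requests made by JavaScript at runtime (most of the Fiddler trace above) need a real browser engine such as PhantomJS or Selenium to capture.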
|
np.mean 'str' object has no attribute 'mean' error
Question: I have Python code that worked up until yesterday and now has some floating
problem that I've been unsuccessful at tracking down. I'm trying to calculate
the mean of a list with np.mean but I get an error stating: `AttributeError:
'str' object has no attribute 'mean'`
However if I print out the offending mean calculation:
volumetric_power = [x/y for x,y in zip(mcnp_calculated_power,mcnp_vol)]
print volumetric_power
normalized_power = [x/np.mean(volumetric_power) for x in volumetric_power]
The output I get is an expected list of floats (which I calculated as such):
[3.3999658877707657, 3.4055375518790156, 3.3929180596302109, 3.4015594541910334, 3.415077891066217, ... ]
Now to try to bypass this I eventually just calculated the average manually
`sum(volumetric_power)/len(volumetric_power)` and I was able to get past the
error. However later in the file I encountered an error with np.loadtxt (the
relevant error here):
get_radial_data(filename,zirc_mesh)
File "./Bicnp.py", line 313, in get_radial_data
data = np.loadtxt(cwd+'/grope.o', usecols=(1,2,3), skiprows=12)
AttributeError: 'str' object has no attribute 'loadtxt'
After all this I think the error is some Python library type error (using
ignorant lingo) where I'm not sure where to look. All of this worked fine
yesterday, where the one thing I did today was add python27 and python33 from
centos-release-scl per
<http://wiki.centos.org/AdditionalResources/Repositories/SCL>.
Thanks for the help.
Update: I forgot to mention this is running Python 2.6.6
Answer: Somewhere in your code, you are assigning a string object to the `np`
variable. That replaces the numpy module you'd previously imported under that
name, and explains the strange error messages you are seeing (possibly in very
different parts of the code).
If you have a consistent place where you're getting the error, try printing
`np` near there to see what value it has, and then tracking down where you're
making the bad assignment. Just pick a different variable name for the string
and you should be all set.
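A minimal reproduction of the failure mode, assuming NumPy is available: once `np` is rebound to a string, every later `np.<anything>` raises the same kind of AttributeError the poster saw:

```python
import numpy as np

values = [1.0, 2.0, 3.0]
assert np.mean(values) == 2.0     # fine while np is still the module

np = "oops, a string"             # the accidental rebinding
try:
    np.mean(values)
except AttributeError as exc:
    error = str(exc)
```

The error only appears at the *next* use of `np`, which can be far from the bad assignment -- hence the seemingly random locations of the failures.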
|
how do I diagnose a vanishing port listener?
Question: I'm pulling data off a port using a python process, launched as an upstart job
on an Ubuntu server. The data is sent using TCP with each client sending a
single relatively small string of information:
The upstart config:
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 3 5
setuid takeaim
setgid takeaim
exec /home/takeaim/production/deploy/production/update_service_demon.sh
The update_service_demon.sh script (I found it easier to debug separating this
out of upstart):
#!/bin/bash
# Make sure we're in the right virtual env and location
source /home/takeaim/.virtualenvs/production/bin/activate
source /home/takeaim/.virtualenvs/production/bin/postactivate
cd /home/takeaim/production
exec python drupdate/dr_update_service.py
The python script (it dispatches the real work to a celery worker):
from collections import defaultdict
import select
import socket
from django.conf import settings
from drupdate.tasks import do_dr_update
def create_server_socket():
"""Set up the and return server socket"""
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.setblocking(0)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind(('0.0.0.0', settings.DRUPDATE['PORT']))
server_socket.listen(settings.DRUPDATE['MAX_CONNECT_REQUESTS'])
return server_socket
def serve(echo_only=False):
message_length = settings.DRUPDATE['MSG_LENGTH']
message_chunks = defaultdict(list)
server_socket = create_server_socket()
inputs = [server_socket]
while inputs:
readable, writable, exceptional = select.select(inputs, [], inputs)
for sock in readable:
if sock is server_socket:
client_socket, address = server_socket.accept()
client_socket.setblocking(0)
inputs.append(client_socket)
else:
chunk = sock.recv(message_length)
if chunk:
message_chunks[sock].append(chunk)
else:
# This client_socket is finished, hand off message for processing
message = ''.join(message_chunks[sock])
if echo_only:
print(message)
else:
do_dr_update.delay(message)
inputs.remove(sock)
sock.close()
for sock in exceptional:
inputs.remove(sock)
sock.close()
if sock is server_socket:
# replace bad server socket
server_socket = create_server_socket()
inputs.append(server_socket)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description="Process incoming DR messages")
parser.add_argument('--echo', help='Just echo incoming messages to the console - no updates will take place',
dest='echo_only', action='store_true', default=False)
args = parser.parse_args()
serve(echo_only=args.echo_only)
The process disappears every now and then despite the restart. I'm reluctant
to make the restarts unlimited unless I can understand why the process
disappears. A manual restart works fine... until it disappears again. It can
be up for days and then just vanishes.
What is the best way to find out what is going on?
Answer: Add enough logging to the system to enable traces to be analysed after a
failure.
Here are some suggestions for logging in order of verbosity:
* Replace the `exec python drupdate/dr_update_service.py` call with the following snippet, which will log the exit code of your python process to syslog on exit. The exit code may give some clues as to how the process terminated, e.g. if the process was terminated by a signal the exit code will be >= 128.
`python drupdate/dr_update_service.py || logger "He's dead Jim, exit code $?"`
* Add a `try/except` block around your `serve()` call in `__main__`. In the exception handler, print the traceback to a file or a logging subsystem.
* If the above methods fail to provide clues, wrap your entire script with a call to `strace -f -tt` and divert the output to a log file. This will trace the entire set of system calls made by your program, their arguments and return codes. This will help debug issues which may be related to system calls which return errors. Applying this method will slow down your process and generate a huge amount of output which may in turn change the behaviour of your program and mask the underlying issue.
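The try/except suggestion can be sketched as follows; the `run_logged` helper and the stand-in `serve` are made up for this example, and in the real script the traceback would go to a log file rather than a list:

```python
import traceback

def run_logged(fn, log):
    """Run fn(); on any exception append the full traceback to log, then re-raise."""
    try:
        return fn()
    except Exception:
        log.append(traceback.format_exc())
        raise

def serve():
    # stand-in for the real serve() loop; raises to simulate the mystery crash
    raise RuntimeError("simulated failure")

records = []
try:
    run_logged(serve, records)
except RuntimeError:
    pass

print(records[0].splitlines()[-1])  # RuntimeError: simulated failure
```

With this in place, the next time the process vanishes the log will show exactly which exception killed it.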
|
Python 3.4 Compiler?
Question: Ever since I started learning Python, I have wanted to distribute some small
programs I have made to my friends. Without handing out my source code. My
question is, what compilers are there for Python 3.4? I have heard of
cx_freeze and tried it, but it doesn't work for me. I am on Windows and it
compiles my code for Mac. Any compiler suggestions or how to compile?
PLEASE NOTE: I am a beginner to Python. I barely know much of the language or
how any of it works.
Answer: Have a look at this page. It should answer your question.
<https://wiki.python.org/moin/DistributionUtilities>
INNO would be a brute force way to ensure that all of the required python
files are installed. INNO looks like the traditional install shield installers
that Windows users are used to.
Here is the link to INNO: <http://www.jrsoftware.org/isinfo.php>
Here is an INNO script that will install Python as a prerequisite and then
install your Python scripts. Obviously you will have to adjust the paths to
reflect your system and you will have to download the python-x.y.z.msi file.
; Script generated by the Inno Setup Script Wizard.
; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!
#define MyAppName "PythonTest"
#define MyAppVersion "1.5"
#define MyAppPublisher "My Company, Inc."
#define MyAppURL "http://www.example.com/"
#define MyAppExeName "PythonTest.py"
[Setup]
; NOTE: The value of AppId uniquely identifies this application.
; Do not use the same AppId value in installers for other applications.
; (To generate a new GUID, click Tools | Generate GUID inside the IDE.)
AppId={{72A57135-5C83-4A70-83F6-965EBE7DA65C}
AppName={#MyAppName}
AppVersion={#MyAppVersion}
;AppVerName={#MyAppName} {#MyAppVersion}
AppPublisher={#MyAppPublisher}
AppPublisherURL={#MyAppURL}
AppSupportURL={#MyAppURL}
AppUpdatesURL={#MyAppURL}
DefaultDirName={pf}\{#MyAppName}
DefaultGroupName={#MyAppName}
OutputBaseFilename=setup
Compression=lzma
SolidCompression=yes
[Languages]
Name: "english"; MessagesFile: "compiler:Default.isl"
[Tasks]
Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
[Files]
Source: "C:\Users\engineering12\Desktop\PythonTest.py"; DestDir: "{app}"; Flags: ignoreversion
Source: "C:\Users\engineering12\Downloads\python-2.7.8.msi"; DestDir: "{app}"; Flags: nocompression
; NOTE: Don't use "Flags: ignoreversion" on any shared system files
[Icons]
Name: "{group}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"
Name: "{commondesktop}\{#MyAppName}"; Filename: "{app}\{#MyAppExeName}"; Tasks: desktopicon
[Run]
Filename: "msiexec.exe"; Parameters: "/i ""{app}\python-2.7.8.msi"""
Here is the script PythonTest.py:
import time
for i in range(100):
print i
time.sleep(2)
INNO will generate a file called setup.exe that you can distribute to your
friends. Your python scripts will be available via the start menu.
|
How to Use VirtualEnv libraries along with system wide libraries?
Question: It might sound dumb, but I am having a hard time understanding how to use
VirtualEnv. My use case is as follows:
1. My EC2 is python 2.6.9 and I need to use graphlab create which uses > 2.7
2. I installed a virtualenv and installed graphlab in it with python 2.7.5
3. Now I want to use graphlab create with my other files which are not on virtualenv
Is it possible to do that? if yes how. More specifically, I want to be able to
use
import graphlab
in my non virtualenv python files !!
Answer: A virtualenv, as far as I know, is an environment bound to a specific
version of Python, into which you can install as many libraries
as you want, with the guarantee that none of these libraries will be available
outside of its virtualenv (system wide or in other envs).
If you create a virtualenv based on Python 2.6 (let's call it ENV_A), and
another one based on Python 2.7 (ENV_B), there is no way to use in ENV_B a
library from ENV_A. The only way I can think of to make these two environments
interoperate is to create a python script in ENV_A, then call it with
"ENV_A\python.exe ENV_A\script.py [arguments]" using a system call from ENV_B
and parse the output: not sure it's an option for you.
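The cross-environment call described above can be sketched with `subprocess`; here `sys.executable` stands in for the other environment's interpreter path (e.g. `ENV_A\python.exe`), and the inline script is a hypothetical stand-in for `ENV_A\script.py`:

```python
import json
import subprocess
import sys

# The other environment's interpreter; sys.executable is used here only so
# the sketch is runnable -- substitute e.g. r"ENV_A\python.exe".
other_python = sys.executable

# Inline stand-in for ENV_A\script.py; it prints JSON to stdout so the
# caller can parse the result reliably instead of scraping free-form text.
script = "import json, sys; print(json.dumps({'arg': sys.argv[1]}))"

out = subprocess.check_output([other_python, "-c", script, "hello"])
result = json.loads(out.decode())
print(result["arg"])  # hello
```

Emitting JSON from the child process makes the "parse the output" step robust regardless of which Python version each environment runs.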
|
How to insert object to mysql using python?
Question: I write code to read RFID tag using python, check the existence of the ID in
the database. If successful, right ID on database led is ON, but command
insert to database not work. Python gives no error message, and I do not
understand what's wrong with it.
How can I fix it? Please help me.
Thank you very much.
Here is my code:
import serial
import time
import RPi.GPIO as GPIO
import MySQLdb
from datetime import datetime
from binascii import hexlify
serial=serial.Serial("/dev/ttyAMA0",
baudrate=9600,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
bytesize=serial.EIGHTBITS,
timeout=0.1)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(12, GPIO.OUT)
chuoi= "\xAA\x00\x03\x25\x26\x00\x00\xBB"
db = MySQLdb.connect(host='localhost', user='root', passwd='root', db='ex_1')
cur = db.cursor(MySQLdb.cursors.DictCursor)
while 1:
serial.write(chuoi)
data =serial.readline()
tach = data[5:8]
hex = hexlify(tach)
num = int(hex,16)
print(num)
cur.execute("SELECT * FROM thong_tin WHERE Card_ID = '%d'" %num)
rows = cur.fetchall()
for row in rows:
ID = row['Card_ID']
if( num == ID):
GPIO.output(12, True)
cur.execute("INSERT INTO data (Card_ID,Time_out,Time_in) VALUES ('%d',NOW(),NOW())" %num) #-> it does not add to database
time.sleep(3)
GPIO.output(12, False);
else:
GPIO.output(12, False);
Answer: First, you have an indentation error in your code:
if( num == ID):
GPIO.output(12, True)
cur.execute("INSERT INTO data (Card_ID,Time_out,Time_in) VALUES ('%d',NOW(),NOW())" %num) #-> it does not add to database
time.sleep(3)
GPIO.output(12, False);
One line is indented more than the others.
Also, you do not have to use ; at the end of a line in Python. And you should
not use normal string formatting with DB queries, for security reasons (SQL
injection). From the MySQLdb User's Guide:
c.execute("""SELECT spam, eggs, sausage FROM breakfast
WHERE price < %s""", (max_price,))
This form will automatically quote the values in a safe way.
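The same parameter-substitution pattern can be demonstrated end to end with the stdlib `sqlite3` module (sqlite3 uses `?` placeholders where MySQLdb uses `%s`; the table layout mirrors the question's). Note also that MySQLdb does not autocommit by default, so an INSERT needs a `db.commit()` to become visible to other connections:

```python
import sqlite3

db = sqlite3.connect(":memory:")
cur = db.cursor()
cur.execute("CREATE TABLE data (Card_ID INTEGER, Time_out TEXT, Time_in TEXT)")

num = 12345
# Parameters are passed as a tuple, never interpolated into the SQL string.
# sqlite3 uses ? placeholders; with MySQLdb the same call would use %s.
cur.execute(
    "INSERT INTO data (Card_ID, Time_out, Time_in) "
    "VALUES (?, datetime('now'), datetime('now'))",
    (num,))
db.commit()  # without commit() the INSERT may never become permanent

cur.execute("SELECT Card_ID FROM data WHERE Card_ID = ?", (num,))
print(cur.fetchone()[0])  # 12345
```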
|
Plotting datetimeindex on x-axis with matplotlib creates wrong ticks in pandas 0.15 in contrast to 0.14
Question: I create a simple pandas dataframe with some random values and a DatetimeIndex
like so:
import pandas as pd
from numpy.random import randint
import datetime as dt
import matplotlib.pyplot as plt
# create a random dataframe with datetimeindex
dateRange = pd.date_range('1/1/2011', '3/30/2011', freq='D')
randomInts = randint(1, 50, len(dateRange))
df = pd.DataFrame({'RandomValues' : randomInts}, index=dateRange)
Then I plot it in two different ways:
# plot with pandas own matplotlib wrapper
df.plot()
# plot directly with matplotlib pyplot
plt.plot(df.index, df.RandomValues)
plt.show()
(Do not use both statements at the same time as they plot on the same figure.)
I use **Python 3.4 64bit** and **matplotlib 1.4**. With **pandas 0.14** , both
statements give me the expected plot (they use slightly different formatting
of the x-axis which is okay; note that data is random so the plots do not look
the same): 

However, when using **pandas 0.15** , the pandas plot looks alright but the
matplotlib plot has some strange tick format on the x-axis:


Is there any good reason for this behaviour and why it has changed from pandas
0.14 to 0.15?
Answer: The reason for this change in behaviour is that starting from 0.15, the pandas
`Index` object is no longer a numpy ndarray subclass. But the _real reason_ is
that matplotlib does not support the `datetime64` dtype.
As a workaround, in the case you want to use the matplotlib `plot` function,
you can convert the index to python datetime's using `to_pydatetime`:
plt.plot(df.index.to_pydatetime(), df.RandomValues)
* * *
More in detail explanation:
Because `Index` is no longer a ndarray subclass, matplotlib will convert the
index to a numpy array with `datetime64` dtype (while before, it retained the
`Index` object, of which scalars are returned as `Timestamp` values, a
subclass of `datetime.datetime`, which matplotlib can handle). In the `plot`
function, it calls `np.atleast_1d()` on the input which now returns a
datetime64 array, which matplotlib handles as integers.
I opened an issue about this (as this gets possibly a lot of use):
<https://github.com/pydata/pandas/issues/8614>
|
Python break deep recursion - 2.7.8
Question: So I have a huge recursive function. At some point, it finds a result, and I
need it to stop all functions. Suggestions?
Answer: Assuming you cannot refactor your code, you could use generators to achieve
this. Let's say we have the following code:
from __future__ import print_function
def g(n):
print("g before", n)
if n < 3:
for i in g(n+1):
yield i
else:
yield n
print("g after", n)
The normal execution would be:
>>> for i in g(0):
... print(i)
...
g before 0
g before 1
g before 2
g before 3
3
g after 3
g after 2
g after 1
g after 0
Now what you want is to yield 3 and stop right there, so we just fetch the
first value:
>>> print(next(iter(g(0))))
g before 0
g before 1
g before 2
g before 3
3
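Applied to the original problem, the same pattern lets a deeply recursive search yield its result and stop immediately; the nested-list tree and the `find` function here are hypothetical stand-ins for the real recursive function:

```python
def find(node, target):
    """Recursively search a nested list; yield the path to the first match."""
    if node == target:
        yield [node]
        return
    if isinstance(node, list):
        for child in node:
            for path in find(child, target):
                yield [node] + path

tree = [1, [2, [3, 4]], 5]
# next() pulls only the first result; every level of the recursion is
# suspended after that yield, so the rest of the tree is never visited.
path = next(find(tree, 4))
print(path[-1])  # 4
```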
|
What is the most pythonic way to use len on a scalar?
Question: I read this question
[python: how to identify if a variable is an array or a
scalar](http://stackoverflow.com/questions/16807011/python-how-to-identify-if-
a-variable-is-an-array-or-a-scalar)
but when using the following code I get a false on an `np.array` as can be
demonstrated below.
import collections
isinstance(np.arange(10), collections.Sequence)
# returns false
I find it a bit annoying that I can't do `len(1)` and simply get `1`.
The only work around I can think of is a `try except` statement such as the
following:
a = 1
try:
print len(a)
except TypeError:
print 1
Is there a more Pythonic way to do this?
Answer: `collections.Sequence` only applies to sequence objects, which are a very
specific type of iterable object. Incidentally, a `numpy.ndarray` (which is
returned by `numpy.arange`) is not a sequence.
You need to test for either
[`collections.Iterable`](https://docs.python.org/2/library/collections.html#collections.Iterable),
which represents any iterable object:
>>> isinstance([1, 2, 3], collections.Iterable)
True
>>> isinstance(np.arange(10), collections.Iterable)
True
>>> isinstance(1, collections.Iterable)
False
>>>
or
[`collections.Sized`](https://docs.python.org/2/library/collections.html#collections.Sized),
which represents any object that works with `len`:
>>> isinstance([1, 2, 3], collections.Sized)
True
>>> isinstance(np.arange(10), collections.Sized)
True
>>> isinstance(1, collections.Sized)
False
>>>
You can then use a conditional expression or similar to do what you want:
print len(a) if isinstance(a, collections.Iterable) else 1
print len(a) if isinstance(a, collections.Sized) else 1
For a complete list of the available abstract base classes in the
`collections` module, see [Collections Abstract Base
Classes](https://docs.python.org/2/library/collections.html#collections-
abstract-base-classes) in the Python docs.
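Both checks can be folded into a small helper (the name `safe_len` is made up for this sketch; the import fallback keeps it working on modern Pythons, where the ABCs live in `collections.abc`):

```python
try:
    from collections.abc import Sized  # Python 3.3+
except ImportError:
    from collections import Sized      # Python 2

def safe_len(obj):
    """Return len(obj) for sized objects, 1 for scalars."""
    return len(obj) if isinstance(obj, Sized) else 1

print(safe_len([1, 2, 3]))  # 3
print(safe_len(1))          # 1
print(safe_len("abc"))      # 3
```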
|
How to implement custom control over python multiprocessing.Pool?
Question: Usually i use following code, and it works fine when you do not matter in
which order function `process_func` will handle some parameter:
params = [1,2,3,4,5 ... ]
def process_func():
...
pool = new Pool(40)
pool.map(process_func, params)
pool.close()
pool.join()
In the example above we have processes of one type, with a maximum of 40 running
simultaneously. But imagine we have processes (parameters) of different types,
which should be executed simultaneously. For example, in my selenium grid I
have 40 firefox and 40 chrome instances. And I have 5000 test cases; some of them prefer
chrome, some of them firefox, and some of them don't care.
For example, let's say we have the following types:
* type firefox: maximum simultaneous number: 40
* type chrome: maximum simultaneous number: 40
In this case our pool will have a maximum of 80 simultaneous processes, but
there is a strict rule: 40 of them must be firefox, 40 of them must be chrome.
It means that params won't be taken one after another. The pool must select
values from the params list in a way that keeps the maximum of each process type busy.
How is it possible to achieve that?
Answer: I would modify your `process_func()` to take one more parameter that tells it
which "type" to be and use two separate pools. Adding
[functools.partial](https://docs.python.org/2/library/functools.html#functools.partial)
will allow us to still use `pool.map()`:
from functools import partial
from multiprocessing import Pool
params = [1,2,3,4,5 ... ]
def process_func(type, param):
if type == 'Firefox':
# do Firefox stuff
else:
# do Chrome stuff
chrome_pool = Pool(40)
fox_pool = Pool(40)
chrome_function = partial(process_func, 'Chrome')
fox_function = partial(process_func, 'Firefox')
chrome_pool.map(chrome_function, params)
fox_pool.map(fox_function, params)
chrome_pool.close()
fox_pool.close()
chrome_pool.join()
fox_pool.join()
The `functools.partial()` function allows us to bind an argument to a specific
value, by returning a new function object that will always supply that
argument. This approach allows you to limit each "type" (for lack of a better
term) to 40 worker processes.
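The `partial` binding itself can be checked without any pool; the `process_func` body here is a trivial stand-in for the real worker:

```python
from functools import partial

def process_func(browser, param):
    # trivial stand-in for the real browser-specific work
    return "{0} handled {1}".format(browser, param)

chrome_function = partial(process_func, "Chrome")
fox_function = partial(process_func, "Firefox")

# Each partial behaves like a one-argument callable, which is exactly the
# shape pool.map() expects.
print(chrome_function(1))  # Chrome handled 1
print(fox_function(2))     # Firefox handled 2
```

One caveat: `pool.map()` blocks until its whole batch finishes, so as written the Chrome batch completes before the Firefox one starts; to keep both pools busy at the same time, launch each batch with `map_async()` and then close and join both pools.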
|
Python/xpath get instances of text in arbitrary element
Question: Given the following:
<table>
<tr>
<td>
<div>Text 1</div>
</td>
<td>
Text 2
</td>
<td>
<div>
<a href="#">Text 3</a>
</div>
</td>
</tr>
<tr>
...
</tr>
</table>
Given the above table, how would I extract all the text? Note that the number
of nested elements is arbitrary so I can't just look for the first sibling,
zero-th sibling, and second sibling.
I'm looking for a general way to extract the text.
In [1]: d="""<table>
...: <tr>
...: <td>
...: <div>Text 1</div>
...: </td>
...: <td>
...: Text 2
...: </td>
...: <td>
...: <div>
...: <a href="#">Text 3</a>
...: </div>
...: </td>
...: </tr>
...: <tr>
...: ...
...: </tr>
...: </table>"""
In [3]: from lxml import etree
In [4]: f = etree.HTML(d)
In [5]: f.xpath('normalize-space(string(/table))')
Out[5]: ''
In [6]: f.xpath('normalize-space(string(//table))')
Out[6]: 'Text 1 Text 2 Text 3 ...'
Answer: I would use:
normalize-space(string(/table))
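If you'd rather stay in the stdlib instead of lxml, the same result can be obtained with `ElementTree` by joining `itertext()` and collapsing whitespace, which mirrors what `normalize-space(string(...))` does; the markup below reuses the question's table:

```python
import xml.etree.ElementTree as ET

table = """<table>
  <tr>
    <td><div>Text 1</div></td>
    <td>Text 2</td>
    <td><div><a href="#">Text 3</a></div></td>
  </tr>
  <tr>...</tr>
</table>"""

root = ET.fromstring(table)
# itertext() walks every text node at any nesting depth; split()/join()
# collapses runs of whitespace the same way normalize-space() does.
text = " ".join("".join(root.itertext()).split())
print(text)  # Text 1 Text 2 Text 3 ...
```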
|
Python & JSON: ValueError: Unterminated string starting at:
Question: I have read multiple StackOverflow articles on this and most of the top 10
Google results. Where my issue deviates is that I am using one script in
python to create my JSON files. And the next script, run not 10 minutes later,
can't read that very file.
Short version, I generate leads for my online business. I am attempting to
learn python in order to have better analytics on these leads. I am scouring 2
years worth of leads with the intent being to retain the useful data and drop
anything personal - email addresses, names, etc. - while also saving 30,000+
leads into a few dozen files for easy access.
So my first script opens every single individual lead file - 30,000+ - and
determines the date it was captured based on a timestamp in the file. Then it
saves that lead to the appropriate key in a dict. When all the data has been
aggregated into this dict, text files are written using json.dumps.
The dict's structure is:
addData['lead']['July_2013'] = { ... }
where the 'lead' key can be lead, partial, and a few others and the
'July_2013' key is obviously a date based key that can be any combination of
the full month and 2013 or 2014 going back to 'February_2013'.
The full error is this:
ValueError: Unterminated string starting at: line 1 column 9997847 (char 9997846)
But I've manually looked at the file and my IDE says there are only 76,655
chars in the file. So how did it get to 9997846?
The file that fails is the 8th to be read; the other 7 and all other files
that come after it read in via json.loads just fine.
Python says there is an unterminated string, so I looked at the end of the
JSON in the file that fails and it appears to be fine. I've seen some mention
about newlines being \n in JSON but this string is all one line. I've seen
mention of \\ vs \ but in a quick look over the whole file I didn't see any.
Other files do have \ and they read in fine. And these files were all created
by json.dumps.
I can't post the file because it still has personal info in it. Manually
attempting to validate the JSON of a 76,000 char file isn't really viable.
Thoughts on how to debug this would be appreciated. In the mean time I am
going to try to rebuild the files and see if this wasn't just a one off bug
but that takes a while.
* Python 2.7 via Spyder & Anaconda
* Windows 7 Pro
--- Edit --- Per request I am posting the Write Code here:
from p2p.basic import files as f
from p2p.adv import strTools as st
from p2p.basic import strTools as s
import os
import json
import copy
from datetime import datetime
import time
global leadDir
global archiveDir
global aggLeads
def aggregate_individual_lead_files():
"""
"""
# Get the aggLead global and
global aggLeads
# Get all the Files with a 'lead' extension & aggregate them
exts = [
'lead',
'partial',
'inp',
'err',
'nobuyer',
'prospect',
'sent'
]
for srchExt in exts:
agg = {}
leads = f.recursiveGlob(leadDir, '*.cd.' + srchExt)
print "There are {} {} files to process".format(len(leads), srchExt)
for lead in leads:
# Get the Base Filename
fname = f.basename(lead)
#uniqID = st.fetchBefore('.', fname)
#print "File: ", lead
# Get Lead Data
leadData = json.loads(f.file_get_contents(lead))
agg = agg_data(leadData, agg, fname)
aggLeads[srchExt] = copy.deepcopy(agg)
print "Aggregate Top Lvl Keys: ", aggLeads.keys()
print "Aggregate Next Lvl Keys: "
for key in aggLeads:
print "{}: ".format(key)
for arcDate in aggLeads[key].keys():
print "{}: {}".format(arcDate, len(aggLeads[key][arcDate]))
# raw_input("Press Enter to continue...")
def agg_data(leadData, agg, fname=None):
"""
"""
#print "Lead: ", leadData
# Get the timestamp of the lead
try:
ts = leadData['timeStamp']
leadData.pop('timeStamp')
except KeyError:
return agg
leadDate = datetime.fromtimestamp(ts)
arcDate = leadDate.strftime("%B_%Y")
#print "Archive Date: ", arcDate
try:
agg[arcDate][ts] = leadData
except KeyError:
agg[arcDate] = {}
agg[arcDate][ts] = leadData
except TypeError:
print "Timestamp: ", ts
print "Lead: ", leadData
print "Archive Date: ", arcDate
return agg
"""
if fname is not None:
archive_lead(fname, arcDate)
"""
#print "File: {} added to {}".format(fname, arcDate)
return agg
def archive_lead(fname, arcDate):
# Archive Path
newArcPath = archiveDir + arcDate + '//'
if not os.path.exists(newArcPath):
os.makedirs(newArcPath)
# Move the file to the archive
os.rename(leadDir + fname, newArcPath + fname)
def reformat_old_agg_data():
"""
"""
# Get the aggLead global and
global aggLeads
aggComplete = {}
aggPartial = {}
oldAggFiles = f.recursiveGlob(leadDir, '*.cd.agg')
print "There are {} old aggregate files to process".format(len(oldAggFiles))
for agg in oldAggFiles:
tmp = json.loads(f.file_get_contents(agg))
for uniqId in tmp:
leadData = tmp[uniqId]
if leadData['isPartial'] == True:
aggPartial = agg_data(leadData, aggPartial)
else:
aggComplete = agg_data(leadData, aggComplete)
arcData = dict(aggLeads['lead'].items() + aggComplete.items())
aggLeads['lead'] = arcData
arcData = dict(aggLeads['partial'].items() + aggPartial.items())
aggLeads['partial'] = arcData
def output_agg_files():
for ext in aggLeads:
for arcDate in aggLeads[ext]:
arcFile = leadDir + arcDate + '.cd.' + ext + '.agg'
if f.file_exists(arcFile):
tmp = json.loads(f.file_get_contents(arcFile))
else:
tmp = {}
arcData = dict(tmp.items() + aggLeads[ext][arcDate].items())
f.file_put_contents(arcFile, json.dumps(arcData))
def main():
global leadDir
global archiveDir
global aggLeads
leadDir = 'D://Server Data//eagle805//emmetrics//forms//leads//'
archiveDir = leadDir + 'archive//'
aggLeads = {}
# Aggregate all the old individual file
aggregate_individual_lead_files()
# Reformat the old aggregate files
reformat_old_agg_data()
# Write it all out to an aggregate file
output_agg_files()
if __name__ == "__main__":
main()
Here is the read code:
from p2p.basic import files as f
from p2p.adv import strTools as st
from p2p.basic import strTools as s
import os
import json
import copy
from datetime import datetime
import time
global leadDir
global fields
global fieldTimes
global versions
def parse_agg_file(aggFile):
global leadDir
global fields
global fieldTimes
try:
tmp = json.loads(f.file_get_contents(aggFile))
except ValueError:
print "{} failed the JSON load".format(aggFile)
return False
print "Opening: ", aggFile
for ts in tmp:
try:
tmpTs = float(ts)
except:
print "Timestamp: ", ts
continue
leadData = tmp[ts]
for field in leadData:
if field not in fields:
fields[field] = []
fields[field].append(float(ts))
def determine_form_versions():
global fieldTimes
global versions
# Determine all the fields and their start and stop times
times = []
for field in fields:
minTs = min(fields[field])
fieldTimes[field] = [minTs, max(fields[field])]
times.append(minTs)
print 'Min ts: {}'.format(minTs)
times = set(sorted(times))
print "Times: ", times
print "Fields: ", fieldTimes
versions = {}
for ts in times:
d = datetime.fromtimestamp(ts)
ver = d.strftime("%d_%B_%Y")
print "Version: ", ver
versions[ver] = []
for field in fields:
if ts in fields[field]:
versions[ver].append(field)
def main():
global leadDir
global fields
global fieldTimes
leadDir = 'D://Server Data//eagle805//emmetrics//forms//leads//'
fields = {}
fieldTimes = {}
aggFiles = f.glob(leadDir + '*.lead.agg')
for aggFile in aggFiles:
parse_agg_file(aggFile)
determine_form_versions()
print "Versions: ", versions
if __name__ == "__main__":
main()
Answer: So I figured it out... I post this answer just in case someone else makes the
same error.
First, I found a workaround, but I wasn't sure why it worked. From my
original code, here is my file_get_contents function:
def file_get_contents(fname):
if s.stripos(fname, 'http://'):
import urllib2
return urllib2.urlopen(fname).read(maxUrlRead)
else:
return open(fname).read(maxFileRead)
I used it via:
tmp = json.loads(f.file_get_contents(aggFile))
This failed, over and over and over again. However, as I was attempting to get
Python to at least give me the JSON string to put through a JSON validator
(<http://jsonlint.com/>) I came across mention of json.load vs json.loads. So
I tried this instead:
a = open('D://Server Data//eagle805//emmetrics//forms//leads\July_2014.cd.lead.agg')
b = json.load(a)
While I haven't tested this output in my overall code this code chunk does in
fact read in the file, decode the JSON, and will even display the data without
crashing Spyder. The variable explorer in Spyder shows that b is a dict of
size 1465 and that is exactly how many records it should have. The portion of
the displayed text from the end of the dict all looks good. So overall I have
a reasonably high level confidence that the data was parsed correctly.
When I wrote the file_get_contents function I saw several recommendations that
I always provide a max number of bytes to read so as to prevent Python from
hanging on a bad return. The value of `maxFileRead` was `1E7`. When I manually
forced `maxFileRead` to be `1E9` everything worked fine. Turns out the file is
just under 1.2E7 bytes. So the resulting string from reading the file was not
the full string in the file and as a result was invalid JSON.
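The failure mode is easy to reproduce in isolation: parsing a JSON string that was cut short by a capped read raises exactly this kind of ValueError, while the complete string parses fine (the data and sizes below are made up for illustration):

```python
import json

data = {"records": [{"id": i, "note": "x" * 50} for i in range(100)]}
full = json.dumps(data)

# Simulate a read capped below the real file size.
truncated = full[:len(full) // 2]

try:
    json.loads(truncated)
except ValueError as exc:
    # e.g. "Unterminated string starting at ..." or "Expecting ..."
    print("truncated parse failed: %s" % exc)

# Parsing the complete string round-trips cleanly.
assert json.loads(full) == data
```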
Normally I would think this is a bug but clearly when opening and reading a
file you need to be able to read just a chunk at a time for memory management.
So I got bit by my own shortsightedness with regards to the `maxFileRead`
value. The error message was correct but sent me off on a wild goose chase.
Hopefully this saves someone else some time.
|
python3 manage.py migrate exceptions
Question: I am new to django 1.7 and python3. I am using OSX. As I was following the
django 1.7 documentation online,
I tried
python3 manage.py migrate
and it resulted
Operations to perform:
Apply all migrations: auth, contenttypes, sessions, admin
Running migrations:
No migrations to apply.
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/contrib/contenttypes/models.py", line 44, in get_for_model
ct = self._get_from_cache(opts)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/contrib/contenttypes/models.py", line 34, in _get_from_cache
return self.__class__._cache[self.db][key]
KeyError: 'default'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/mysql/base.py", line 128, in execute
return self.cursor.execute(query, args)
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/cursors.py", line 184, in execute
self.errorhandler(self, exc, value)
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/connections.py", line 37, in defaulterrorhandler
raise errorvalue
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/cursors.py", line 171, in execute
r = self._query(query)
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/cursors.py", line 330, in _query
rowcount = self._do_query(q)
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/cursors.py", line 294, in _do_query
db.query(q)
_mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%s AND `django_content_type`.`model` = %s) LIMIT 21' at line 1")
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/contrib/contenttypes/models.py", line 50, in get_for_model
defaults={'name': smart_text(opts.verbose_name_raw)},
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/models/manager.py", line 92, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/models/query.py", line 422, in get_or_create
return self.get(**lookup), False
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/models/query.py", line 351, in get
num = len(clone)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/models/query.py", line 122, in __len__
self._fetch_all()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/models/query.py", line 966, in _fetch_all
self._result_cache = list(self.iterator())
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/models/query.py", line 265, in iterator
for row in compiler.results_iter():
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 700, in results_iter
for rows in self.execute_sql(MULTI):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/models/sql/compiler.py", line 786, in execute_sql
cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/utils.py", line 81, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/utils/six.py", line 549, in reraise
raise value.with_traceback(tb)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute
return self.cursor.execute(sql, params)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/db/backends/mysql/base.py", line 128, in execute
return self.cursor.execute(query, args)
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/cursors.py", line 184, in execute
self.errorhandler(self, exc, value)
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/connections.py", line 37, in defaulterrorhandler
raise errorvalue
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/cursors.py", line 171, in execute
r = self._query(query)
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/cursors.py", line 330, in _query
rowcount = self._do_query(q)
File "/Users/NAME/Library/Python/3.4/lib/python/site-packages/MySQLdb/cursors.py", line 294, in _do_query
db.query(q)
django.db.utils.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '%s AND `django_content_type`.`model` = %s) LIMIT 21' at line 1")
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/core/management/commands/migrate.py", line 164, in handle
emit_post_migrate_signal(created_models, self.verbosity, self.interactive, connection.alias)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/core/management/sql.py", line 268, in emit_post_migrate_signal
using=db)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/dispatch/dispatcher.py", line 198, in send
response = receiver(signal=self, sender=sender, **named)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/contrib/auth/management/__init__.py", line 83, in create_permissions
ctype = ContentType.objects.db_manager(using).get_for_model(klass)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/contrib/contenttypes/models.py", line 58, in get_for_model
" is migrated before trying to migrate apps individually."
RuntimeError: Error creating new content types. Please make sure contenttypes is migrated before trying to migrate apps individually.
I don't know what this means, and I am sure it is not right. Was the migration
successful? Please help. Thanks.
Here is my `/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/django/contrib/contenttypes/models.py`:
from __future__ import unicode_literals

from django.apps import apps
from django.db import models
from django.db.utils import OperationalError, ProgrammingError
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_text, force_text
from django.utils.encoding import python_2_unicode_compatible


class ContentTypeManager(models.Manager):

    # Cache to avoid re-looking up ContentType objects all over the place.
    # This cache is shared by all the get_for_* methods.
    _cache = {}

    def get_by_natural_key(self, app_label, model):
        try:
            ct = self.__class__._cache[self.db][(app_label, model)]
        except KeyError:
            ct = self.get(app_label=app_label, model=model)
            self._add_to_cache(self.db, ct)
        return ct

    def _get_opts(self, model, for_concrete_model):
        if for_concrete_model:
            model = model._meta.concrete_model
        elif model._deferred:
            model = model._meta.proxy_for_model
        return model._meta

    def _get_from_cache(self, opts):
        key = (opts.app_label, opts.model_name)
        return self.__class__._cache[self.db][key]

    def get_for_model(self, model, for_concrete_model=True):
        """
        Returns the ContentType object for a given model, creating the
        ContentType if necessary. Lookups are cached so that subsequent lookups
        for the same model don't hit the database.
        """
        opts = self._get_opts(model, for_concrete_model)
        try:
            ct = self._get_from_cache(opts)
        except KeyError:
            try:
                ct, created = self.get_or_create(
                    app_label=opts.app_label,
                    model=opts.model_name,
                    defaults={'name': smart_text(opts.verbose_name_raw)},
                )
            except (OperationalError, ProgrammingError):
                # It's possible to migrate a single app before contenttypes,
                # as it's not a required initial dependency (it's contrib!)
                # Have a nice error for this.
                raise RuntimeError(
                    "Error creating new content types. Please make sure contenttypes" +
                    " is migrated before trying to migrate apps individually."
                )
            self._add_to_cache(self.db, ct)
        return ct

    def get_for_models(self, *models, **kwargs):
        """
        Given *models, returns a dictionary mapping {model: content_type}.
        """
        for_concrete_models = kwargs.pop('for_concrete_models', True)
        # Final results
        results = {}
        # models that aren't already in the cache
        needed_app_labels = set()
        needed_models = set()
        needed_opts = set()
        for model in models:
            opts = self._get_opts(model, for_concrete_models)
            try:
                ct = self._get_from_cache(opts)
            except KeyError:
                needed_app_labels.add(opts.app_label)
                needed_models.add(opts.model_name)
                needed_opts.add(opts)
            else:
                results[model] = ct
        if needed_opts:
            cts = self.filter(
                app_label__in=needed_app_labels,
                model__in=needed_models
            )
            for ct in cts:
                model = ct.model_class()
                if model._meta in needed_opts:
                    results[model] = ct
                    needed_opts.remove(model._meta)
                    self._add_to_cache(self.db, ct)
        for opts in needed_opts:
            # These weren't in the cache, or the DB, create them.
            ct = self.create(
                app_label=opts.app_label,
                model=opts.model_name,
                name=smart_text(opts.verbose_name_raw),
            )
            self._add_to_cache(self.db, ct)
            results[ct.model_class()] = ct
        return results

    def get_for_id(self, id):
        """
        Lookup a ContentType by ID. Uses the same shared cache as get_for_model
        (though ContentTypes are obviously not created on-the-fly by get_by_id).
        """
        try:
            ct = self.__class__._cache[self.db][id]
        except KeyError:
            # This could raise a DoesNotExist; that's correct behavior and will
            # make sure that only correct ctypes get stored in the cache dict.
            ct = self.get(pk=id)
            self._add_to_cache(self.db, ct)
        return ct

    def clear_cache(self):
        """
        Clear out the content-type cache. This needs to happen during database
        flushes to prevent caching of "stale" content type IDs (see
        django.contrib.contenttypes.management.update_contenttypes for where
        this gets called).
        """
        self.__class__._cache.clear()

    def _add_to_cache(self, using, ct):
        """Insert a ContentType into the cache."""
        # Note it's possible for ContentType objects to be stale; model_class() will return None.
        # Hence, there is no reliance on model._meta.app_label here, just using the model fields instead.
        key = (ct.app_label, ct.model)
        self.__class__._cache.setdefault(using, {})[key] = ct
        self.__class__._cache.setdefault(using, {})[ct.id] = ct


@python_2_unicode_compatible
class ContentType(models.Model):
    name = models.CharField(max_length=100)
    app_label = models.CharField(max_length=100)
    model = models.CharField(_('python model class name'), max_length=100)
    objects = ContentTypeManager()

    class Meta:
        verbose_name = _('content type')
        verbose_name_plural = _('content types')
        db_table = 'django_content_type'
        ordering = ('name',)
        unique_together = (('app_label', 'model'),)

    def __str__(self):
        # self.name is deprecated in favor of using model's verbose_name, which
        # can be translated. Formal deprecation is delayed until we have DB
        # migration to be able to remove the field from the database along with
        # the attribute.
        #
        # We return self.name only when users have changed its value from the
        # initial verbose_name_raw and might rely on it.
        model = self.model_class()
        if not model or self.name != model._meta.verbose_name_raw:
            return self.name
        else:
            return force_text(model._meta.verbose_name)

    def model_class(self):
        "Returns the Python model class for this type of content."
        try:
            return apps.get_model(self.app_label, self.model)
        except LookupError:
            return None

    def get_object_for_this_type(self, **kwargs):
        """
        Returns an object of this type for the keyword arguments given.
        Basically, this is a proxy around this object_type's get_object() model
        method. The ObjectNotExist exception, if thrown, will not be caught,
        so code that calls this method should catch it.
        """
        return self.model_class()._base_manager.using(self._state.db).get(**kwargs)

    def get_all_objects_for_this_type(self, **kwargs):
        """
        Returns all objects of this type for the keyword arguments given.
        """
        return self.model_class()._base_manager.using(self._state.db).filter(**kwargs)

    def natural_key(self):
        return (self.app_label, self.model)
Answer: I believe it is a dependency issue with MySQL. After I switched to PostgreSQL,
everything was solved. I found that the Python connector for MySQL only supports
up to Python 3.3, and I am using Python 3.4. That is probably the reason, and I
could not find a MySQL connector for Python 3.4.
|
Redirect Fabric output to a file
Question: I use fabric.api.local directly in my script. For example,
fabric_test.py
from fabric.api import local
local('echo hello world')
local('ls')
If I execute it without any io redirection, everything is fine
$ python fabric_test.py
[localhost] local: echo hello world
hello world
[localhost] local: ls
fabric_test.py test.log
If I execute it and redirect the output to a file, the order of the output is
messed up. Why? This problem is really annoying when I use fabric in some
cronjob and send the output to logfiles.
$ python fabric_test.py > test.log
$ cat test.log
hello world
fabric_test.py
test.log
[localhost] local: echo hello world
[localhost] local: ls
Answer: This is because fabric essentially executes your designated task as a
separate subprocess. Hence, the subprocess' output is not synchronized with the
fabric script's own (buffered) output. This is not a new issue and has been discussed
[before](https://github.com/fabric/fabric/issues/57); please check it out to
see workarounds.
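A stdlib-only illustration of the buffering cause and the usual workaround (this is a sketch of the idea, not Fabric's own API): when stdout is redirected to a file it becomes block-buffered, so the parent's banner sits in the buffer while the child process writes straight to the file. Flushing before spawning the child keeps the order intact:

```python
import subprocess
import sys

def run_logged(cmd):
    # Emit the banner and flush it BEFORE the child process writes,
    # so the ordering survives redirection to a file.
    sys.stdout.write("[localhost] local: %s\n" % cmd)
    sys.stdout.flush()
    out = subprocess.check_output(cmd, shell=True).decode()
    sys.stdout.write(out)
    return out

run_logged("echo hello world")
```

Running `python script.py > test.log` with this helper keeps each banner immediately before its command's output.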
|
Is there any way to make data corruption intentionally in python3?
Question: I'm now making an application which basically corrupts the data. That's what
it does. However, I cannot find ways to save the corrupted data into a
variable. I want the corrupted data saved in a python list like "holder=[]" to
make them accessible later. Is there any way to do that?
import random
import time
import threading

def example():
    for counter in range(10):
        print(counter)

def thread_set_1():
    thread1=threading.Thread(target=example)
    thread2=threading.Thread(target=example)
    thread3=threading.Thread(target=example)
    thread4=threading.Thread(target=example)
    thread5=threading.Thread(target=example)
    thread6=threading.Thread(target=example)
    thread7=threading.Thread(target=example)
    thread8=threading.Thread(target=example)
    thread1.start()
    thread2.start()
    thread3.start()
    thread4.start()
    thread5.start()
    thread6.start()
    thread7.start()
    thread8.start()

def thread_set_2():
    thread1=threading.Thread(target=thread_set_1)
    thread2=threading.Thread(target=thread_set_1)
    thread3=threading.Thread(target=thread_set_1)
    thread4=threading.Thread(target=thread_set_1)
    thread5=threading.Thread(target=thread_set_1)
    thread6=threading.Thread(target=thread_set_1)
    thread7=threading.Thread(target=thread_set_1)
    thread8=threading.Thread(target=thread_set_1)
    thread1.start()
    thread2.start()
    thread3.start()
    thread4.start()
    thread5.start()
    thread6.start()
    thread7.start()
    thread8.start()
Answer: If I understand you correctly, you would like to save your output? I think
that this won't be possible using a simple array, as the global interpreter
lock is going to prevent simultaneous access by multiple threads to a global
variable. However, if you write your output to a file, this will work. If you
change your function `example` like so:
def example():
    for counter in range(10):
        with open('testfile.txt','a') as fid:
            fid.write(str(counter))
and then run:
open('testfile.txt','w') #create empty file
thread_set_1()
thread_set_2()
`testfile.txt` will contain seemingly random numbers. However, this is going
to be very slow, as the file is going to be opened and closed every time a
number is written.
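A faster variant (my own sketch, not part of the original suggestion) keeps the file open for the whole run and takes an explicit lock around each write, rather than relying on implicit serialization:

```python
import threading

write_lock = threading.Lock()
log = open('testfile.txt', 'w')  # opened once, not per write

def example():
    for counter in range(10):
        with write_lock:  # serialize access to the shared file handle
            log.write(str(counter))

# spawn 8 writer threads, as in the question's thread_set_1
threads = [threading.Thread(target=example) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
log.close()
```

After this runs, `testfile.txt` contains the 80 interleaved digits without the per-write open/close overhead.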
The main reason to not use this procedure to create random numbers is that
they are not going to be truly random, as the numbers written to the file are
going to increase with time. You wrote that you plan on shuffling the numbers
afterwards; if you plan on using `random` for this (it is among your imports),
why do you want to additionally create your own random number generator?
|
Issue implementing git add/commit in sh python module
Question: I am experiencing the following strange behavior with git from sh python
module:
**Here is the python script:**
import sh
from datetime import datetime
now = str(datetime.now())
filename = "config.cfg"
file = filename
f = open(file, 'w')
f.write(now)
now = str(datetime.now())
comment = "-m " + "\"new version" + " registrered " + now + "\""
print(sh.sudo.git("add", filename))
print(sh.sudo.git("commit", comment))
**BEHAVIOR: Manually everything works fine:**
$ echo "new content" > config.cfg
$ git add config.cfg
$ sudo git commit -m "ospf configuration registrered 2014-10-24 15:30:33.531963"
[master 142cfd5] ospf configuration registrered 2014-10-24 15:30:33.531963
1 file changed, 1 insertion(+)
But not the implementation through python sh module
**I-### 1st execution is OK**
python sh1.py
[master 6d13519] "new version registrered 2014-10-24 15:37:42.534595"
1 file changed, 1 deletion(-)
**II-### any further execution gives the following:**
python sh1.py
Traceback (most recent call last):
File "sh1.py", line 24, in <module>
print(sh.sudo.git("commit", comment))
File "/home/user/anaconda/lib/python2.7/site-packages/sh.py", line 769, in __call__
return RunningCommand(cmd, call_args, stdin, stdout, stderr)
File "/home/user/anaconda/lib/python2.7/site-packages/sh.py", line 330, in __init__
self.wait()
File "/home/user/anaconda/lib/python2.7/site-packages/sh.py", line 334, in wait
self._handle_exit_code(self.process.wait())
File "/home/user/anaconda/lib/python2.7/site-packages/sh.py", line 348, in _handle_exit_code
self.process.stderr
sh.ErrorReturnCode_1:
RAN: '/usr/bin/sudo git commit -m "new version registrered 2014-10-24 15:38:02.528213"'
STDOUT:
On branch master
Untracked files:
sh1.py
nothing added to commit but untracked files present
STDERR:
**III- ### everything works fine after manual add and commit:**
$ sudo git add config.cfg
$ sudo git commit -m "new version registrered 2014-10-24 15:34:35.2415245"
[master 5d5fe35] new version registrered 2014-10-24 15:34:35.2415245
1 file changed, 1 insertion(+)
And now back to 1st behavior I, and so on...
* * *
The user is in sudoers and set NOPASSWD to avoid sudo password
Answer: You need to close the file object before you invoke the git commands.
f.close()
If you ran 'git status', you'd see that Git was complaining that after the
script ran, config.cfg still had changes that were unstaged for commit.
~~I have a feeling that python flushes an EOF character or something to the
file when it's closed, and therefore alters the file since you called 'git
add' from your script.~~ I don't know for sure unfortunately, only that the
issue was resolved for me when I closed the file prior to 'git add' 'git
commit' from the script.
Edit: Useless has corrected me and noted that the reason is that the file is
buffered.
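A minimal sketch of the fix: a `with` block guarantees the buffer is flushed and the file closed before any later `sh.sudo.git("add", ...)` call runs, so git stages the final contents rather than a partially written file:

```python
from datetime import datetime

filename = "config.cfg"

# The with block closes (and flushes) the file on exit, so a subsequent
# sh.sudo.git("add", filename) sees the completed write.
with open(filename, 'w') as f:
    f.write(str(datetime.now()))
```

This removes the need for an explicit `f.close()` and avoids the bug entirely.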
|
Flask Blueprint AttributeError: 'module' object has no attribute 'name' error
Question: My API is being built to allow developers to extend its functionality. My
plan is to do this by providing an "extensions" directory where they can drop
in Blueprints and they will be dynamically loaded. This is the code I am
utilizing to import (modified from this
[tutorial](https://lextoumbourou.com/blog/posts/dynamically-loading-modules-and-classes-in-python/))
from flask import Flask
import pkgutil
import sys

app = Flask(__name__)

EXTENSIONS_DIR = "extensions"
modules = pkgutil.iter_modules(path=[EXTENSIONS_DIR])

for loader, mod_name, ispkg in modules:
    if mod_name not in sys.modules:
        # It imports fine
        loaded_mod = __import__(EXTENSIONS_DIR+"."+mod_name+"."+mod_name, fromlist=[mod_name])
        # It does not register
        app.register_blueprint(loaded_mod)
This is the directory layout of my project. The `extensions` directory is
where developers drop in their expanded functionality.
/root
    /extensions
        /extension1
            __init__.py
            extension1.py
        /extension2
            __init__.py
            extension2.py
    simple_example.py
The problem is that I get this error and am not sure what it is telling me.
>python simple_example.py
Traceback (most recent call last):
File "simple_example.py", line 14, in <module>
app.register_blueprint(loaded_mod)
File "C:\Python27\lib\site-packages\flask\app.py", line 62, in wrapper_func
return f(self, *args, **kwargs)
File "C:\Python27\lib\site-packages\flask\app.py", line 880, in register_blueprint
if blueprint.name in self.blueprints:
AttributeError: 'module' object has no attribute 'name'
A simple extension looks like this
from flask import Blueprint

extension1 = Blueprint('extension1', __name__)

@extension1.route("/my_route")
def treasure_list():
    return "list of objects"
How do I solve the `AttributeError` in a way that allows my
`app.register_blueprint` call to succeed?
Answer: You are trying to register the _module_ and not the contained `Blueprint`
object.
You'll need to introspect the module to find `Blueprint` instances instead:
from flask import Blueprint

if mod_name not in sys.modules:
    loaded_mod = __import__(EXTENSIONS_DIR+"."+mod_name+"."+mod_name, fromlist=[mod_name])
    for obj in vars(loaded_mod).values():
        if isinstance(obj, Blueprint):
            app.register_blueprint(obj)
|
efficient different sized list comparisons
Question: I wish to compare around 1000 lists of varying size. Each list might have
thousands of items. I want to compare each pair of lists, so potentially
around 500000 comparisons. Each comparison consists of counting how many of
the smaller list exists in the larger list (if same size, pick either
list).Ultimately I want to cluster the lists using these counts. I want to be
able to do this for two types of data:
1. any textual data
2. strings of binary digits of the same length.
Is there an efficient way of doing this in python? I've looked at LShash and
other clustering related algorithms, but they seem to require same length
lists. TIA.
An example to try to clarify what I am aiming to do:
List A: car, dig, dog, the.
List B: fish, the, dog.
(No repeats in any list. Not sorted although I suppose they could be fairly
easily. Size of lists varies.)
Result:2, since 'dog' and 'the' are in both lists.
In reality the length of each list can be thousands and there are around 1000
such lists, each having to be compared with every other.
Continuing the example:
List C: dog, the, a, fish, fry.
Results: AB: 2 AC: 2 BC: 3
Answer: Nothing is going to be superfast, and there's a lot of data there (half a
million results, to start with), but the following should fit in your time and
space budget on modern hardware.
If possible, start by sorting the lists by length, from longest to shortest.
(I don't mean sort each list; the order of elements within a list is
irrelevant. I mean, sort the collection of lists so that you can process the
longest list first.) The only point of doing this is to allow the similarity
metrics to be stored in a half-diagonal matrix instead of a full matrix, which
saves half the matrix space. So if you don't know the lengths of the lists
before you start, it's not a crisis; it just means that you'll need a bit more
space.
**Note 1:** The important thing is that the metric you propose is completely
symmetric as long as no list has repeated elements. (Without repeated elements,
the metric is simply `|A⋂B|`, regardless of whether `A` or `B` is longer, so
when you compute the size of the intersection of `A` and `B` you can fill in
the similarity matrix for both `(A,B)` and `(B,A)`.)
**Note 2:** The description of the algorithm seemed confusing to me when I
reread it so I changed the word "list" to "_list_ " when it refers to one of
the thousand input _lists_ , leaving "list" to mean an ordinary Python list.
Because lists can't be keys in Python dictionaries, working on the assumption
that _lists_ are implemented as lists, it's necessary to somehow identify each
_list_ with an identifier which can be used as a key. I hope that's clear.
### The Algorithm:
We need two auxiliary structures: one is the (half-diagonal) result matrix,
keyed by pairs of _list_ identifiers, which we initialize to all 0s. The other
one is a dictionary keyed by unique data element, mapping onto a list of
_list_ identifiers.
Then, taking each list in turn, for each element in that list we do the
following:
1. If the element is not yet present in the dictionary, add it, mapping to a single element list consisting of the current _list's_ identifier.
2. If the element is present in the dictionary but the last element in the corresponding list of ids is the current id, then we've found a repeated element. Since we don't expect repeated elements, either ignore it or issue an error message.
3. Otherwise, we've seen the element before and we have a list of identifiers of _lists_ in which the element appears. For each such identifier, increment the similarity count between the current identifier and the identifier in the list. (Note that if we scan _lists_ in reverse order by length, all the identifiers in the list correspond to _lists_ which are at least as long as the current _list_ , which is why we sorted the _lists_ in the first place.) Finally, append the current identifier to the end of the list, so that the next time that data element is found, the current _list_ will be present.
That's it. The space requirement is `O(N² + M)` where `N` is the number of
_lists_ and `M` is the total size of all the lists. The time requirement is
essentially `O(M²)` in the worst case -- that being the case where every
_list_ has exactly one element and they are all the same element. (More
precisely, it's the sum of the squares of the frequencies of each unique
element.)
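For concreteness, the algorithm above can be sketched as follows (the identifier scheme and container choices are my own assumptions):

```python
from collections import defaultdict

def pairwise_intersections(lists):
    """For every pair of lists, count how many elements they share.

    `lists` maps a list identifier to its elements (assumed unique
    within each list, as in the question).
    """
    counts = defaultdict(int)    # (current_id, earlier_id) -> shared count
    seen_in = defaultdict(list)  # element -> ids of lists containing it
    # Process longest lists first, so every stored id refers to a list
    # at least as long as the current one (half-diagonal result matrix).
    for list_id in sorted(lists, key=lambda k: len(lists[k]), reverse=True):
        for element in lists[list_id]:
            for other_id in seen_in[element]:
                counts[(list_id, other_id)] += 1
            seen_in[element].append(list_id)
    return dict(counts)

# The question's example:
lists = {'A': ['car', 'dig', 'dog', 'the'],
         'B': ['fish', 'the', 'dog'],
         'C': ['dog', 'the', 'a', 'fish', 'fry']}
sims = pairwise_intersections(lists)  # AB: 2, AC: 2, BC: 3
```

Each element of each list is visited once, plus one increment per pairwise co-occurrence, matching the complexity analysis above.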
|
LXML escaped character conversion
Question: Ok so first of all I have a script which is operating on a dos file formatted
XML file. That is, the file has \r\n line terminations. Furthermore, the XML
file I am operating on has some newlines embedded within some attributes. The
XML editor which produced the XML encodes these newlines as: `&#13;`
I am using LXML and some of the processing I am doing changes these text
attributes into XML elements. The problem I am seeing is that the text blocks
with the newlines in them end up as elements but with some crud just before
the newline, i.e. `&#13;`, which, by the way, is equivalent to `\r` as I
understand it.
Now, this, to me seems to be an issue in that the script I am executing is
executing within a linux environment and it dumps out a linux file formatted
file.
It appears to me as if LXML is correctly seeing that `&#10;` is an escaped
newline and changes this for an actual newline in the destination element. It
seems to be forgetting about the `&#13;` though.
I created a test xml file:
<?xml version='1.0' encoding='UTF-8'?>
<element1>
<element2 value="0"/>
<element3 documentation="Some documentation.

Some more documentation"/>
</element1>
And here is a sample python file to do some manipulation:
#!/usr/bin/env python
import re
import argparse
import sys
import lxml.etree as ET
xml = ET.parse('test.xml')
root = xml.getroot()
elem = root.find('element3')
doc = ET.SubElement(elem, 'documenation')
doc.text = elem.get('documentation')
xml.write('out.xml', encoding='UTF-8', method="xml", pretty_print=True)
Here is the output:
<element1>
<element2 value="0"/>
<element3 documentation="Some documentation. Some more documentation"><documenation>Some documentation.
Some more documentation</documenation></element3>
</element1>
I guess my question is this: is this how you guys would expect this to work? I
would have thought that in the documentation element, we would no longer have
the `&#13;` references any more. It seems a bit inelegant if I have to now do some
grepping for these extraneous CR characters.
Answer: Yes, that is precisely how I'd expect it to work. While XML processors are
required to normalize line-endings on input, there is no requirement to
normalize line endings on output.
If I were in your shoes, I'd fix the code that generated that XML to avoid
putting carriage returns in attributes. Failing that, I'd try:
doc.text = elem.get('documentation').replace('\r','')
|
Nested Lists and their index calls
Question: I was wondering how I would go about assigning a value to a specific index in
a list that is inside another list. For example:
For a list parentList
for i in range(0,rows, 1):
    parentList[i] = []
    for j in range(0,cols,1):
        parentList[i[j]] = 0
Not sure if that is even the right syntax for assignment like this.
I know I can do .append, but what I need to do for my problem is create a
sparse matrix, so everything should be 0 except for the values that the user
specifies in the form `(row) (col) (value)`.
So I thought using the indexes of the lists would help with assignment. If
there is a better way, please let me know!
EDIT: This is an example of what is being done
input:
1 2 2
1 3 3
2 1 4
3 3 5
0 0 0
1 3 7
2 2 8
3 2 9
3 3 6
0 0 0
The first matrix in the input is:
0 2 3
4 0 0
0 0 5
The second matrix is:
0 0 7
0 8 0
0 9 6
Their product is:
0 43 18
0 0 28
0 45 30
So the output should be:
1 2 43
1 3 18
2 3 28
3 2 45
3 3 30
0 0 0
I have to use a 1D array of linked lists to represent this in my code and all
input and output is done through the Python Shell.
Answer: It is easy to work with numpy arrays:
import numpy as np

a = np.empty((3,3))
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        a[i][j] = 0

>>> a
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
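For reference, the pure-Python equivalent (a sketch using the question's first sparse matrix as sample input; variable names are illustrative) is a nested list comprehension, after which indexing is `parent_list[i][j]`:

```python
rows, cols = 3, 3
# One new inner list per row -- note that [[0]*cols]*rows would instead
# alias a single row object rows times.
parent_list = [[0] * cols for _ in range(rows)]

# user input in (row) (col) (value) form, 1-based as in the example
for row, col, value in [(1, 2, 2), (1, 3, 3), (2, 1, 4), (3, 3, 5)]:
    parent_list[row - 1][col - 1] = value
```

This reproduces the first matrix from the question.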
|
How to receive UDP packets in Python without dropping
Question: I am writing simple software to parse MPEG-TS stream to check CC (cointinuity
counter) to see if any packets were dropped. When I run my script against file
it works flawlessly. But when using it on UDP stream it shows losses (which
are not confirmed by another software):
It is quite simple:
while True:
    received = sock.recv(7 * 188)
    parsepacket(received)
I left out the `parsepacket` function for clarity. It just uses bitstring to
analyze the packet bit by bit.
In my understanding, while the `parsepacket` function runs, other UDP packets
are simply dropped because I am not calling `sock.recv` quickly enough (on a
5 Mbps stream it should parse about 500 packets per second).
I tried using `sock.makefile` but no luck. I get the same results.
My idea is to have a receiving thread running in the background and another
thread continuously parsing what it receives. But honestly I have no idea how
(besides putting it all in memory, which would run out very quickly).
Answer: Without seeing your script, or how you're actually doing any of this, we won't
be able to help you. However, this is trivial if you use Python's `twisted`
library. Here's a simple example, taken from their [examples
page](http://twistedmatrix.com/documents/13.2.0/core/howto/udp.html#auto4):
from twisted.internet.protocol import DatagramProtocol
from twisted.internet import reactor


class MulticastPingPong(DatagramProtocol):

    def startProtocol(self):
        """
        Called after protocol has started listening.
        """
        # Set the TTL>1 so multicast will cross router hops:
        self.transport.setTTL(5)
        # Join a specific multicast group:
        self.transport.joinGroup("228.0.0.5")

    def datagramReceived(self, datagram, address):
        print "Datagram %s received from %s" % (repr(datagram), repr(address))
        if datagram == "Client: Ping":
            # Rather than replying to the group multicast address, we send the
            # reply directly (unicast) to the originating port:
            self.transport.write("Server: Pong", address)


# We use listenMultiple=True so that we can run MulticastServer.py and
# MulticastClient.py on same machine:
reactor.listenMulticast(8005, MulticastPingPong(),
                        listenMultiple=True)
reactor.run()
You can parse each packet in the `def datagramReceived` function.
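As a sketch of the two-thread idea from the question (an assumption on my part, not part of the answer above): a bounded `Queue` lets one thread do nothing but `recv` while another parses, and the `maxsize` caps memory use instead of letting it grow without limit:

```python
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

packet_queue = queue.Queue(maxsize=10000)  # bounded: caps memory use

def receiver(sock):
    # Tight loop: nothing but recv + enqueue, so packets aren't dropped
    # while parsing is in progress.
    while True:
        packet_queue.put(sock.recv(7 * 188))

def parser():
    while True:
        received = packet_queue.get()
        parsepacket(received)  # the CC-checking function from the question

# t = threading.Thread(target=receiver, args=(sock,))
# t.daemon = True
# t.start()
# parser()
```

If the parser falls behind for long enough, `put` blocks once the queue is full, which makes the backlog visible instead of silently losing packets in the kernel buffer.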
|
Linux : python : clear input buffer before raw_input()
Question: I have looked at a few threads about this, but they don't seem to solve my
problem. I am running Linux, and when I use raw_input() with a pause between
calls, it will consume the keys that I pressed during the pause. Here is an example:
import time
a = raw_input("first input")
b = raw_input("second input")
time.sleep(5)
#flush junk?
a = raw_input("third input")
b = raw_input("fourth input")
If I press any keys followed by enter during the 5 seconds, the two later
raw_input calls will take that input. I would like to be able to flush the
pending data and let the user be prompted fresh.
thank you.
Answer: Use a getch-style keypress read via the `tty` module (Linux) or `msvcrt`
(Windows), and use the `sys.stdout.flush()` function to flush the buffer.
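On Linux specifically, one concrete way to discard pending keystrokes before prompting is `termios.tcflush` (a hedged sketch; the helper name is my own, and `raw_input`/`input` is chosen per Python version):

```python
import sys
import termios

try:                      # Python 2 / 3 compatibility
    read_line = raw_input
except NameError:
    read_line = input

def flushed_input(prompt):
    """Prompt for input, first discarding anything already typed."""
    if sys.stdin.isatty():  # tcflush only works on a real terminal
        termios.tcflush(sys.stdin, termios.TCIFLUSH)
    return read_line(prompt)
```

Calling `flushed_input("third input")` after the `time.sleep(5)` in the question's example would ignore keys pressed during the pause.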
|
How to put string on two lines in Python?
Question: I am brand new to the world of Python, PyCharm, and Web API testing.
I am trying to test the error message that gets displayed when an error is made
in the Web API. This error message has two parts, displayed on two separate
lines.
But somehow any string definition I generate for comparison is always
displayed on one line.
This is one of the things I tried - created a string with new line \n between
two parts.
wp_error = 'This page can\'t be saved.\n Some required information is missing.'
# create new workspace and save it without filling up any information.
self.test_mycode.click_and_wait(self.workspace_overview.new_workspace_button,
self.new_workspace._save_button_locator)
self.new_workspace.save_button.click()
self.message.check_message('error', wp_error)
But this didn't work and I got:
in check_message Assert.equal(message.message_body.text, message_text)
self = <class 'unittestzero.Assert'>
first = "This page can't be saved.
Some required information is missing."
second = "This page can't be saved.\n Some required information is missing."
.....
> assert first == second, msg
E AssertionError: None
So my question is how do I define the string to appropriately test the error
message that appears on two lines? Thank you.
Answer: If:
first = """This page can't be saved.
Some required information is missing."""
second = "This page can't be saved.\n Some required information is missing."
assert first == second
fails, then the problem is probably that:
first == "This page can't be saved.\nSome required information is missing."
second == "This page can't be saved.\n Some required information is missing."
I.e. there is an extra space in the second one, after the newline. (Note also
the triple quotes, to allow a string to span lines without the compiler
complaining.)
Solutions: You can either:
1. Be extremely careful with your test data.
2. Use a "shim" to allow "approximately equals". For example:
import re

FOLD_WHITESPACE = re.compile(r'\s+')

def oneline(s):
    return FOLD_WHITESPACE.sub(" ", s)

assert oneline(first) == oneline(second)
I am not arguing that this particular transformation is the ideal one for all
string comparisons, but it's a simple one that gets at your need to not be
excessively concerned with whitespaces (including line breaks).
Similar "almost equals" or "transformed equals" tests are often convenient or
required for testing both string and floating point values.
Btw, if you're using the object call version of assert, it might be couched
as:
Assert.equal(oneline(message.message_body.text),
oneline(message_text))
|
What to pass when passing arguments where a list or tuple is required?
Question: Which of the following should I use and why?
import numpy as np
a = np.zeros([2, 3])
b = np.zeros((2, 3))
There are many cases where you can pass arguments in either way, I just wonder
if one is more Pythonic or if there are other reasons where one should be
preferred over the other.
I looked at [this question](http://stackoverflow.com/questions/8900166/whats-
the-difference-between-lists-enclosed-by-square-brackets-and-parentheses)
where people tried to explain the difference between a tuple and a list.
That's not what I'm interested in, unless there are reasons I should care
which I ignore of course!
UPDATE:
Although numpy was used as an example this pertains generally to python. A non
numpy example is as follows:
a = max([1, 2, 3, 5, 4])
b = max((1, 2, 3, 5, 4))
I'm not editing the above because some answers use numpy in their explanation
Answer: I'm answering this in the context of passing a literal iterable to a
constructor or function beyond which the type does not matter. If you need to
pass in a hashable argument, you need a tuple. If you'll need it mutated, pass
in a list (so that you don't add tuples to tuples thereby multiplying the
creation of objects.)
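For example, only the tuple form can serve where hashability is required, such as a dictionary key:

```python
coords = {}
coords[(2, 3)] = "tuple keys work"       # tuples are hashable
try:
    coords[[2, 3]] = "list keys do not"  # lists are mutable -> unhashable
    raised = False
except TypeError:
    raised = True
```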
The answer to your question is that the better option varies situationally.
Here's the tradeoffs.
Starting with `list` type, which is mutable, it preallocates memory for future
extension:
a = np.zeros([2, 3])
**Pro** : It's easily readable.
**Con** : It wastes memory, and it's less performant.
Next, the `tuple` type, which is immutable. It doesn't need to preallocate
memory for future extension, because it can't be extended.
b = np.zeros((2, 3))
**Pro** : It uses minimal memory, and it's more performant.
**Con** : It's a little less readable.
My preference is to pass tuple literals where memory is a consideration, for
example, long-running scripts that will be used by lots of people. On the
other hand, when I'm using an interactive interpreter, I prefer to pass lists
because they're a bit more readable, the contrast between the square brackets
and the parenthesis makes for easy visual parsing.
You should only care about performance in a function, where the code is
compiled to bytecode:
>>> min(timeit.repeat('foo()', 'def foo(): return (0, 1)'))
0.080030765042010898
>>> min(timeit.repeat('foo()', 'def foo(): return [0, 1]'))
0.17389221549683498
Finally, note that this performance consideration will be dwarfed by other
considerations. You use Python for speed of development, not for speed of
algorithmic implementation; if you use a bad algorithm, your performance will
be much worse. I consider the difference important only insofar as it scales:
it can keep heavily used processes from dying a death of a thousand cuts.
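To make the hashability tradeoff mentioned at the top concrete, here is a
minimal sketch (plain Python, no numpy needed):

```python
# A tuple is hashable and can be used as a dict key; a list cannot.
shape = (2, 3)
cache = {shape: "2x3 zeros"}
assert cache[(2, 3)] == "2x3 zeros"

try:
    cache[[2, 3]] = "boom"   # raises TypeError: unhashable type: 'list'
    unhashable = False
except TypeError:
    unhashable = True
assert unhashable
```

So whenever the argument must later serve as a dict key or set member, the
tuple form is the only option; otherwise the choice is mostly stylistic.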
|
Compiling C extension with anaconda on Travis-CI missing __log_finite symbol
Question: A C extension module that compiles fine on Travis-CI without anaconda fails
when installed with anaconda. It appears to install just fine, but when I try
to import it, I get the following error:
ImportError: /home/travis/anaconda/lib/python2.7/site-packages/quaternion/numpy_quaternion.so: undefined symbol: __log_finite
The full error can be seen [here](https://travis-
ci.org/moble/spherical_functions/jobs/38972357). Obviously, this looks like a
linker error, where it can't find glibc (which I believe is where
`__log_finite` is found). But why should it fail to find glibc?
When I run `nm` on that .so file (through Travis), it shows that __log_finite
is indeed undefined, but shouldn't it find it through the usual process?
I've tried installing `quaternion` through `pip` and I've tried installing it
by directly downloading it and running `python setup.py install`. Both seem to
work, in the sense that it looks like all the files are where they should be.
But both fail on import because they can't find that symbol.
I've even tried installing the _full_ version of anaconda (rather than just
miniconda, [which is recommended](http://conda.pydata.org/docs/travis.html)).
Nothing seems to work. How can I make Travis find that symbol, and is this
something I'll have to worry about ordinarily with my distribution?
Answer: It appears to be a problem with a `-ffast-math` flag in my `quaternion`
package. One thing that flag does is make the code assume that the numbers are
finite, so that instead of using the `log` function, it uses some `log_finite`
function, which for some reason Travis doesn't have --- or something. Anyway,
I have my numba package set an environment variable in Travis builds, which
the `quaternion` package then looks for on installation, and turns off fast-
math. This is unfortunate, because it means I'm not actually testing the code
as it's actually used. But it means my code builds and tests pass.
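For illustration, the installation-time check can look something like the
sketch below. The environment variable name `ON_TRAVIS` and the flag list are
hypothetical, not the actual quaternion setup code:

```python
import os

def extra_compile_args():
    """Pick compiler flags; drop -ffast-math when a CI env var is set."""
    args = ["-O3"]
    if os.environ.get("ON_TRAVIS") != "1":
        # Only enable fast-math outside CI, where the finite-math-only
        # symbols (like __log_finite) are known to resolve.
        args.append("-ffast-math")
    return args

# In setup.py this list would be passed to Extension(extra_compile_args=...).
```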
There seems to be about [one
mention](http://issues.numenta.org/browse/NPC-148?focusedCommentId=10145&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-
tabpanel#comment-10145) of this on the internet. Or not; I can't tell.
|
Python dynamic import and __all__
Question: I am facing a behaviour that I don't understand even through I feel it is a
very basic question...
Imagine you have a python package `mypackage` containing a module `mymodule`
with 2 files `__init__.py` and `my_object.py`. Last file contains a class
named `MyObject`.
I am trying to write an automatic import within the `__init__.py` that is
equivalent to:
**__init__.py**
__all__ = ['MyObject']
from my_object import MyObject
in order to be able to do:
from mypackge.mymodule import MyObject
I came up with a solution that fills `__all__` with all the classes' names. It
uses `__import__` (I also tried the `importlib.import_module()` method), but
when I try to import `MyObject` from `mymodule`, it keeps telling me:
ImportError: cannot import name MyObject
Here is the script I started with :
classes = []
for module in os.listdir(os.path.dirname(__file__)):
if module != '__init__.py' and module[-3:] == '.py':
module_name = module[:-3]
import_name = '%s.%s' % (__name__, module_name)
# Import module here! Looking for an equivalent to 'from module import MyObject'
# importlib.import_module(import_name) # Same behaviour
__import__(import_name, globals(), locals(), ['*'])
members = inspect.getmembers(sys.modules[import_name], lambda member: inspect.isclass(member) and member.__module__.startswith(__name__) )
names = [member[0] for member in members]
classes.extend(names)
__all__ = classes
Can someone help me on that point ?
Many thanks,
Answer: I don't think you need a heavy script to perform what you are looking
for. It should work with simple, basic Python.
It seems there is a mistake in your `__init__.py` file within your package.
You should have something like :
from mymodule import MyObject
instead of
from my_object import MyObject
Here is a working simple example : <https://github.com/julozi/package-test>
Cheers,
Julien
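For completeness: the reason the original loop fails is that `__import__`
loads the submodule but never binds the discovered classes into the package
namespace, so `from mypackage.mymodule import MyObject` has nothing to find.
A hedged sketch of the missing binding step (the helper name is made up):

```python
import importlib
import inspect

def load_classes(module_name, namespace):
    """Import a module and bind its classes into the given namespace."""
    module = importlib.import_module(module_name)
    names = []
    for name, member in inspect.getmembers(module, inspect.isclass):
        namespace[name] = member   # the binding step the original code lacks
        names.append(name)
    return names

# In __init__.py this would be used roughly as:
#   __all__ = load_classes(__name__ + '.my_object', globals())
```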
|
How to use "from modname import *" to create and modify global names?
Question: I have read [the
documentation](https://docs.python.org/3.3/reference/simple_stmts.html#the-
import-statement) and also [this comprehensive
answer](http://stackoverflow.com/questions/710551/import-module-or-from-
module-import) but something doesn't work as I expect. I hoped to use `from
set_stuff_up import *` as basically an "include" statement to define some
boilerplate globals and functions, but something doesn't work as I'd expect.
Here is set_stuff_up.py:
public_name = None
def set_public_name():
global public_name # makes no diff
public_name = 'HELLO DERE'
However the following code:
from set_stuff_up import *
print('public_name is',public_name)
set_public_name()
print('public_name is',public_name)
produces the output:
public_name is None
public_name is None
In other words, `from set_stuff_up import *` did bind `public_name` "in the
local namespace for the scope where the import statement occurs" but somehow
the function `set_public_name` operates on a different public_name, regardless
of the global statement.
Can somebody clarify the way the namespaces are operating in this case? And is
there a way to have the imported function operate on the imported public name?
Answer: It's important to understand that Python variables are names referring to
values. The globals of a module are names that refer to values. When you use
`from set_stuff_up import *`, that means, make names in this module that refer
to the same values as the corresponding names in set_stuff_up. So now you have
two names, `set_stuff_up.public_name`, and `my_module.public_name`, both
referring to the same value (None in this case). When you run
`set_public_name`, you are reassigning `set_stuff_up.public_name` to refer to
a new value. Your other name, `my_module.public_name` is unaffected.
Names cannot refer to other names, only to values. Your situation here is no
different than:
a = 1
b = a
a = 2
You don't expect b to equal 2 at this point. Assigning to a doesn't affect any
other name that shares the same value as a.
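The distinction between rebinding a name and mutating a value can be
demonstrated directly:

```python
a = [1]
b = a           # b and a refer to the same list object
a = [2]         # rebinds a only; b still refers to the original list
assert b == [1]

a = b
a.append(3)     # mutation is visible through every name for the object
assert b == [1, 3]
```

This is exactly why the imported `public_name` never changes: the function
rebinds the module's name, and rebinding never propagates to other names.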
For more detail on this, see [Facts and Myths about Python Names and
Values](http://nedbatchelder.com/text/names.html).
There isn't a way to organize your code to get the exact effect you are
looking for. You could use this:
import set_stuff_up
print('public_name is', set_stuff_up.public_name)
set_stuff_up.set_public_name()
print('public_name is', set_stuff_up.public_name)
|
group by function in sqlite3 or itertools.groupy in python, sorting and grouping middle values within a string
Question: I have the following connection to a SQLite database returning data in the
"phone1" column, I'm looking to skip the area code portion and order by the
exchange (middle value) portion of phone1 column. Implementing wildcards seems
to work if you're testing for a known exchange, however I'd like group by or
order by starting with the 5 string character.
first_name, last_name, company, address, city, county, state, zip, **phone1**
, phone2, email, web
(0, ('201-238-5688',))
(1, ('201-431-2989',))
(2, ('201-474-4924',))
(3, ('201-588-7810',))
(4, ('201-672-1553',))
.....
(495, ('973-943-3423',))
(496, ('978-626-2978',))
(497, ('978-697-6263',))
(498, ('979-718-8968',))
(499, ('985-890-7262',))
import os, csv, json, re
import sqlite3
conn = sqlite3.connect('US_500.sqlite')
conn.text_factory = str
cursor = conn.cursor()
reader = cursor.execute ("SELECT phone1 FROM SampleData ORDER BY substr(phone1, 5) BETWEEN 200 AND 300")
tabledata = cursor.fetchall()
for row in enumerate(tabledata):
print str(row)
conn.close()
Answer: I don't think that you need to group, ordering should be sufficient.
In a SQL query you can use `substr()` to select part of a field, and then
order by that. Your query could be:
SELECT phone1 FROM SampleData order by substr(phone1, 5, 3)
This extracts 3 characters starting at the 5th position (so characters 5, 6 &
7) and uses them for ordering.
Similarly you can also do it in Python by calling `sorted()` on the query
results and passing a key function that selects the range of characters by
which to sort:
tabledata = cursor.fetchall()
for row in enumerate(sorted(tabledata, key=lambda x: x[0][4:7])):
    print str(row)
Note that each fetched row is a 1-tuple, so the sort key must index into the
string with `x[0]` before slicing out characters 5 to 7.
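The SQL ordering can be verified with a small self-contained, in-memory
database (table and column names taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SampleData (phone1 TEXT)")
conn.executemany("INSERT INTO SampleData VALUES (?)",
                 [("201-588-7810",), ("973-943-3423",), ("201-238-5688",)])

# Order by the three-digit exchange (characters 5-7 of the phone number).
rows = conn.execute(
    "SELECT phone1 FROM SampleData ORDER BY substr(phone1, 5, 3)"
).fetchall()
# rows: [('201-238-5688',), ('201-588-7810',), ('973-943-3423',)]
```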
|
GDB printing STL data
Question: After following the instructions given on this site:
<https://sourceware.org/gdb/wiki/STLSupport> GDB is still unable to print the
contents of stl containers like vectors, other than printing out a huge amount
of useless information. When GDB loads, I also get the following errors, which
I think are related to the Python that I put into `~/.gdbinit`
Traceback (most recent call last):
File "<string>", line 4, in <module>
File "/Users/mayankp/gdb_printers/python/libstdcxx/v6/printers.py", line 1247, in register_libstdcxx_printers
gdb.printing.register_pretty_printer(obj, libstdcxx_printer)
File "/usr/local/share/gdb/python/gdb/printing.py", line 146, in register_pretty_printer
printer.name)
RuntimeError: pretty-printer already registered: libstdc++-v6
/Users/mayankp/.gdbinit:6: Error in sourced command file:
Error while executing Python code.
Answer: > When GDB loads, I also get the following errors...
It looks like instructions you followed on
<https://sourceware.org/gdb/wiki/STLSupport> are invalid now. If you look at
`svn log` you will see that registering of pretty printers was added in
`__init__.py` recently:
------------------------------------------------------------------------
r215726 | redi | 2014-09-30 18:33:27 +0300 (Вт., 30 сент. 2014) | 4 lines
2014-09-30 Siva Chandra Reddy <[email protected]>
* python/hook.in: Only import libstdcxx.v6.
* python/libstdcxx/v6/__init__.py: Load printers and xmethods.
------------------------------------------------------------------------
And therefore the second registration throws an error. You can remove it or
comment it out:
#register_libstdcxx_printers (None)
> GDB is still unable to print the contents of stl containers
You have probably mismatched pretty printers with your gcc. See
<http://stackoverflow.com/a/9108404/72178> for details.
|
Python Wake On Lan
Question: I'm trying to write a script that checks if a host on my intranet is up. if
so, wait 10 seconds and test again. if it is down, send a wake on lan packet
to the host, then test again in 10 seconds. The code compiles but doesn't seem
to be working. Any help is appreciated.
import os
import socket
def main():
hostname = "10.0.0.5"
response = os.system("ping -c 1 " + hostname)
if response == 0:
print ("Host " + hostname + "is up.")
Time.Sleep(10)
main()
else:
print("Host " + hostname + "is down.")
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto('\xff'*6 + '\x00\x21\x6A\xC7\x1A\x42'*16, ('10.0.0.5', 80))
Time.Sleep(10)
main()
**Update:** I changed the if conditional to != with the 10.0.0.5 host on to
test if it's sending the packet, and it's not (confirmed with wireshark). I
don't know if it's even running my code tbh.
The new code seems to be working; the only issue is that it ignores
`time.sleep` and just repeats once the ping finishes.
import os
import socket
import subprocess
import time
from time import sleep
x = 0
while x < 1:
hostname = "10.0.0.5"
output = subprocess.Popen(["ping.exe",hostname],stdout = subprocess.PIPE).communicate()[0]
if ('unreachable' in output):
print hostname, 'is down.'
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.sendto('\xff'*6 + '\x00\x21\x6A\xC7\x1A\x42'*16, ('10.0.0.255', 80))
time.sleep(10)
else:
print hostname, 'is up.'
time.sleep(10)
x = x + 0
Answer: The indentation is off in your updated code. Also, looking for
'unreachable' in the output isn't the best check: what if the ping timed out
or showed another error? I would use the return code instead.
Here is an updated version. Make sure to preserve the indentation.
import os
import time
import socket
import subprocess
hostname = "10.0.0.5"
while 1:
sp = subprocess.Popen(["ping.exe", hostname], stdout = subprocess.PIPE)
sp.wait() # Wait for ping.exe to terminate.
return_code = sp.returncode # Get the return code
if return_code != 0:
print hostname, 'is down.'
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#I'm not that familiar with this part.assuming yours is correct.
s.sendto('\xff'*6 + '\x00\x21\x6A\xC7\x1A\x42'*16, ('10.0.0.255', 80))
else:
print hostname, 'is up.'
time.sleep(10) # Sleep for 10 seconds
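As an aside, rather than hard-coding the 102-byte payload, the magic packet
can be built from the MAC address string. This is a sketch (UDP port 9 is
conventionally used for Wake-on-LAN, but any port works since the NIC only
inspects the payload; the broadcast address below is from the question):

```python
import binascii
import socket

def magic_packet(mac):
    """Magic packet = 6 bytes of 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = binascii.unhexlify(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must contain exactly 6 octets")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="10.0.0.255", port=9):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(magic_packet(mac), (broadcast, port))
    s.close()
```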
|
Decoding/Encoding Href Links
Question: How can I get the result of `expected` to return a readable string? In other
words, when given `/wiki/Cookbook:Cao_l%E1%BA%A7u`, it should return
`/wiki/Cookbook:Cao_lầu`.
**Note** : I'm running on Python 2.7.2
import urllib
test_array = [
'/wiki/Cookbook:Bulgarian_Meatball_Soup_(Supa_Topcheta)',
'/wiki/Cookbook:Campfire_S%27mores',
'/wiki/Cookbook:Candied_Almonds_(Br%C3%A4nda_mandlar)',
'/wiki/Cookbook:Chicken_%26_Pasta_Alfredo',
'/wiki/Cookbook:Cozido_%C3%A0_Portuguesa'
]
actual = [urllib.unquote(i).decode('utf-8') for i in test_array]
assert '/wiki/Cookbook:Bulgarian_Meatball_Soup_(Supa_Topcheta)' == actual[0]
assert "/wiki/Cookbook:Campfire_S'mores" == expected[1]
assert '/wiki/Cookbook:Candied_Almonds_(Brända_mandlar)' == actual[2]
assert '/wiki/Cookbook:Chicken_&_Pasta_Alfredo' == actual[3]
assert '/wiki/Cookbook:Cozido_à_Portuguesa' == actual[4]
Answer: You need to specified the unicode literals (prefixing `u`) instead of string
literals, because the
[`str.decode`](https://docs.python.org/2/library/stdtypes.html#str.decode)
returns `unicode` object.
assert u'/wiki/Cookbook:Bulgarian_Meatball_Soup_(Supa_Topcheta)' == expected[0]
assert u"/wiki/Cookbook:Campfire_S'mores" == expected[1]
assert u'/wiki/Cookbook:Candied_Almonds_(Brända_mandlar)' == expected[2]
assert u'/wiki/Cookbook:Chicken_&_Pasta_Alfredo' == expected[3]
assert u'/wiki/Cookbook:Cozido_à_Portuguesa' == expected[4]
BTW, I would give `expected` a different name like `actual` or `got`. (String
literals are the expected outcome, right?)
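For later readers on Python 3, `urllib.unquote` moved to
`urllib.parse.unquote`, which decodes percent-escapes straight to `str` (UTF-8
by default), so no separate `.decode('utf-8')` step is needed:

```python
from urllib.parse import unquote  # Python 3 equivalent of urllib.unquote

assert unquote('/wiki/Cookbook:Campfire_S%27mores') == "/wiki/Cookbook:Campfire_S'mores"
assert unquote('/wiki/Cookbook:Cozido_%C3%A0_Portuguesa') == '/wiki/Cookbook:Cozido_à_Portuguesa'
```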
|
Python number game with changing feeback
Question: User guesses four digit number and feedback needs to be 'F' if a number is
correct but not in the right place, 'X' if the number is not in the number at
all and if the digit is correct and in the right position it displays the
digit. Code below shows my attempt but it is giving me the error: expected str
instance, list found
from random import randint
def check_char(a, b): #Function making output display F, number and X depending on user input
if a == b:
return randomNumber #Number is displayed when correctly guessed
elif b == randomNumber:
return 'F' #F means number is somewhere in the randomNumber
elif b != randomNumber:
return 'X' #X means number is nowhere in the randomNumber
guessesTaken = 1
randomNumber = [str(randint(1, 9)) for _ in range(4)] # create list of random nums
print(randomNumber)
while guessesTaken < 10:
guesses = list(input("Guess Number: ")) # create list of four digits
check = ''.join([check_char(int(a), int(b)) for a, b in zip(guesses, randomNumber)])
if check == "YYYY": # if check has four Y's we have a correct guess
print("Congratulations, you are correct, it took you", guessesTaken, "guesses.")
break
else:
guessesTaken += 1 # else increment guess count and ask again
print(check)
if guessesTaken == 10:
print("You lose")
Answer: You need to return 'F' instead of 'X' when the user has guessed a
digit that appears in `randomNumber` at another position:
from random import randint
guessesTaken = 1
randomNumber = [str(randint(1, 9)) for _ in range(4)] # create list of random nums
def check_char(a, b): #Function making output display F, number and X depending on user input
if a == b:
return b #Number is displayed when correctly guessed
elif a in randomNumber and a != b: # if
return 'F'
elif a != b:
return 'X'
while guessesTaken < 10:
guesses = list(raw_input("Guess Number: ")) # create list of four digits
check = ''.join([check_char(a, b) for a, b in zip(guesses, randomNumber)])
if check.isdigit(): # if check is all digits, number is guessed
print("Congratulations, you are correct, it took you", guessesTaken, "guesses.")
break
else:
guessesTaken += 1 # else increment guess count and ask again
print(check)
else:
print("You lose")
You don't need to cast as ints in `check_char(a, b)` as strings will compare
just fine `"1" == "1" is True`
You were returning the whole list (`return randomNumber`), not the single
character `b`, so `join` raised an error because it expects strings, not a
list; also, `b` would never be equal to `randomNumber`.
|
Amazon MapReduce with my own reducer for streaming
Question: I wrote a simple map and reduce program in Python to count the
number of words in each sentence, and then group the same counts together.
E.g., suppose sentence 1 has 10 words, sentence 2 has 17 words and sentence 3
has 10 words. The final result will be:
10 \t 2
17 \t 1
The mapper function is:
import sys
import re
pattern = re.compile("[a-zA-Z][a-zA-Z0-9]*")
for line in sys.stdin:
word = str(len(line.split())) # calculate how many words for each line
count = str(1)
print "%s\t%s" % (word, count)
The reducer function is:
import sys
current_word = None
current_count = 0
word = None
for line in sys.stdin:
line = line.strip()
word, count = line.split('\t')
try:
count = int(count)
word = int(word)
except ValueError:
continue
if current_word == word:
current_count += count
else:
if current_word:
print "%s\t%s" % (current_word, current_count)
current_count = count
current_word = word
if current_word == word:
print "%s\t%s" %(current_word, current_count)
I tested on my local machine with the first 200 lines of the file:

    head -n 200 sentences.txt | python mapper.py | sort | python reducer.py

The results are correct. Then I used the Amazon MapReduce streaming service,
and it failed at the reducer step. So I changed the print in the mapper
function to:
print "LongValueSum" + word + "\t" + "1"
This fits into the default aggregate in mapreduce streaming service. In this
case, I don't need the reducer.py function. I get the final results from the
big file sentences.txt. But I don't know why my reducer.py function failed.
Thank you!
Answer: Got it! A "stupid" mistake. When I tested it, I used something like
`python mapper.py`. But for MapReduce streaming, the script needs to be
executable, so just add
#!/usr/bin/env python
at the very beginning (with no space between `#` and `!`).
|
Force Nosetests to Use Python 2.7 instead of 3.4
Question: I've been learning Python using version 3.4. I recently started learning
Web.py so have been using Python 2.7 for that, since web.py not supported in
Python 3.4. I have nose 1.3.4 module installed for both Python 3.4 and 2.7. I
need to run the nosetests command on some Python code written in 2.7 that uses
the Web.py module. However, when I type nosetests command it automatically
uses Python 3.4, so is throwing an error as unable to import the Web.py module
in my Python code. Is there a way to force nosetests to use Python 2.7?
Macbook Pro running OS X Yosemite.
Answer: Try `python2.7 -m nose` instead of running `nosetests`. (Suggested in
a comment by dano.)
|
How to plot error bars in polar coordinates in python?
Question: I have the following problem: I want to plot some data points in polar
coordinates in python, which is easy, using some code like
import numpy as np
import matplotlib.pyplot as plt
r = 1e04 * np.array([5.31,5.29,5.25,5.19,5.09,4.92,4.67,4.27,3.75,3.56])
theta = 2*np.pi/360 * np.array(list(range(0, 100, 10)))
plt.polar(theta, r, "ro")
plt.show()
but I want to add error bars and I don't find any sufficient solution. Is
there already some prebuild matplotlib-code? Or does anyone know how to define
the error bars properly? As I understand it, the r-error is just a straight
line while the theta-error should be a segment of a circle.
Answer: One limitation of `errorbar` is that the caps are drawn with `hline`
and `vline` collections, so the caps do not rotate properly in polar
coordinates (there is an open issue for this,
<https://github.com/matplotlib/matplotlib/issues/441>). An approximate
workaround is to just make the caps have zero size:
import numpy as np
import pylab as plt
fig = plt.figure()
ax = plt.axes(polar=True)
r = np.array([5.31,5.29,5.25,5.19,5.09,4.92,4.67,4.27,3.75,3.56])
theta = 2*np.pi/360 * np.array(list(range(0, 100, 10)))
ax.plot(theta, r, "ro")
ax.errorbar(theta, r, yerr=1, xerr=.1, capsize=0)
plt.show()

If you want the theta error bars to be circular you will have to implement
that your self. The easiest way is
th_err = 1
for th, _r in zip(theta, r):
local_theta = np.linspace(-th_err, th_err, 15) + th
local_r = np.ones(15) * _r
ax.plot(local_theta, local_r, color='k', marker='')
plt.show()
For small errors this won't really make a difference, but will matter for
large errors.
|
Flask-Admin pages inaccessible in production
Question: I am trying to deploy my Flask app using Apache and mod_wsgi on an Ubuntu
server.
It seems to work fine for restful requests that I have implemented, but I
can't access my Flask-Admin pages (which I can access in development).
Here is the structure of my app (simplified for the purpose of this question)
:
- MyApp/
- main.py
- myapp/
- __init__.py
- Views.py
- files/
- wsgi/
myapp.wsgi
When in development, I simply run by using python main.py and everything works
fine.
Here is the wsgi file :
import sys
import os
##Virtualenv Settings
activate_this = '/var/www/code/MyApp/venv/bin/activate_this.py'
execfile(activate_this, dict(__file__=activate_this))
##Replace the standard out
sys.stdout = sys.stderr
##Add this file path to sys.path in order to import settings
sys.path.insert(0, os.path.join(os.path.dirname(os.path.realpath(__file__)), '../..'))
##Add this file path to sys.path in order to import app
sys.path.append('/var/www/code/MyApp/')
from myapp import app as application
Here is the configuration file for Apache, I'm using Apache 2.2 :
<VirtualHost *:443>
WSGIScriptAlias /myapp /var/www/code/MyApp/wsgi/myapp.wsgi
WSGIScriptReloading On
WSGIPassAuthorization On
SSLEngine on
SSLCertificateFile /var/www/code/MyApp/ssl.crt
SSLCertificateKeyFile /var/www/code/MyApp/ssl.key
SSLVerifyClient None
SSLOptions +StdEnvVars
<Directory /var/www/code/MyApp/wsgi>
Order deny,allow
Allow from all
</Directory>
</VirtualHost>
Here is how I instantiate the Flask app in my __ _init_ __.py :
app = Flask(__name__, static_folder='files')
Here is how I create the Admin interface in my main.py :
# Create admin
admin = admin.Admin(app, 'App Administration')
I also have a link to the admin page in my Views.py :
@app.route('/')
def index():
return '<a href="/admin/">Go to admin page</a>'
When in development, I can access the admin interface at mysite.com/admin/ .
In production, I have my app at mysite.com/myapp/ and I can't access the admin
interface, which I expected to be at mysite.com/myapp/admin/ .
I think there's a problem with the way I instantiate flask-admin. I kept the
default "admin/" url but maybe I need to declare a specific url when in
production ?
Thanks for any help.
**EDIT :**
I checked the Apache error log but I don't get any error.
Answer: Did you try to update your route with "myapp"? Looks like you need to update
your route for production. May even be able to drop the '/' on production. Try
it.
@app.route('/')
@app.route('/myapp/')
def index():
    return '<a href="/admin/">Go to admin page</a>'
|
pickle module doesn't work for this simple code
Question: When I run this code in Python 3.4.2 (win7-64) it doesn't work! It
creates the file but nothing is in it (0 bytes). I don't know what the
problem is. Help - Thanks, Windo
import pickle
f=open ("G:\\database.txt","wb")
pickle.dump (12345,f)
Answer: You have to close the file object that you have opened. So just add the line
f.close()
at the end and it will work.
As an alternative, you can also use the `with` statement to open the file,
then it will automatically close the file for you when it's done:
import pickle
with open("G:\\database.txt", "wb") as f:
pickle.dump( 12345, f )
|
Can I use aspects in python without changing a method / function's signature?
Question: I've been using python-aspectlib to weave an aspect to certain methods -
unfortunately this changes the methods signature to `Argspec(args=[],
varargs='args', keywords='kwargs', default=None)`, which creates problems when
working with libraries that depend on `inspect` returning the proper
signature(s).
Is there a way to use python-aspectlib without changing a method's signature?
If not, are there other python aspect libraries that can do this?
I've looked at the decorator module, which explicitly mentions the problem of
changing a method signature:
<http://micheles.googlecode.com/hg/decorator/documentation.html#statement-of-
the-problem> , but I was hoping to find a solution where I don't need to
modify the methods I want to weave (since they are part of a third party
library).
I'm using python 2.7.6
Answer: I've managed to 'fix' this for my specific use case with the following piece
of code:
from decorator import decorator
from Module1 import Class1
from Module2 import Class2
def _my_decorator(func, *args, **kwargs):
#add decorator code here
return func(*args, **kwargs)
def my_decorator(f):
return decorator(_my_decorator, f)
methods_to_decorate = [
'Class1.method1',
'Class2.method2',
]
for method in methods_to_decorate:
exec_str = method + '= my_decorator('+method+'.im_func)'
exec(exec_str)
This probably doesn't handle all of the issues mentioned in the [How you
implement your Python decorator is
wrong](https://github.com/GrahamDumpleton/wrapt/tree/master/blog) blog posts,
but it fulfills the criterion most important to me: correct method
signatures.
|
Looking a instance's class hierarchy in Python
Question: Take `zip` for example. I just want to know whether it is an
`Iterable`, an `Iterator`, or a `Generator`.
so I put this:
zip(x,y).__class__
it prints: zip
Why is the class name lowercase?
import inspect
inspect.getmro(zip)
zip.__base__
It prints: (zip, object)
That's impossible. The Python docs say `zip` returns an `iterator` (or a
`generator`?), but `zip` apparently does not inherit from Iterator or the
like, according to what `getmro` prints.
So, those are my two questions. Thanks for the help.
Answer: Nothing in Python inherits from `Iterator` or `Iterable`, as they embody a
_protocol_. Python looks for the [`__iter__` and `__next__`
methods](https://docs.python.org/3/library/stdtypes.html#typeiter), not for a
specific base class. Any object can be an iterable or iterator, simply by
implementing those methods.
`zip()` is a built-in function defined in C code, and it follows the naming
convention for all built-ins; these are always in lowercase; the type it
returns is not all that relevant here and follows the function name.
The
[`collections.abc.Iterable`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable)
and
[`collections.abc.Iterator`](https://docs.python.org/3/library/collections.abc.html#collections.abc.Iterable)
classes are _abstract base classes_ ; they implement [special
hooks](https://docs.python.org/3/reference/datamodel.html#customizing-
instance-and-subclass-checks) that essentially return `True` if the instance
or subclass against which you are test implement the required methods.
`zip()` is both an iterable (it has an `__iter__` method) and an iterator
(`__iter__` returns the object itself and it has a `__next__` method):
>>> from collections.abc import Iterator, Iterable
>>> zip_instance = zip('')
>>> type(zip_instance)
<class 'zip'>
>>> isinstance(zip_instance, Iterator)
True
    >>> isinstance(zip_instance, Iterable)
True
>>> zip_instance.__iter__
<method-wrapper '__iter__' of zip object at 0x10f2d8348>
>>> zip_instance.__iter__() is zip_instance
True
>>> zip_instance.__next__
<method-wrapper '__next__' of zip object at 0x10f2d8348>
`zip()` is not a generator, as it doesn't have any of the [special generator
methods](https://docs.python.org/3/reference/expressions.html#generator-
iterator-methods):
>>> hasattr(zip_instance, 'send')
False
>>> hasattr(zip_instance, 'throw')
False
>>> hasattr(zip_instance, 'close')
False
|
Python win32com getting the size of email
Question: I'm trying to generate a certain amount of email traffic using Python 3 and
win32com with Outlook. I need to know the size of the emails that I'm creating
to make sure the correct amount of traffic is generated. I've been trying to
use the mailItem.Size property to get this information, but it is always
returning 0. The code I'm using is:
from win32com.client import Dispatch
outlook = Dispatch("Outlook.Application")
message = outlook.createItem(0)
message.To = "[email protected]"
message.Subject = "Test"
message.HtmlBody = "Test of win32com in Python"
message.Attachments.Add(os.getcwd() + "\attachment.jpg")
print(message.Size)
message.Send
And all I get as output is 0. Regardless of whether I vary the body text or
remove the attachment. However, if I get the size of each component
individually, the Size property returns normally. But, the added up size isn't
the same as if I read from the Sent Items folder after waiting a few seconds.
Should I not be using the Size property of the mailItem?
Answer: You need to call MailItem.Save before reading the Size property.
|
Get the name of a python variable
Question:
def f(s)
print <name of s> = s
I wish to output "hello=10" for `f(hello)`, given that the variable hello has
value 10.
The problem is how to get the variable name of the variable, i.e., `<name of
s>`?
This is for debugging purpose. Say given a new statement `s=2`
`f(s)` before the new statement will print `s=0` if `s` is initially 0, `f(s)`
after the new statement will print `s=2`.
I can easily write:
    def f(name, value):
print "%s=%s"%(name,str(value))
and use it as `f("s",s)`, but that would need me to input two arguments, which
is more cumbersome.
Answer: I am not sure if it is actually worth it, but using the information from
frames for simple function calls with positional arguments you can do
something like this:
import inspect
def hello(s1, s2, s3, s4):
args = inspect.getargspec(hello).args
line = inspect.getouterframes(inspect.currentframe())[1][4][0]
actual_args = map(str.strip, line[line.index('(')+1: line.index(')')].split(','))
for x, y in zip(args, actual_args):
print "value of {} is {}".format(y, locals()[x])
a, b, c, d = 1, 2, 3, 4
hello(a, b, c, d)
### output:
value of a is 1
value of b is 2
value of c is 3
value of d is 4
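For later readers: since Python 3.8, the f-string `=` specifier does exactly
what the question asks for, with no frame inspection at all:

```python
s = 0
print(f"{s=}")   # prints: s=0
s = 2
print(f"{s=}")   # prints: s=2
```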
|
Parse json file with ironpython 2.5
Question: I am using IronPython 2.5 (inside TIBCO Spotfire) and would like to parse a
json file.
The json library is not available in this version of IronPython. simplejson
doesn't work either. Is there another library i can use for this? It can be
.Net or Python, doesn't matter.
Thanks in advance!
Answer: I have also done it with `JavaScriptSerializer`. As follows:
peopleJson = '''[
{"name":"Jon", "age": "29", canSing:false, favColors:["red", "white"]},
{"name":"Lilly", "age": "55", "canSing": true}
]'''
#deserialize the JSON string with .NET's JavaScriptSerializer
import clr
clr.AddReference('System.Web.Extensions')
from System.Web.Script.Serialization import JavaScriptSerializer
people = JavaScriptSerializer().DeserializeObject(peopleJson)
for person in people:
# each line is a dictionary
print person['name'] + ' ' + person['age']
The output is:
Jon 29
Lilly 55
|
Extracting unsigned char from array of numpy.uint8
Question: I have code to extract a numeric value from a python sequence, and it works
well in most cases, but not for a numpy array.
When I try to extract an unsigned char, I do the following
unsigned char val = boost::python::extract<unsigned char>(sequence[n]);
where sequence is any python sequence and n is the index. I get the following
error:
TypeError: No registered converter was able to produce a C++ rvalue of type
unsigned char from this Python object of type numpy.uint8
How can I successfully extract an unsigned char in C++? Do I have to
write/register special converters for numpy types? I would rather use the same
code that I use for other python sequences, and not have to write special code
that uses the `PyArrayObject*`.
Answer: One can register a custom from-python converter with Boost.Python that handles
conversions from NumPy array scalars, such as `numpy.uint8`, to C++ scalars,
such as `unsigned char`. A custom from-python converter registration has three
parts:
* A function that checks if a `PyObject` is convertible. A return of `NULL` indicates that the `PyObject` cannot use the registered converter.
* A construct function that constructs the C++ type from a `PyObject`. This function will only be called if `convertible(PyObject)` does not return `NULL`.
* The C++ type that will be constructed.
Extracting the value from the NumPy array scalar requires a few NumPy C API
calls:
* [`import_array()`](http://docs.scipy.org/doc/numpy/reference/c-api.array.html#import_array) must be called within the initialization of an extension module that is going to use the NumPy C API. Depending on how the extension(s) are using the NumPy C API, other requirements for importing may need to occur.
* [`PyArray_CheckScalar()`](http://docs.scipy.org/doc/numpy/reference/c-api.array.html#PyArray_CheckScalar) checks if a `PyObject` is a NumPy array scalar.
* [`PyArray_DescrFromScalar()`](http://docs.scipy.org/doc/numpy/reference/c-api.array.html#PyArray_DescrFromScalar) gets the [data-type-descriptor](http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html#PyArray_Descr) object for an array scalar. The data-type-descriptor object contains information about how to interpret the underlying bytes. For example, its [`type_num`](http://docs.scipy.org/doc/numpy/reference/c-api.types-and-structures.html#PyArray_Descr.type_num) data member contains an [enum value](http://docs.scipy.org/doc/numpy/reference/c-api.dtype.html#enumerated-types) that corresponds to a C-type.
* [`PyArray_ScalarAsCtype()`](http://docs.scipy.org/doc/numpy/reference/c-api.array.html#PyArray_ScalarAsCtype) can be used to extract the C-type value from a NumPy array scalar.
* * *
Here is a complete example demonstrating using a helper class,
`enable_numpy_scalar_converter`, to register specific NumPy array scalars to
their corresponding C++ types.
#include <boost/cstdint.hpp>
#include <boost/python.hpp>
#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
#include <numpy/arrayobject.h>
// Mockup functions.
/// @brief Mockup function that will explicitly extract a uint8_t
/// from the Boost.Python object.
boost::uint8_t test_generic_uint8(boost::python::object object)
{
return boost::python::extract<boost::uint8_t>(object)();
}
/// @brief Mockup function that uses automatic conversions for uint8_t.
boost::uint8_t test_specific_uint8(boost::uint8_t value) { return value; }
/// @brief Mockup function that uses automatic conversions for int32_t.
boost::int32_t test_specific_int32(boost::int32_t value) { return value; }
/// @brief Converter type that enables automatic conversions between NumPy
/// scalars and C++ types.
template <typename T, NPY_TYPES NumPyScalarType>
struct enable_numpy_scalar_converter
{
enable_numpy_scalar_converter()
{
// Required NumPy call in order to use the NumPy C API within another
// extension module.
import_array();
boost::python::converter::registry::push_back(
&convertible,
&construct,
boost::python::type_id<T>());
}
static void* convertible(PyObject* object)
{
// The object is convertible if all of the following are true:
// - is a valid object.
// - is a numpy array scalar.
// - its descriptor type matches the type for this converter.
return (
object && // Valid
PyArray_CheckScalar(object) && // Scalar
PyArray_DescrFromScalar(object)->type_num == NumPyScalarType // Match
)
? object // The Python object can be converted.
: NULL;
}
static void construct(
PyObject* object,
boost::python::converter::rvalue_from_python_stage1_data* data)
{
// Obtain a handle to the memory block that the converter has allocated
// for the C++ type.
namespace python = boost::python;
typedef python::converter::rvalue_from_python_storage<T> storage_type;
void* storage = reinterpret_cast<storage_type*>(data)->storage.bytes;
// Extract the array scalar type directly into the storage.
PyArray_ScalarAsCtype(object, storage);
// Set convertible to indicate success.
data->convertible = storage;
}
};
BOOST_PYTHON_MODULE(example)
{
namespace python = boost::python;
// Enable numpy scalar conversions.
enable_numpy_scalar_converter<boost::uint8_t, NPY_UBYTE>();
enable_numpy_scalar_converter<boost::int32_t, NPY_INT>();
// Expose test functions.
python::def("test_generic_uint8", &test_generic_uint8);
python::def("test_specific_uint8", &test_specific_uint8);
python::def("test_specific_int32", &test_specific_int32);
}
Interactive usage:
>>> import numpy
>>> import example
>>> assert(42 == example.test_generic_uint8(42))
>>> assert(42 == example.test_generic_uint8(numpy.uint8(42)))
>>> assert(42 == example.test_specific_uint8(42))
>>> assert(42 == example.test_specific_uint8(numpy.uint8(42)))
>>> assert(42 == example.test_specific_int32(numpy.int32(42)))
>>> example.test_specific_int32(numpy.int8(42))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
Boost.Python.ArgumentError: Python argument types in
example.test_specific_int32(numpy.int8)
did not match C++ signature:
test_specific_int32(int)
>>> example.test_generic_uint8(numpy.int8(42))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: No registered converter was able to produce a C++ rvalue of type
unsigned char from this Python object of type numpy.int8
A few things to note from the interactive usage:
* Boost.Python was able to extract `boost::uint8_t` from both `numpy.uint8` and `int` Python objects.
* The `enable_numpy_scalar_converter` does not support promotions. For instance, it should be safe for `test_specific_int32()` to accept a `numpy.int8` object that is promoted to a larger scalar type, such as `int`. If one wishes to perform promotions:
* `convertible()` will need to check for compatible `NPY_TYPES`
* `construct()` should use [`PyArray_CastScalarToCtype()`](http://docs.scipy.org/doc/numpy/reference/c-api.array.html#PyArray_ScalarAsCtype) to cast the extracted array scalar value to the desired C++ type.
|
Displaying the value of a variable in PyClips
Question: **I have been trying to print the value of a variable in PyClips without any
success.Any help would be appreciated. Here is the code.**
**Instead of "Are you observative" it prints "Are you ?name"**
def clips_raw_input(prompt):
return clips.String(raw_input(prompt))
clips.RegisterPythonFunction(clips_raw_input, "input")
clips.Assert("(quality observative) ")
clips.Assert("(quality communicative) ")
clips.Assert("(quality emotionally-stable) ")
clips.Assert("(has human true)")
r1 = clips.BuildRule(
"what-are-qualities",
"""(quality ?name)
(not (has ?name ?))""",
"""(bind ?response (python-call input Are you ?name))
(assert (has ?name ?response))""")
**Any help will be appreciated**
Answer: Build the prompt with `str-cat` so the value of `?name` is concatenated into a single string before it is passed to the Python function:
>>> import clips
>>> def clips_raw_input(prompt):
... return clips.String(raw_input(prompt))
...
>>> clips.RegisterPythonFunction(clips_raw_input, "input")
>>> r1 = clips.BuildRule(
... "what-are-qualities",
... """(quality ?name)
... (not (has ?name ?))""",
... """(bind ?response (python-call input (str-cat "Are you " ?name "? ")))
... (assert (has ?name ?response))""")
>>> clips.Reset()
>>> clips.Assert("(quality observative) ")
<Fact 'f-1': fact object at 0x10450b330>
>>> clips.Assert("(quality communicative) ")
<Fact 'f-2': fact object at 0x10450b360>
>>> clips.Assert("(quality emotionally-stable) ")
<Fact 'f-3': fact object at 0x10450b3f0>
>>> clips.Assert("(has human true)")
<Fact 'f-4': fact object at 0x10450b450>
>>> clips.Run()
Are you emotionally-stable? yes
Are you communicative? no
Are you observative? yes
3
>>> clips.PrintFacts()
f-0 (initial-fact)
f-1 (quality observative)
f-2 (quality communicative)
f-3 (quality emotionally-stable)
f-4 (has human true)
f-5 (has emotionally-stable "yes")
f-6 (has communicative "no")
f-7 (has observative "yes")
For a total of 8 facts.
>>>
|
QtMediaPlayer issue
Question: I am working with PyQt5 and trying to play a video on my Python application.
I am using Python 3.4.0 and PyQt 5.2.1 and running the application on Ubuntu
14.04, but it is important to keep the code cross-platform.
When I run these codes, I get the error of
`defaultServiceProvider::requestService(): no service found for - "org.qt-
project.qt.mediaplayer"`
My codes:
from PyQt5.QtCore import *
from PyQt5.QtWidgets import *
import PyQt5.QtMultimedia as M
class Form(QWidget):
def __init__(self, parent=None):
super(Form, self).__init__(parent)
self.setWindowTitle("IKA Control Panel")
url= QUrl.fromLocalFile("./some.mp3")
content= M.QMediaContent(url)
self.player = M.QMediaPlayer(self)
self.player.setMedia(content)
self.player.play()
self.player.stateChanged.connect( app.quit )
if __name__ == '__main__':
import sys
app = QApplication(sys.argv)
screen = Form()
screen.show()
sys.exit(app.exec_())
How can I solve this problem?
Answer: Install Qt5 plugins if you are using Ubuntu
sudo apt-get install libqt5multimedia5-plugins
|
How do I create a button in Python Tkinter to increase integer variable by 1 and display that variable?
Question: I am trying to create a Tkinter program that will store an int variable, and
increase that int variable by 1 each time I click a button, and then display
the variable so I can see that it starts out as 0, and then each time I click
the button it goes up by 1. I am using python 3.4.
import sys
import math
from tkinter import *
root = Tk()
root.geometry("200x200")
root.title("My Button Increaser")
counter = 0
def nClick():
counter + 1
def main_click():
mLabel = Label(root, text = nClick).pack()
mButton1 = Button(text = "Increase", command = main_click, fg = "dark green", bg = "white").pack()
root.mainloop()
Answer: You could use Tkinter variables. They are specially useful when you need to
modify a data that other widgets might interact with. Here is a look alike
code to the one in the question, but instead of defining `counter` as a normal
variable, it is a variable from Tkinter.
import tkinter
import sys
root = tkinter.Tk()
root.geometry("200x200")
root.title("His Button Increaser")
counter = tkinter.IntVar()
def onClick(event=None):
counter.set(counter.get() + 1)
tkinter.Label(root, textvariable=counter).pack()
tkinter.Button(root, text="Increase", command=onClick, fg="dark green", bg = "white").pack()
root.mainloop()
Instead of passing the value this variable holds to the `text` attribute of
the `Label`, we assign the variable to `textvariable` attribute, so when the
value of the variable gets updated, `Label` would update the displayed text
accordingly.
When you want to change the value of the variable, you'd need to call the
`set()` method of the variable object (see `onClick`) instead of assigning the
value directly to it.
|
How would I separate my Python File to multiple plugins?
Question: So first thing I want to say: I have been looking into modules and such, I
just don't quite know how I would rewrite my code to fit this in.
Project: What I have is a Skype bot using the Skype4Py module. I have about 11
commands, and I've noticed the one script is getting a little large.
I'm trying to think about how to link one main.py file to a Plugin Folder,
which contains each and every bot function in it's own respectable Python
file. It sounds simple an all, except when it comes to how the function is
called.
Here is just a basic look at my Skype bot, missing some of the larger
functions.
import Skype4Py, random
class SkypeBot():
def __init__(self):
self.skype = Skype4Py.Skype()
if self.skype.Client.IsRunning == False:
self.skype.Client.Start()
self.skype.Attach()
self.results = ['Yes', 'No', 'Maybe', 'Never']
def main(self):
print ' Skype Bot currently running on user: %s' % self.skype.CurrentUserHandle
print "\n\nCommands Called:\n"
while True:
self.skype.OnMessageStatus = self.RunFunction
def RunFunction(self, Message, Status):
if Status == 'SENT' or Status == 'RECEIVED':
cmd = Message.Body.split(' ')[0]
if cmd in self.functions.keys():
self.context = Message
self.caller = self.context.FromHandle
self.functions[cmd](self)
def ping(self):
print " %s : Ping" % self.caller
self.context.Chat.SendMessage('Pong')
def say(self):
try:
response = self.context.Body.split(' ', 1)
if response[1] == "-info":
print " %s : say -info" % self.caller
self.context.Chat.SendMessage("Resends the message entered. \n"
"Usage: !say Hello. \n"
"Example: Bot: Hello.")
else:
say = response[1]
print " %s : Say [%s]" % (self.caller, say)
self.context.Chat.SendMessage(say)
except:
self.context.Chat.SendMessage("Please use -info to properly use the !say command")
def eightball(self):
try:
question = self.context.Body.split(' ', 1)
if question[1] == "-info":
print " %s : 8Ball -info" % self.caller
self.context.Chat.SendMessage("Responds with an answer.\n"
"Usage: !8ball 'Do I have swag?'\n"
"Example: !8Ball Response: 'Yes'")
else:
random.shuffle(self.results)
answer = self.results[3]
print " %s : 8Ball [%s]" % (self.caller, question[1])
self.context.Chat.SendMessage("!8Ball Response: %s" % answer)
except:
self.context.Chat.SendMessage("Please use -info to properly use the !8ball command")
#FUNCTIONS LIST
#********************
functions = {
"!ping": ping,
"!say": say,
"!8ball": eightball,
}
if __name__ == "__main__":
snayer = SkypeBot()
snayer.main()
So basically, what I am wondering, how can I change
self.skype.OnMessageStatus = self.RunFunction
so that it'll run functions from another file?
Answer: For a program of this size it's not really _necessary_ to put your command
functions into separate files, but I guess it _is_ good organization. And good
practice for when you write a program that has thousands of lines of code. :)
One way to do this is to create a basic SkypeBot class without any command
methods and then import the command methods from your plugins directory and
add them to the class. It's easy enough to add new attributes to an existing
class, and it doesn't matter if the new attributes are properties or methods,
the syntax to add them is identical. (With a tiny bit more work it's even
possible to add new attributes to an instance, so you can have multiple
instances, each with their own individual set of commands. But I guess that's
not necessary here, since a program that uses the SkypeBot class will normally
only create a single instance).
So we can break your question into two parts:
1. How to add methods to an existing class.
2. How to import those methods from other source files.
As I said, 1) is easy. 2) is quite easy as well, but I've never done it
before, so I had to do a little bit of research and testing, and I can't
promise that what I've done is best practice, but it works. :)
I don't know much about Skype, and I don't have that Skype4Py module, and as
you said, the code above is not the complete program, so I've written some
fairly simple code to illustrate the process of adding plugin methods from
separate files to an existing class.
The name of the main program is "plugin_demo.py". To keep things neat, it
lives in its own directory, "plugintest/", which you should create somewhere
in your Python path (eg where you normally keep your Python programs). This
path **must** be specified in your PYTHONPATH environment variable.
"plugintest/" has the following structure:
plugintest/
__init__.py
plugin_demo.py
plugins/
__init__.py
add.py
multiply.py
The `__init__.py` files are used by Python's `import` machinery to let it know
that a directory contains a Python package, see [6.4.
Packages](https://docs.python.org/2/tutorial/modules.html#packages) in the
Python docs for further details.
Here are the contents of those files. Firstly, the files that go into
"plugintest/" itself:
**__init__.py**
__all__ = ['plugin_demo', 'plugins']
from plugintest import *
**plugin_demo.py**
#! /usr/bin/env python
#A simple class that will get methods added later from plugins directory
class Test(object):
def __init__(self, data):
self.data = data
def add_plugins(cls):
import plugins
print "Adding plugin methods to %s class" % cls.__name__
for name in plugins.__all__:
print name
plug = getattr(plugins, name)
print plug
method = getattr(plug, name)
print method
setattr(cls, name, method)
print
print "Done\n"
add_plugins(Test)
def main():
#Now test it!
t = Test([1, 2, 3]); print t.data
t.multiply(10); print t.data
t.add(5); print t.data
if __name__ == '__main__':
main()
And now the contents of the "plugintest/plugins/" directory:
**__init__.py**
__all__ = ['add', 'multiply']
from plugintest.plugins import *
**add.py**
#A method for the Test class of plugin_demo.py
def add(self, m):
self.data = [m + i for i in self.data]
**multiply.py**
#A method for the Test class of plugin_demo.py
def multiply(self, m):
self.data = [m * i for i in self.data]
If you `cd` to the directory containing the "plugintest/" folder, you should
be able to run it with
`python plugintest/plugin_demo.py`
and if you `cd` to "plugintest/" itself
`python plugin_demo.py`
Also, in the interpreter (or another Python program), you should be able to do
`import plugintest`
and then run the `main()` function of "plugin_demo.py" with
`plugintest.plugin_demo.main()`
The other usual variations of `from ... import ...` etc should also work as
expected.
The function in "plugin_demo.py" that performs the magic of adding the
imported methods to the `Test` class is `add_plugins()`. When it runs it
prints out each method name, its module, and its function. This could be handy
during development, but you'd probably comment out some of those print
statements once the program's working properly.
I hope this helps, and if you have any questions please don't hesitate to ask.
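The core mechanism that `add_plugins()` relies on, attaching plain functions to an existing class with `setattr`, also works without the package scaffolding. A condensed sketch (the `Bot`, `ping`, and `echo` names are made up for illustration):

```python
class Bot(object):
    """A class that starts out with no command methods."""
    pass

# Plain functions, as they might be written one per plugin file.
def ping(self):
    return "Pong"

def echo(self, text):
    return text

# Register each function as a method on the class; existing and
# future instances pick the methods up immediately.
for func in (ping, echo):
    setattr(Bot, func.__name__, func)

bot = Bot()
reply = bot.ping()  # "Pong"
```

Because the functions take `self` as their first argument, Python binds them like ordinary methods once they are attached to the class.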
|
Write a very basic copy from one file into another using command line arguments
Question: I am new to Python and trying to make a basic program that copies one
file into another. My code right now is
import sys
if len(sys.argv) !=3:
print 'usage: filecopy source destination'
else:
try:
infile = open(sys.argv[1])
outfile = open(sys.argv[2], 'w')
except IOError:
print 'source file does not exist'
getline(infile, line)
infile.close()
outfile.close()
As you can hopefully see I am trying to output why the program won't work if
the user tries to use it wrong.
I recently wrote a c++ program doing the same thing as this, and it worked
just fine, but now I have to transfer the same logic into a different syntax.
Answer: > I am trying to write a line of infile into the string line and then write
> that into an output file.
Don't try to "write C++" in Python. For the task at hand:
import sys
if len(sys.argv) !=3:
print('usage: filecopy source destination')
else:
try:
with open(sys.argv[1], 'r') as inf, open(sys.argv[2], 'w') as outf:
for line in inf:
outf.write(line)
except IOError:
print('{} does not exist or cannot be read'.format(sys.argv[1]))
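If the lines don't need any processing on the way through, the copy itself can also be delegated to the standard library (a sketch; `shutil.copyfile` raises `IOError`/`OSError` itself when the source can't be read):

```python
import shutil
import sys

def copy_file(source, destination):
    # Byte-for-byte copy, no manual line loop needed.
    shutil.copyfile(source, destination)

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print('usage: filecopy source destination')
    else:
        try:
            copy_file(sys.argv[1], sys.argv[2])
        except IOError:
            print('{} does not exist or cannot be read'.format(sys.argv[1]))
```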
|
The 'Hangman' game python code testing
Question: I have written this code for the Hangman game in which the opponent is the
computer. But I keep getting errors that I do not know how to solve. Please
take a look for me. For example, my current error is:
Traceback (most recent call last):
File "C:\Users\User\Desktop\Python code\hangmangame_test.py", line 112, in <module>
hangman(a)
File "C:\Users\User\Desktop\Python code\hangmangame_test.py", line 91, in hangman
if isWordGuessed(secretWord, lettersGuessed) == "True":
File "C:\Users\User\Desktop\Python code\hangmangame_test.py", line 12, in isWordGuessed
if i in lettersGuessed:
TypeError: argument of type 'NoneType' is not iterable
* * *
import string
def isWordGuessed(secretWord, lettersGuessed):
'''
secretWord: string, the word the user is guessing
lettersGuessed: list, what letters have been guessed so far
returns: boolean, True if all the letters of secretWord are in lettersGuessed;
False otherwise
'''
# FILL IN YOUR CODE HERE...
new = ""
for i in secretWord:
if i in lettersGuessed:
new += i
if new == secretWord:
return True
else:
return False
def getGuessedWord(secretWord, lettersGuessed):
'''
secretWord: string, the word the user is guessing
lettersGuessed: list, what letters have been guessed so far
returns: string, comprised of letters and underscores that represents
what letters in secretWord have been guessed so far.
'''
# FILL IN YOUR CODE HERE...
result = list(secretWord)
for i in result:
if i not in lettersGuessed:
result[result.index(i)] = " _ "
transtring = ''.join(result)
return transtring
def getAvailableLetters(lettersGuessed):
'''
lettersGuessed: list, what letters have been guessed so far
returns: string, comprised of letters that represents what letters have not
yet been guessed.
'''
# FILL IN YOUR CODE HERE...
Alletters = string.ascii_lowercase
result = list(Alletters)
for i in lettersGuessed:
if i in result:
result.remove(i)
transtring = ''.join(result)
return transtring
def hangman(secretWord):
'''
secretWord: string, the secret word to guess.
Starts up an interactive game of Hangman.
* At the start of the game, let the user know how many
letters the secretWord contains.
* Ask the user to supply one guess (i.e. letter) per round.
* The user should receive feedback immediately after each guess
about whether their guess appears in the computers word.
* After each round, you should also display to the user the
partially guessed word so far, as well as letters that the
user has not yet guessed.
Follows the other limitations detailed in the problem write-up.
'''
# FILL IN YOUR CODE HERE...
print("Welcome to the Hangman game!")
print('\n')
print("My word has " + str(len(secretWord)) + " letters!")
guesses = 8 # No. of guesses
lettersGuessed = [] # Creating empty list
Alletters = string.ascii_lowercase # String containing all the lowercase letters
while guesses > 0: # Game starts
print("You have " + str(guesses) + " guesses left!")
print("Available letters: " + str(Alletters))
letters = input("Please guess a letter: ")
if type(letters) != str:
print("Invalid input! please enter one letter!")
else:
letterslower = letters.lower() # Transfering input into lowercase
lettersGuessed = lettersGuessed.append(letterslower) # Inserting inputs into a list
if letterslower not in Alletters:
print("Opps! you have already guessed that letter: " + getGuessedWord(secretWord, lettersGuessed))
else:
if isWordGuessed(secretWord, lettersGuessed) == "True":
print("Congradualation! you won!")
else:
print("Good guess: " + getGuessedWord(secretWord, lettersGuessed))
guesses -= 1
Alletters = getAvailableLetters(lettersGuessed)
print("You have ran out of guess, the word is " + str(secretWord))
# a = "puck"
# b = ["f", "a", "c", "t"]
# print(isWordGuessed(a, b))
# print(getGuessedWord(a, b))
# print(getAvailableLetters(b))
a = "junior"
hangman(a)
Answer: Line 87: use `lettersGuessed.append(letterslower)` instead of `lettersGuessed =
lettersGuessed.append(letterslower)`.
`append` modifies the list in place and returns `None`, so the reassignment sets
`lettersGuessed` to `None`, which is why `isWordGuessed` later fails with
"argument of type 'NoneType' is not iterable". (Note also that
`isWordGuessed(...) == "True"` compares a boolean to a string; write
`if isWordGuessed(secretWord, lettersGuessed):` instead.)
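A quick demonstration of the root cause: `list.append` mutates the list in place and returns `None`:

```python
letters = []
result = letters.append('a')  # append modifies the list in place...
assert result is None         # ...and returns None, not the list
letters.append('b')           # correct usage: call it without reassigning
assert letters == ['a', 'b']
```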
|
How to detect what element of a nested list has changed? (python)
Question: I have a large 2D (list of lists) list, each element containing a list of
ints, strings and dicts. I would like to be able to determine the 'path' (eg.
[2][3][2]["items"][2] at the worst!) of any element that is modified, and have
this fire upon modification. This list is too big to scan through and see what
has changed! Ideally, I also want a copy of the new element although this can
be found out later.
My first attempt was to create a class, and override its `__setattr__` method:
class Notify():
def __setattr__(self, name, value):
self.__dict__[name] = value #Actually assign the value
print name, value #This will tell us when it fires
However, the `__setattr__` method only fires when setting a variable that is
not accessed by an index (or key), because the call seems to be delegated to
the contained list()/dict() object, not to our class.
>>> test = Notify()
>>> test.var = 1 #This is detected
var 1
>>> test.var = [1,2,3] #Now let's set it to a list
var [1, 2, 3]
>>> test.var[0] = 12 #But when we assign a value via an index, it doesn't fire
>>> test.var
[12, 2, 3] #However it still assigns the value, so it must be talking to the list itself!
So, to summarise, I want some method (any method, really) that tells me at what
location (list of indexes/keys) a change has occurred, and this needs to happen
as it occurs, because it is too expensive to scan through the whole list. I
also can't rely on the code that modifies the list to provide the details. If
this is not possible down to the nth nested list, I could use something that
only gave the first two indexes, as the data there would not be too large to
scan. Thank you in advance for your help!
EDIT: Still no joy, though this question [Track changes to lists and
dictionaries in python?](https://stackoverflow.com/questions/8858525/track-
changes-to-lists-and-dictionaries-in-python) seems to be close to what I need.
Unfortunately, I'm not very good with classes and need the help of someone who
is!
EDIT: Had a look at this [Python: Right way to extend
list](https://stackoverflow.com/questions/21196912/python-right-way-to-extend-
list?rq=1) got me thinking that inheriting `list` might be a bad idea. I have
come up with the following code using a proxy class instead. However the
original problem remains, that modifications to nested lists don't log. Would
class composition, not inheritance be a good idea?
from UserList import UserList
class NotifyList(UserList):
def __init__(self, initlist=None):
self.data = []
if initlist is not None:
if type(initlist) is list:
self.data[:] = initlist
elif isinstance(initlist, NotifyList):
self.data[:] = initlist.data[:]
else:
self.data = list(initlist)
def __setitem__(self, key, item):
if type(item) is list:
self.data[key] = NotifyList(item)
else:
self.data[key] = item
print key, item
def append(self, item):
if type(item) is list:
self.data.append(NotifyList(item))
else:
self.data.append(item)
print self.index(item), item
Answer: You need to create a reporting chain within your (trackable) list of trackable
lists, where each list reports modifications to its parent. In your
`NotifyList` class, add to the constructor an argument for the parent, and an
argument for the ID by which the parent will know the new item -- when the
parent is a list, this will be a list index:
class NotifyList(UserList):
def __init__(self, inilist=None, parent=None, id=None):
self.parent = parent
self.id = id
# remainder of __init__()...
When a modification happens, the parent should be notified. For example, in
`__setitem__`:
def __setitem__(self, key, item):
if type(item) is list:
self.data[key] = NotifyList(item, self, str(key)) # Don't forget the new item's parent
else:
self.data[key] = item
self.alertParent(str(key), str(item)) # Report up the chain instead of printing
`alertParent()` is:
def alertParent(self, key, item):
strChange = "[{0}] = {1}".format(key, item)
self.parent.notifyChange(self.id, strChange)
How does `notifyChange()` work?
def notifyChange(self, childKey, strChangeInChild):
strChange = "[{0}]{1}".format(childKey, strChangeInChild)
self.parent.notifyChange(self.id, strChange)
It just propagates the notification up the chain, adding its own ID to the
message.
The only missing link is, what happens at the top of the reporting chain? The
change message should finally be printed. Here is an easy trick to accomplish
this by reusing `alertParent()`:
def alertParent(self, key, item):
if self.parent is None: # I am the root
print "[{0}]{1}".format(key, item)
else:
# remainder of alertParent() shown above...
...
def notifyChange(self, childKey, strChangeInChild):
if self.parent is None: # I am the root
self.alertParent(childKey, strChangeInChild) # Actually just prints a change msg
else:
# remainder of notifyChange() shown above...
I coded this up, the complete version is available
[here](https://drive.google.com/file/d/0B3cAvQF-
GRG3QXRqOFJyb2dJemc/view?usp=sharing) [Google Doc] (there are a couple of
trivial bug fixes with respect to what I have presented above). In action:
>>> from test import NotifyList
>>> top = NotifyList([0]*3, None, None) # Now we have [0 0 0]
>>> print top
NList-[0, 0, 0]
>>> top[0] = NotifyList([0]*3, top, 0) # Now we have [ [0 0 0] 0 0 ]
[0] = NList-[0, 0, 0] #-------------- The tracking msg is fired
>>> print top
NList-[<test.NotifyList object at 0x0000000002163320>, 0, 0]
>>> top[0][1] = NotifyList([0]*3, top[0], 1) # Now we have [ [[0 0 0] 0 0] 0 0 ]
[0][1] = NList-[0, 0, 0] #----------- The tracking msg fired again
>>> top[0][1][2] = "this is a string" # Now we have [ [[0 0 "this is a string] 0 0] 0 0 ]
[0][1][2] = this is a string #------- And another tracking msg
>>> print top[0][1][2]
this is a string
>>> print top[0][1]
NList-[0, 0, 'this is a string']
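For reference, the whole reporting chain fits in a short, self-contained sketch (simplified from the full version linked above; it subclasses `list` directly and collects change messages at the root instead of printing them):

```python
class NotifyList(list):
    """Minimal sketch: a list that reports index assignments up a parent chain."""
    def __init__(self, initlist=None, parent=None, id=None):
        super(NotifyList, self).__init__(initlist or [])
        self.parent = parent
        self.id = id
        self.changes = []  # only used by the root of the chain

    def __setitem__(self, key, item):
        # Wrap plain lists so changes inside them are also tracked.
        if isinstance(item, list) and not isinstance(item, NotifyList):
            item = NotifyList(item, self, key)
        super(NotifyList, self).__setitem__(key, item)
        self._notify("[{0}] = {1}".format(key, item))

    def _notify(self, msg):
        if self.parent is None:
            self.changes.append(msg)  # root: record the change
        else:
            self.parent._notify("[{0}]{1}".format(self.id, msg))

top = NotifyList([0, 0, 0])
top[0] = [0, 0]       # recorded as "[0] = [0, 0]"
top[0][1] = "hello"   # recorded as "[0][1] = hello"
```

This only tracks `__setitem__`; `append`, `extend`, slice assignment and dict values would need the same treatment, as in the full version.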
|
Slicing array in regions - Python
Question: I have to "divide" an `n x m` array into regions using a mask input.
For example, suppose I have a `20 x 20` array. My mask is the following (`5 x
5`) -- always:

where the numbers represent the regions in which the cells take part. This
mask is not an input, that is just an `ndarray`. This mask just represent how
I should slice my `20 x 20` at every `5 x 5` neighborhood.
For example, the first region comprises the indices:
> (0,0), (1,0), (1,1), (2,0), (2,1), (2,2)
For each `5 x 5` neighborhood of my `20 x 20` array, I should return the values
that are in each of the `8` regions.
I know how to do that with a "standard code", but I wondering if there is a
_Pythonic_ way of do that, possible with a concise code.
As a code example, I could do something like:
def slice_in_regions(data, x_dim, y_dim):
for x in xrange(0, x_dim, 5):
for y in xrange(0, y_dim, 5):
neighbors = data[x:x+5, y:y+5]
region1 = [neighbors[0,0], neighbors[1,0], neighbors[1,1], neighbors[2,0], neighbors[2,1], neighbors[2,2]]
# region2, region3...
However, that doesn't seem to be a good way to do it. Moreover, I'm counting
on my data's dimensions being multiples of `5`.
Thank you.
Answer: It seems you could probably just resize your mask, e.g. if you're already
using `numpy`,
mask = mask.repeat(4, axis=0).repeat(4, axis=1)
# Then you apply the mask using
values = data[mask]
Otherwise,
import numpy as np
mask = np.repeat(mask, 4, axis=0).repeat(4, axis=1)
# Then you apply the mask using
values = np.array(data)[mask]
**Individual regions**
If you need to access each region individually, you could precede the previous
by using a labelled mask; as the labels will be grown into labelled regions
you can then use, e.g.
values = [ data[mask==l] for l in range(1, mask.max()+1)]
Here values will be a list of arrays where each item corresponds to a labelled
region in `mask`.
**Generating the labelled mask**
For completeness, to get from a binary mask to a labelled mask where every on
pixel has its own label you could use `scipy.ndimage.label` (which returns the
labelled array together with the number of features found)
mask, num_labels = ndimage.label(mask, [[0,0,0],[0,1,0],[0,0,0]])
or if using a region labelling function is overkill, you can achieve a similar
result using
mask[mask] = range(1,mask.sum()+1)
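One caveat worth noting: `repeat` grows each mask cell into a block, which suits scaling the mask up. If instead the same `5 x 5` label pattern should be stamped onto every `5 x 5` neighborhood, `np.tile` is the operation to reach for (a sketch using a hypothetical `2 x 2` mask on a `4 x 4` array to keep the numbers small):

```python
import numpy as np

# Hypothetical small labelled mask and data array for illustration.
mask = np.array([[1, 1],
                 [2, 2]])
data = np.arange(16).reshape(4, 4)

# Stamp the mask onto every 2 x 2 neighborhood of the data.
tiled = np.tile(mask, (2, 2))  # shape (4, 4)

# Collect the values belonging to each labelled region.
regions = [data[tiled == label] for label in range(1, tiled.max() + 1)]
```

Which of the two is right depends on whether the mask describes one region layout per neighborhood (tile) or a coarse layout over the whole array (repeat).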
|
Maya 2015 PyQt AttributeError
Question:
import maya.OpenMaya as om
import maya.OpenMayaUI as omUI
import sip
from PyQt4 import QtGui, QtCore, uic
import rpIcons_rc
import maya.cmds as cmds
import maya.mel as mel

def getMayaWindow():
    # 'Get the maya main window as a QMainWindow instance'
    ptr = omUI.MQtUtil.mainWindow()
    return sip.wrapinstance(long(ptr), QtCore.QObject)

def toQtObject(mayaName):
    '''
    Given the name of a Maya UI element of any type,
    return the corresponding QWidget or QAction.
    If the object does not exist, returns None
    '''
    ptr = omUI.MQtUtil.findControl(mayaName)
    if ptr is None:
        ptr = omUI.MQtUtil.findLayout(mayaName)
    if ptr is None:
        ptr = omUI.MQtUtil.findMenuItem(mayaName)
    if ptr is not None:
        return sip.wrapinstance(long(ptr), QtCore.QObject)

uiFile = ('D:/rpGUI.ui')
form_class, base_class = uic.loadUiType(uiFile)

class myUIClass(form_class, base_class):
    def __init__(self, parent=getMayaWindow()):
        super(myUIClass, self).__init__(parent)
        self.setupUi(self)
        # methods
        self.connectSignals()

    def connectSignals(self):
        """Connect all the UI signals"""
        print "Connect signals"

def runUI():
    global app
    global win
    app = QtGui.qApp
    win = myUIClass()
    win.show()

runUI()
Above code is giving Error
# Error: AttributeError: file C:\Program
Files\Autodesk\Maya2015\Python\lib\site-packages\PyQt4\uic\__init__.py line
215: 'module' object has no attribute 'QMainWindow' #
So can anyone tell me what's going wrong? Thank you.
Answer: Try using PySide. Maya 2015 ships with PySide.
|
Product of subset of numbers in python
Question: I'm going through Project Euler, and I'm getting stuck on this question.
I'm going to post my code with comments, so everyone can follow my thinking
and see where I went wrong. All suggestions are appreciated :)
# need to find the largest product in a series
import time # brings time into the code
start = time.time() # creates a start time for the code
list = [] # where I'm going to store the multipules
# the trailing '\' joins the grid into one long number string
num = '\
73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450'
i = 0 # start of the ticker
for i in range(0, 1000, 1): # the length of the large block of numbers is 1000
    hold_num = int(num[i])*int(num[i+1])*int(num[i+2]) # I'm creating a product of every three consecutive digits ... this is the subset
    list.append(hold_num) # storing the products in a list
    i =+ 1
print max(list) # finding the max of the products in the stored list
I end up getting this error:
Traceback (most recent call last):
File "/Users/robertdefilippi/Documents/Python/Euler/8eu.py", line 37, in <module>
hold_num = int(num[i])*int(num[i+1])*int(num[i+2])
IndexError: string index out of range
[Finished in 0.0s with exit code 1]
Any suggestions on what I'm doing wrong?
Answer: When you do `for i in range(0, 1000, 1)`, then once `i` reaches 998,
`num[i+2]` will be out of range (and at 999, `num[i+1]` too).
You have to use `range(0, 998, 1)` instead, which can be shortened to
`range(998)`. The `for` loop already increments `i` and starts it at 0, so you
don't need `i = 0` or the trailing `i =+ 1` (which, incidentally, assigns `+1`
to `i`; it is not `i += 1`).
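Once the range is fixed, the whole search can also be collapsed into one generator expression; a small sketch with short digit strings (not the full 1000-digit number):

```python
def max_product_of_three(digits):
    """Largest product of three consecutive digits in a digit string."""
    return max(int(digits[i]) * int(digits[i + 1]) * int(digits[i + 2])
               for i in range(len(digits) - 2))

print(max_product_of_three('12345'))       # 3*4*5 = 60
print(max_product_of_three('7316717653'))  # 7*6*5 = 210
```

`range(len(digits) - 2)` stops early enough that `digits[i + 2]` is always in range, which is exactly the fix described above.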
|
HDF5 file created with h5py can't be opened by h5py
Question: I created an HDF5 file apparently without any problems, under Ubuntu 12.04
(32bit version), using Anaconda as Python distribution and writing in ipython
notebooks. The underlying data are all numpy arrays. For example,
import numpy as np
import h5py
f = h5py.File('myfile.hdf5','w')
group = f.create_group('a_group')
group.create_dataset(name='matrix', data=np.zeros((10, 10)), chunks=True, compression='gzip')
If I try to open this file from a new ipython notebook, though, I get an error
message:
f = h5py.File('myfile.hdf5', "r")
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
<ipython-input-4-b64ac5089cd4> in <module>()
----> 1 f = h5py.File(file_name, "r")
/home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/files.pyc in __init__(self, name, mode, driver, libver, userblock_size, **kwds)
220
221 fapl = make_fapl(driver, libver, **kwds)
--> 222 fid = make_fid(name, mode, userblock_size, fapl)
223
224 Group.__init__(self, fid)
/home/sarah/anaconda/lib/python2.7/site-packages/h5py/_hl/files.pyc in make_fid(name, mode, userblock_size, fapl, fcpl)
77
78 if mode == 'r':
---> 79 fid = h5f.open(name, h5f.ACC_RDONLY, fapl=fapl)
80 elif mode == 'r+':
81 fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
/home/sarah/anaconda/lib/python2.7/site-packages/h5py/h5f.so in h5py.h5f.open (h5py/h5f.c:1741)()
IOError: Unable to open file (Unable to find a valid file signature)
Can you tell me what that missing file signature is? Did I miss something when
I created the file?
Answer: Since we resolved the issue in the comments on my question, I'm writing the
results out here to mark it as solved.
The main problem was that I forgot to close the file after I created it. There
would have been two simple options, either:
import numpy as np
import h5py
f = h5py.File('myfile.hdf5','w')
group = f.create_group('a_group')
group.create_dataset(name='matrix', data=np.zeros((10, 10)), chunks=True, compression='gzip')
f.close()
or, my favourite because the file is closed automatically:
import numpy as np
import h5py
with h5py.File('myfile.hdf5','w') as f:
    group = f.create_group('a_group')
    group.create_dataset(name='matrix', data=np.zeros((10, 10)), chunks=True, compression='gzip')
|
Indentation error
Question: I receive an indentation error whose cause I cannot figure out.
The error is
('unexpected indent', ('C:/Hamid/Failure_index.py',15,1,'\tSDV2=xyList[0]\n')).
My code is
from abaqusConstants import *
from odbAccess import *
from visualization import *
#---------------------------------------------------------------------------
out_file = 'C:\Hamid\Stochastic\Python_script_for_Monte_Carlo_simulation\Microtensile/Failure_index.dat'
fid = open(out_file,'w')
for i in range(1,50):
    odb_path = 'C:\Hamid\Stochastic\Python_script_for_Monte_Carlo_simulation\Microtensile/Microtens-'+str(i)+'_xs.odb'
    session.openOdb(name=odb_path)
    odb = session.odbs[odb_path]
    session.viewports['Viewport: 1'].setValues(displayedObject=odb)
    xyList = session.xyDataListFromField(odb=odb, outputPosition=INTEGRATION_POINT, variable=(('SDV2', INTEGRATION_POINT), ), elementSets=(' ALL ELEMENTS', ))
    SDV2 = xyList[0]
    fid.write(SDV2+'\n')
    odb.close()
fid.close()
Answer: You are mixing tabs and spaces in your source code, and Python's algorithm for
expanding tabs to spaces causes `SDV2 = xyList[0]` to be indented 8 spaces,
not 4 like the other lines in the `for` loop.
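You can see the mismatch yourself with `str.expandtabs()`, which uses the same 8-column tab stops Python 2's tokenizer assumes; the line content below is taken from the error message:

```python
tabbed = '\tSDV2 = xyList[0]'     # the offending line: indented with one tab
spaced = '    SDV2 = xyList[0]'   # its neighbours: indented with 4 spaces
print(repr(tabbed.expandtabs()))  # the tab expands to 8 columns, not 4
```

Because 8 does not match the 4-space indentation of the surrounding lines, the parser reports an unexpected indent; replacing the tab with 4 spaces fixes it.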
|
How to add try exception in scrapy spider?
Question: I built a simple crawler application using urllib2 and BeautifulSoup. Now I
am planning to change it into a Scrapy spider, but how can I handle errors
while the crawler is running? My current application has some code like this,
error_file = open('errors.txt','a')
finish_file = open('finishlink.txt','a')
try:
    # code to process each link
    # if successfully finished, store the link in 'finishlink.txt'
except Exception as e:
    # write the link to 'errors.txt' with the error code
So when I am processing thousands of links, the successfully processed links
are stored in finishlink.txt and the errors in errors.txt, so I can re-run the
failed links later until they are successfully processed. How can I accomplish
this in the following code?
class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        with open(filename + '.txt', 'wb') as f:
            f.write(response.body)
Answer: You can create a [spider
middleware](http://scrapy.readthedocs.org/en/latest/topics/spider-
middleware.html#spider-middleware) and override the
[`process_spider_exception()`](http://scrapy.readthedocs.org/en/latest/topics/spider-
middleware.html#scrapy.contrib.spidermiddleware.SpiderMiddleware.process_spider_exception)
method, saving the links in a file there.
A spider middleware is just a way for you to extend Scrapy's behavior. Here is
a full example that you can modify as needed for your purpose:
from scrapy import signals

class SaveErrorsMiddleware(object):
    def __init__(self, crawler):
        crawler.signals.connect(self.close_spider, signals.spider_closed)
        crawler.signals.connect(self.open_spider, signals.spider_opened)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def open_spider(self, spider):
        self.output_file = open('somefile.txt', 'a')

    def close_spider(self, spider):
        self.output_file.close()

    def process_spider_exception(self, response, exception, spider):
        self.output_file.write(response.url + '\n')
Put this in a module and set it up in settings.py:
SPIDER_MIDDLEWARES = {
    'myproject.middleware.SaveErrorsMiddleware': 1000,
}
This code will run together with your spider, triggering the open_spider(),
close_spider(), and process_spider_exception() methods when appropriate.
**Read more:**
* [Spider Middlewares](http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html#spider-middleware)
* [Signals in Scrapy](http://scrapy.readthedocs.org/en/latest/topics/signals.html)
* [Example middleware in Scrapy source code](https://github.com/scrapy/scrapy/blob/master/scrapy/spidermiddlewares/httperror.py)
|
Grouping ips from list in python
Question: I have a text file containing hundreds of comma-separated IPs with just
spaces between them.
I need to take them 10 at a time and put them in another block of code. So,
for IPs:
`1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4, ... 123.123.123.123, 124.124.124.124,
125.125.125.125`
I would need:
codethings [1.1.1.1, 2.2.2.2, ... 10.10.10.10] code code code
codethings [11.11.11.11, 12.12.12.12, ... 20.20.20.20] code code code
codethings [21.21.21.21, 22.22.22.22, ... 30.30.30.30] code code code
etc
I'm pretty sure I could do it with RegEx but I can't help but think there are
simpler ways to do it.
Any and all help appreciated. Thank you!
Answer: Split on comma, strip excessive whitespace from each element:
txt = '''1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4,
123.123.123.123, 124.124.124.124, 125.125.125.125'''
ip_list = map(str.strip, txt.split(','))
As for pagination, see answers for: [Paging python lists in slices of 4
items](http://stackoverflow.com/questions/3950079/paging-python-lists-in-
slices-of-4-items) OR [Is this how you paginate, or is there a better
algorithm?](http://stackoverflow.com/questions/3744451/is-this-how-you-
paginate-or-is-there-a-better-algorithm)
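For the "take them 10 at a time" part, a simple slicing generator is enough (no regex needed); the IPs below are made up for illustration:

```python
def grouped(items, size):
    """Yield successive chunks of at most `size` items from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

ips = ['%d.%d.%d.%d' % (n, n, n, n) for n in range(1, 26)]  # 25 fake IPs
for batch in grouped(ips, 10):
    print(batch)  # batches of 10, 10, then 5
```

Each `batch` is a plain list, ready to drop into the "codethings [...]" block from the question.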
I would also advise (just to be sure) filtering out invalid IP addresses, for
example using a generator and the `socket` module:
from __future__ import print_function
import sys
import socket
txt = '''1.1.1.1, 2.2.2.2, 3.3.3.3, 4.4.4.4,
123.123.123.123, 124.124.124, 555.125.125.125,'''
def iter_ips(txt):
    for address in txt.split(','):
        address = address.strip()
        try:
            _ = socket.inet_aton(address)
            yield address
        except socket.error as err:
            print("invalid IP:", repr(address), file=sys.stderr)
print(list(iter_ips(txt)))
|
Capture standard output for Python child spawned by external package
Question: I would like to capture in a file the standard output of a child process
spawned by an external package.
I can NOT simply redirect sys.stdout to a file, as this does not capture the
output of new processes ([How can I capture the stdout output of a child
process?](http://stackoverflow.com/questions/923079/how-can-i-capture-the-
stdout-output-of-a-child-process)).
But unfortunately, I also can NOT simply use subprocess or the terminal to
capture stdout, since I am not spawning the process -- it is spawned by the
code in an external Python package.
Specifically, I am using the API to the package pystan. Some of the functions
in this package spawn child MCMC runs that write to standard output.
Answer: I only see two ways to do this:
* monkey-patch `pystan` (might take several patches)
* monkey-patch `subprocess` (might get more info than you want)
I would go with the `subprocess` monkey-patch: write your own version of
`Popen` (or whatever `pystan` is using), and replace the `subprocess` version
with your own. Your version would track `STDOUT` and save it somewhere you
could get at it.
Roughly, something like this:
import subprocess
original_Popen = subprocess.Popen
def my_Popen(..., stdout=PIPE, ...): # or somesuch
    ...
subprocess.Popen = my_Popen
# call pystan here
subprocess.Popen = original_Popen # if needed
|
detail view can't find pk
Question: Fairly new to Django and Python, I am trying to build a detail view and a list
view for a bunch of pictures I have uploaded. My list view works and shows all
the pictures I have but I cannot figure out how to create a detailview that
would let me look at only one picture.
In my "mysite" directory I have a url.py containing
urlpatterns = patterns('',
    url(r'^photo/', include('photo.urls', namespace = "photo")),
)
Then in my "photo" directory I have
from photo import views
urlpatterns = patterns('',
    url(r'$', views.ListImage.as_view(), name='Photo-List',),
    url(r'^/image/(?P<pk>\d+)/$', views.ImageView.as_view(), name='image-view',),
)
I have uploaded a bunch of pictures and I can see them when I visit my local
website 800local../photo. But if I want to see one picture only, the address
../photo/image/1 returns a 404.
My Folder can be found at <https://github.com/henrigeek/Django>
Answer: A list view returns `object_list`, while a detail view returns `object`.
{% if object %}
    {{ object.title }}
    ................
{% else %}
    {% for img_obj in object_list %}
        {{ img_obj.title }}
        ................
    {% empty %}
        No images found
    {% endfor %}
{% endif %}
You can refer here [Class based
view](https://docs.djangoproject.com/en/dev/ref/class-based-views/generic-
display/)
|
Python - Learn Python The Hard Way exercise 41 confused?
Question: I read all the answers related to this section but still didn't understand
one part. What exactly does the following code do?
random.sample(WORDS, snippet.count("%%%"))
I know `snippet.count("%%%")` is the number of occurrences of "%%%" in
snippet, but I didn't get what the whole expression does.
Here is the whole code if it helps:
import random
from urllib import urlopen
import sys

WORD_URL = "http://learncodethehardway.org/words.txt"
WORDS = []

PHRASES = {
    "class %%%(%%%):":
        "Make a class named %%% that is-a %%%.",
    "class %%%(object):\n\tdef __init__(self, ***)":
        "class %%% has-a __init__ that takes self and *** parameters.",
    "class %%%(object):\n\tdef ***(self, @@@)":
        "class %%% has-a function named *** that takes self and @@@ parameters.",
    "*** = %%%()":
        "Set *** to an instance of class %%%.",
    "***.***(@@@)":
        "From *** get the *** function, and call it with parameters self, @@@.",
    "***.*** = '***'":
        "From *** get the *** attribute and set it to '***'."
}

# do they want to drill phrases first
if len(sys.argv) == 2 and sys.argv[1] == "english":
    PHRASE_FIRST = True
else:
    PHRASE_FIRST = False

# load up the words from the website
for word in urlopen(WORD_URL).readlines():
    WORDS.append(word.strip())

def convert(snippet, phrase):
    class_names = [w.capitalize() for w in
                   random.sample(WORDS, snippet.count("%%%"))]
    other_names = random.sample(WORDS, snippet.count("***"))
    results = []
    param_names = []

    for i in range(0, snippet.count("@@@")):
        param_count = random.randint(1,3)
        param_names.append(', '.join(random.sample(WORDS, param_count)))

    for sentence in snippet, phrase:
        result = sentence[:]

        # fake class names
        for word in class_names:
            result = result.replace("%%%", word, 1)

        # fake other names
        for word in other_names:
            result = result.replace("***", word, 1)

        # fake parameter lists
        for word in param_names:
            result = result.replace("@@@", word, 1)

        results.append(result)

    return results

# keep going until they hit CTRL-D
try:
    while True:
        snippets = PHRASES.keys()
        random.shuffle(snippets)

        for snippet in snippets:
            phrase = PHRASES[snippet]
            question, answer = convert(snippet, phrase)
            if PHRASE_FIRST:
                question, answer = answer, question

            print question
            raw_input("> ")
            print "ANSWER: %s\n\n" % answer
except EOFError:
    print "\nBye"
Answer: The code gets [random
elements](https://docs.python.org/2/library/random.html#random.sample) from
`WORDS`. Since you might have more than one `%%%` placeholder in the string,
it gets as many items from that list as there are occurrences of `%%%` in
`snippet`.
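A quick illustration with a made-up word list (the real script downloads `WORDS` from a URL):

```python
import random

WORDS = ['cheese', 'apple', 'river', 'stone', 'cloud']
snippet = "class %%%(%%%):"            # contains two %%% placeholders
picked = random.sample(WORDS, snippet.count("%%%"))
print(picked)                          # two distinct random words from WORDS
```

`random.sample` picks without replacement, so the two words substituted for the two `%%%` placeholders are always different.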
|
python rumps not working on OS X 10.10 - AttributeError: 'module' object has no attribute 'App'
Question: I am trying to run some demos from
<http://rumps.readthedocs.org/en/latest/examples.html> using rumps
(Ridiculously Uncomplicated Mac OS X Python Statusbar apps), and while
importing rumps I get:
Mac-28cfe915100b-2:Desktop andi$ pip install rumps
Requirement already satisfied (use --upgrade to upgrade): rumps in /Library/Python/2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): pyobjc-core in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/PyObjC (from rumps)
Cleaning up...
Mac-28cfe915100b-2:Desktop andi$ python
Python 2.7.6 (default, Sep 9 2014, 15:04:36)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.39)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import rumps
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "rumps.py", line 4, in <module>
class AwesomeStatusBarApp(rumps.App):
AttributeError: 'module' object has no attribute 'App'
Answer: This has nothing to do with `rumps` itself; rather, you copied the demo into a
file named "rumps.py". The same error will happen with any other module you
try to import from a file named the same thing.
$ echo "import math; math.sqrt(42)" > math.py
$ python
Python 2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import math
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "math.py", line 1, in <module>
import math; math.sqrt(42)
AttributeError: 'module' object has no attribute 'sqrt'
|
how to get the variables from the configuration file?
Question: Please help me solve this problem for Python 2.7 / Django 1.6.
I need the console to display the value of the variable SITE_ID from the file
settings.py. For this I use the command:
python manage.py shell
but I do not know what to do next.
Answer: You can import the settings:
>>> from django.conf import settings
>>> print(settings.SITE_ID)
|
GAE Python Datastore - query on nested class table
Question: I have a Java class that contains some nested classes, as follow:
public class OuterClass{
    @PrimaryKey
    private String id;
    @Persistent
    private String name;
    // --- blah blah

    static class MyNestedClass{
        //--- properties declaration
    }
}
This class works fine, basically within just one structure (OuterClass) I can
store _n_ nested structures, and each structure works as a datastore table.
Inspecting my ds with `datastore viewer` I see that I have now a table called
`OuterClass$MyNestedClass`, I can successfully run GQL queries on this table
like this:
//--- note the wrapping quotation marks on the table name
SELECT * FROM "OuterClass$MyNestedClass" where something = '100'
So far everything is OK. Now I need to create a Google App Engine Python
method that empties that nested class table. I have already done this with
other tables/classes, but never with nested classes.
This is my python code:
import cgi
import datetime
import urllib
import webapp2
from google.appengine.ext import db
class OuterClass$MyNestedClass(db.Model):
    name = db.StringProperty()
    surname = db.StringProperty()
    age = db.StringProperty()

class emptyMyNestedClassHandler(webapp2.RequestHandler):
    def get(self):
        nestedclass = db.GqlQuery("SELECT __key__ FROM \"OuterClass$MyNestedClass\"")
        count = 0
        for p in nestedclass:
            count += 1
        db.delete(nestedclass)
        self.response.out.write("Deleted MyNestedClass Entities: " + str(count))

app = webapp2.WSGIApplication([
    ('/emptyMyNestedClass', emptyMyNestedClassHandler)
], debug=True)
and this is the error I'm getting
class OuterClass$MyNestedClass(db.Model):
^
SyntaxError: invalid syntax
I tried to change the class name from `OuterClass$MyNestedClass` to
`MyNestedClass`, but I get this error:
KindError: No implementation for kind 'OuterClass$MyNestedClass'
Which name do I have to assign to my python class in order to make it work?
How can I handle the $ issue?
hope I've been clear enough, thank you
* * *
## Solution
Following [Matt's suggestion](http://stackoverflow.com/a/26629206/1054151), my
JAVA class now looks like this:
public class OuterClass{
    @PrimaryKey
    private String id;
    @Persistent
    private String name;
    // --- blah blah

    @PersistenceCapable( table="NestedClass")
    @Embedded
    static class NestedClass{
        //--- properties declaration
    }
}
And this is the Python code:
import cgi
import datetime
import urllib
import webapp2
from google.appengine.ext import db
class NestedClass(db.Model):
    name = db.StringProperty()
    surname = db.StringProperty()
    age = db.StringProperty()

class emptyNestedClassHandler(webapp2.RequestHandler):
    def get(self):
        nestedclass = db.GqlQuery("SELECT __key__ FROM NestedClass")
        count = 0
        for p in nestedclass:
            count += 1
        db.delete(nestedclass)
        self.response.out.write("Deleted NestedClass Entities: " + str(count))

app = webapp2.WSGIApplication([
    ('/emptyNestedClass', emptyNestedClassHandler)
], debug=True)
Note that I **had** to declare class `NestedClass`.
Thanks!
Answer: Take a look at this resource for [JDO
annotations](http://db.apache.org/jdo/annotations.html).
you may want to annotate your class like this:
public class OuterClass{
    @PrimaryKey
    private String id;
    @Persistent
    private String name;
    // --- blah blah

    @PersistenceCapable( table="NestedClass")
    @Embedded
    static class NestedClass{
        //--- properties declaration
    }
}
With which from GQL you should be able to do this:
SELECT * FROM "OuterClass" where NestedClass.something = '100'
|
Read a CSV file in Python and print out unique dates from a column that has date and time
Question: I need to be able to read a csv file and sum a few columns per day and then
generate a new csv file with the solutions. I am brand new to Python and I
have figured out how to read the csv but now I must figure out how to sum the
columns based on the date/time column.
CSV:
tag,date,symbol,exch,volume,price,side,ind
1058,20140612 13:29:59.042,BRK/B,NQBX,1000,61.25,SELL_SHORT,A
1059,20140612 13:29:59.043,JNJ,NQBX,185,31.94,SELL_SHORT,A
1153,20140612 13:30:00.117,AAPL,NQBX,77,43.64,SELL,A
1201,20140612 13:30:00.190,WFC,NQBX,100,49.92,SELL,A
1720,20140612 13:30:04.003,JPM,NQBX,100,50.16,SELL,A
1738,20140613 13:30:04.254,PFE,NQBX,600,43.89,SELL_SHORT,A
108167,20140613 13:30:04.809,VZ,NSDQ,2000,61.23,SELL_SHORT,R
1799,20140613 13:30:05.252,MSFT,NQBX,11,43.76,BUY,A
1879,20140612 13:30:06.393,CVX,NQBX,40,70.58,BUY,A
1908,20140612 13:30:06.803,INTC,NQBX,100,56.52,SELL_SHORT,A
1989,201406117 13:30:08.003,GE,NQBX,100,50.14,SELL,A
2008,20140619 13:30:08.169,JNJ,NQBX,97,15.18,SELL,A
2021,20140619 13:30:08.393,PFE,NQBX,38,43.89,SELL_SHORT,A
2197,20140619 13:30:10.599,WFC,NQBX,100,30.34,BUY,A
2302,20140620 13:30:12.002,GE,NQBX,100,50.14,SELL,A
2368,20140620 13:30:12.931,INTC,NQBX,500,31.44,SELL,A
I need to sum the volume column per day and then create a new csv with the
summary.
Answer: You can use `csv.DictReader` with `itertools.groupby` to achieve what you
want.
import csv
import itertools
def sum_volumes_by_date(yourcsvfile, writetocsv):
    # read all the rows, pairing each header field with its value in a dictionary
    results = [line for line in csv.DictReader(open(yourcsvfile))]
    with open(writetocsv, 'w') as f:
        f.write("Date,Sum(Vols)\n")
        # group a sorted copy of the rows by the first 8 characters of 'date' (YYYYMMDD)
        for k, g in itertools.groupby(sorted(results, key=lambda x: x['date']),
                                      lambda each: each['date'][:8]):
            # then sum the corresponding 'volume' values
            f.write("{},{}\n".format(k, sum(int(each['volume']) for each in g)))
Usage:
>>> sum_volumes_by_date('in.csv', 'out.csv')
>>> cat out.csv
Date,Sum(Vols)
20140611,100
20140612,1602
20140613,2611
20140619,235
20140620,600
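If `groupby` feels heavy, the same aggregation can be sketched with a plain `defaultdict`, which needs no sorting; the sample rows below are a subset of the question's data:

```python
import csv
from collections import defaultdict

def sum_volumes_by_day(lines):
    """lines: an open CSV file or any iterable of CSV lines with a header row."""
    totals = defaultdict(int)
    for row in csv.DictReader(lines):
        totals[row['date'][:8]] += int(row['volume'])  # key on the YYYYMMDD prefix
    return dict(totals)

sample = [
    'tag,date,symbol,exch,volume,price,side,ind',
    '1058,20140612 13:29:59.042,BRK/B,NQBX,1000,61.25,SELL_SHORT,A',
    '1738,20140613 13:30:04.254,PFE,NQBX,600,43.89,SELL_SHORT,A',
    '1908,20140612 13:30:06.803,INTC,NQBX,100,56.52,SELL_SHORT,A',
]
print(sum_volumes_by_day(sample))  # {'20140612': 1100, '20140613': 600}
```

Writing the result back out is then one `csv.writer` loop over the returned dictionary.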
|
Conda Cython Build PYD/SO Files
Question: I have built a module using "conda build packagename".
However, the built module ends up in "\Anaconda\conda-bld\work".
The module can only be imported (using "import packagename") if I cd into this
directory and then run Python. I have tried placing the files from
"\Anaconda\conda-bld\work" into "\Anaconda\Lib\site-packages"; however, I am
still not able to import the module from any other directory; I must be in
"\Anaconda\Lib\site-packages".
Is the only solution to put the .PYD file/ .SO file next to the executable
Python file or is there a way to let Python know there is a new module
installed?
Thank you for your help.
Answer: In the conda build script, you need to install the files, not just build them.
For Python, this typically means running `python setup.py install` in the
build.sh, and including `python` in your build dependencies so that the
`python` will install into the build environment.
|
Plot (x, y, z) triples on 2d plane with a colormap
Question: I'm using `python2` and `matplotlib`.
I have lots of triples `(x, y, z)`. I need to plot them as a kind of
**histogram/heatmap** on a 2d plane with the axes `x` and `y` and a color
indicating `z`.
The main problem is that x and y are not on any kind of grid but vary freely,
so some kind of interpolation is needed.
I tried `contour(X, Y, Z)`, but this takes Z as a 2d array, which means it
needs the values on a grid.
Answer: I like using `hexbin` with its `C` input option for this sort of thing. For
example, if you've parsed your data from tuple-triples into `x_arr`, `y_arr`
and `z_arr` arrays, you could do:
from matplotlib.pyplot import hexbin
hexbin(x_arr, y_arr, C=z_arr)
This implicitly averages the data over the hexagonal bins. You can change the
`reduce_C_function` option if you want something different than a simple
average. You can change the number of bins with the `gridsize` option.
|
django.db.migrations.graph.CircularDependencyError in Django 1.7.1
Question: Just upgraded to Django 1.7.1 and am trying to set up a fresh dev environment.
I ran a _users_ migration OK, but when I try to run a _tweets_ migration, I
get
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/Users/libbyh/Documents/virtualenvs/fyl/lib/python2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
utility.execute()
File "/Users/libbyh/Documents/virtualenvs/fyl/lib/python2.7/site-packages/django/core/management/__init__.py", line 377, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/Users/libbyh/Documents/virtualenvs/fyl/lib/python2.7/site-packages/django/core/management/base.py", line 288, in run_from_argv
self.execute(*args, **options.__dict__)
File "/Users/libbyh/Documents/virtualenvs/fyl/lib/python2.7/site-packages/django/core/management/base.py", line 338, in execute
output = self.handle(*args, **options)
File "/Users/libbyh/Documents/virtualenvs/fyl/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 106, in handle
plan = executor.migration_plan(targets)
File "/Users/libbyh/Documents/virtualenvs/fyl/lib/python2.7/site-packages/django/db/migrations/executor.py", line 49, in migration_plan
for migration in self.loader.graph.forwards_plan(target):
File "/Users/libbyh/Documents/virtualenvs/fyl/lib/python2.7/site-packages/django/db/migrations/graph.py", line 55, in forwards_plan
return self.dfs(node, lambda x: self.dependencies.get(x, set()))
File "/Users/libbyh/Documents/virtualenvs/fyl/lib/python2.7/site-packages/django/db/migrations/graph.py", line 105, in dfs
raise CircularDependencyError()
django.db.migrations.graph.CircularDependencyError
So I'm trying to track down the circular dependency. Here are the models ready
to migrate:
_users/models.py_
from django.db import models

# Create your models here.
class UsersTweets(models.Model):
    id = models.IntegerField(primary_key=True)
    user = models.ForeignKey('users.User')
    tweet = models.ForeignKey('tweets.Tweet')
    source = models.BooleanField() # sent the tweet
    target = models.BooleanField() # received a reply in the tweet

    class Meta:
        managed = True

class User(models.Model):
    id = models.IntegerField(primary_key=True)
    twitter_id = models.CharField(max_length=21, unique=True)
    twitter_name = models.CharField(max_length=55, unique=True)
    fullname = models.CharField(max_length=45)
    followers = models.IntegerField()
    following = models.IntegerField()
    favorites = models.IntegerField()
    tweets = models.IntegerField()
    timezone = models.CharField(max_length=45, blank=True)
    legislator = models.ForeignKey('users.Legislator', blank=True, null=True)

    class Meta:
        managed = True

class Legislator(models.Model):
    id = models.IntegerField(primary_key=True)
    PARTY = (
        ('Democrat','Democrat'),
        ('Republican','Republican'),
        ('Independent','Independent'),
    )
    GENDER = (
        ('M','Man'),
        ('F','Woman'),
    )
    TYPE = (
        ('sen','Senator'),
        ('rep','Representative')
    )
    STATE = (
        ('AK', 'Alaska'),
        ('AL', 'Alabama'),
        ('AR', 'Arkansas'),
        ('AS', 'American Samoa'),
        ('AZ', 'Arizona'),
        ('CA', 'California'),
        ('CO', 'Colorado'),
        ('CT', 'Connecticut'),
        ('DC', 'District of Columbia'),
        ('DE', 'Delaware'),
        ('FL', 'Florida'),
        ('GA', 'Georgia'),
        ('GU', 'Guam'),
        ('HI', 'Hawaii'),
        ('IA', 'Iowa'),
        ('ID', 'Idaho'),
        ('IL', 'Illinois'),
        ('IN', 'Indiana'),
        ('KS', 'Kansas'),
        ('KY', 'Kentucky'),
        ('LA', 'Louisiana'),
        ('MA', 'Massachusetts'),
        ('MD', 'Maryland'),
        ('ME', 'Maine'),
        ('MI', 'Michigan'),
        ('MN', 'Minnesota'),
        ('MO', 'Missouri'),
        ('MP', 'Northern Mariana Islands'),
        ('MS', 'Mississippi'),
        ('MT', 'Montana'),
        ('NA', 'National'),
        ('NC', 'North Carolina'),
        ('ND', 'North Dakota'),
        ('NE', 'Nebraska'),
        ('NH', 'New Hampshire'),
        ('NJ', 'New Jersey'),
        ('NM', 'New Mexico'),
        ('NV', 'Nevada'),
        ('NY', 'New York'),
        ('OH', 'Ohio'),
        ('OK', 'Oklahoma'),
        ('OR', 'Oregon'),
        ('PA', 'Pennsylvania'),
        ('PR', 'Puerto Rico'),
        ('RI', 'Rhode Island'),
        ('SC', 'South Carolina'),
        ('SD', 'South Dakota'),
        ('TN', 'Tennessee'),
        ('TX', 'Texas'),
        ('UT', 'Utah'),
        ('VA', 'Virginia'),
        ('VI', 'Virgin Islands'),
        ('VT', 'Vermont'),
        ('WA', 'Washington'),
        ('WI', 'Wisconsin'),
        ('WV', 'West Virginia'),
        ('WY', 'Wyoming'),
    )
    last_name = models.CharField(max_length=17, blank=True)
    first_name = models.CharField(max_length=11, blank=True)
    gender = models.CharField(max_length=1, blank=True)
    chamber = models.CharField(max_length=3, blank=True)
    state = models.CharField(max_length=2, blank=True)
    party = models.CharField(max_length=11, blank=True)
    url = models.CharField(max_length=36, blank=True)
    address = models.CharField(max_length=55, blank=True)
    phone = models.CharField(max_length=12, blank=True)
    contact_form = models.CharField(max_length=103, blank=True)
    rss_url = models.CharField(max_length=106, blank=True)
    facebook = models.CharField(max_length=27, blank=True)
    facebook_id = models.IntegerField(blank=True, null=True)
    youtube = models.CharField(max_length=20, blank=True)
    youtube_id = models.IntegerField(blank=True, null=True)
    bioguide_id = models.IntegerField(blank=True, null=True)
    thomas_id = models.IntegerField(blank=True, null=True)
    opensecrets_id = models.IntegerField(blank=True, null=True)
    lis_id = models.IntegerField(blank=True, null=True)
    cspan_id = models.IntegerField(blank=True, null=True)
    govtrack_id = models.IntegerField(blank=True, null=True)
    votesmart_id = models.IntegerField(blank=True, null=True)
    ballotpedia_id = models.IntegerField(blank=True, null=True)
    washington_post_id = models.IntegerField(blank=True, null=True)
    icpsr_id = models.IntegerField(blank=True, null=True)
    wikipedia_id = models.CharField(max_length=40, blank=True)

    def _get_name_with_honor(self):
        return '%s. %s %s (%s-%s)' % ((self.chamber).title(), self.first_name, self.last_name, self.party[:1], self.state)

    honor_name = property(_get_name_with_honor)

    def __unicode__(self):
        return self.last_name

    class Meta:
        managed = True
_tweets/models.py_
from django.db import models
# Create your models here.
class Tweet(models.Model):
tweet_id = models.CharField(primary_key=True, max_length=21)
created_at = models.DateTimeField()
text = models.CharField(max_length=255)
source = models.CharField(max_length=255, blank=True)
location_geo = models.TextField(blank=True) # This field type is a guess.
location_geo_0 = models.DecimalField(max_digits=14, decimal_places=10, blank=True, null=True)
location_geo_1 = models.DecimalField(max_digits=14, decimal_places=10, blank=True, null=True)
iso_language = models.CharField(max_length=3)
user = models.ManyToManyField('users.User', through = "users.UsersTweets")
class Meta:
managed = True
I checked [another StackOverflow question about this
error](http://stackoverflow.com/questions/25292873/circular-dependency-error-
when-running-migrations-in-django-1-7c2), but I don't get any info from my
error about where the dependency might be. Tried to follow [this
advice](https://code.djangoproject.com/ticket/22932#comment:4) but wasn't sure
how to edit the `swappable_dependency`. Any ideas? Thanks!
Answer:
  1. In `tweets/models.py`, comment out the line: `# user = models.ManyToManyField('users.User', through = "users.UsersTweets")`
  2. `python manage.py makemigrations tweets`
  3. `python manage.py makemigrations users`
  4. Uncomment the line in `tweets/models.py`: `user = models.ManyToManyField('users.User', through = "users.UsersTweets")`
  5. `python manage.py makemigrations tweets`
  6. `python manage.py migrate`
|
Calculate "age" MongoDB document DateTimeField Flask
Question: I'm using Python 3.4 and Flask 0.10 with MongoDB 2.6 for my application,
with mongo Documents. I want to calculate the "age" in years of a Person
just from its birthday. I have this code:
import datetime
from Personal import db
class Person(db.Document):
ID = db.StringField(required=True, primary_key=True, unique=True, max_length=6)
name = db.StringField(required=True)
birthday = db.DateTimeField(required=True)
age = ########
I tried with
age = int(datetime.datetime.now.year) - int(birthday).year
And I know it's wrong. I already read the Mongo Documents guide, didn't help:
<http://docs.mongoengine.org/guide/defining-documents.html>
Please help with these DateTime operations.
Thanks
Answer: The feature you're looking for could be in python instead of mongodb, using a
`property` you can achieve that quite easily:
class Person(db.Document):
ID = db.StringField(required=True, primary_key=True, unique=True, max_length=6)
name = db.StringField(required=True)
birthday = db.DateTimeField(required=True)
@property
def age(self):
            return (datetime.datetime.now() - self.birthday).days // 365  # floor division so Python 3 yields a whole number
with that class, you're allowed to do this:
>>> p = Person(name='rafael', birthday=datetime.datetime(year=1990, day=1, month=1))
>>> p.age
24
But you won't be able to query by age, since it's not kept on your database.
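If exact ages matter, note that dividing the day count by 365 drifts slightly across leap years; the property can instead compare calendar fields directly. A sketch of that approach (the function name and the standalone form are illustrative):

```python
import datetime

def exact_age(birthday, today=None):
    """Return age in whole years, correctly handling leap years.

    `birthday` and `today` are datetime objects; this avoids the slight
    drift of dividing a day count by 365.
    """
    if today is None:
        today = datetime.datetime.now()
    # Subtract one year if this year's birthday hasn't happened yet.
    before_birthday = (today.month, today.day) < (birthday.month, birthday.day)
    return today.year - birthday.year - before_birthday

print(exact_age(datetime.datetime(1990, 1, 1), datetime.datetime(2014, 10, 23)))  # -> 24
```

The same body can be dropped into the `age` property on the Document class.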
|
Receiving end of socket splits data when printed
Question: So while programming sockets using Java and Python, I stumbled upon something
weird.
When sending a message using Java to the receiving end of the Python socket,
it splits the message into 2 parts, even though this was not intended.
I probably made a mistake somewhere that's causing this problem, but I really
don't know what it is.
You can see that Java sends "Test1" in one command and Python only receives
parts of that message:
<http://i.imgur.com/tbwa7C5.png>
Python Server Socket Source:
'''
Created on 23 okt. 2014
@author: Rano
'''
#import serial
import socket
HOST = ''
PORT = 1234
running = True;
skt = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
skt.bind((HOST, PORT))
skt.listen(1)
conne, addr = skt.accept()
#ser = serial.Serial('/dev/tty.usbmodem411', 9600)
while running == True:
        data = conne.recv(1024)
if(data == "quit"):
running = False
break
rawrecvstring = data + ""
recvstring = rawrecvstring.split("|")
print(recvstring[0])
#_______________________ABOVE IS RECEIVE_______________UNDER IS SEND_______________________#
# sendstring = ser.readline()
# if sendstring != "":
# conne.sendall(sendstring)
conne.close()
#ser.close()
And the Java Socket send function:
private String message;
private DataOutputStream out;
private BufferedReader in;
private Socket socket;
private boolean socketOnline;
public SocketModule(String IP, int Port){
try {
socket = new Socket(IP, Port);
out = new DataOutputStream(socket.getOutputStream());
in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
} catch (UnknownHostException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
};
void setMessage(String s){
try {
out.writeBytes(s);
out.flush();
System.out.println("message '" + s + "' sent!\n");
} catch (IOException e) {
e.printStackTrace();
}
};
Any ideas as to why the message is being split?
Answer: TCP is a stream protocol, not a message protocol.
As far as TCP is concerned, `s.send("abc"); s.send("def");` is exactly the
same thing as `s.send("abcdef")`. At the other end of the socket, when you go
to receive, it may return as soon as the first send arrives and give you
`"abc"`, but it could just as easily return `"abcdef"`, or `"a"`, or `"abcd"`.
They're all perfectly legal, and your code has to be able to deal with all of
them.
If you want to process entire messages separately, it's up to you to build a
protocol that delineates messages—whether that means using some separator that
can't appear in the actual data (possibly because, if it does appear in the
actual data, you escape it), or length-prefixing each message, or using some
self-delineating format like JSON.
It looks like you're part-way to building such a thing, because you've got
that `split('|')` for some reason. But you still need to add the rest of
it—loop around receiving bytes, adding them to a buffer, splitting any
complete messages off the buffer to process them, and holding any incomplete
message at the end for the next loop. And, of course, sending the `|`
separators on the other side.
For example, your Java code can do this:
out.writeBytes(s + "|");
Then, on the Python side:
buf = ""
while True:
        data = conne.recv(1024)
if not data:
# socket closed
if buf:
# but we still had a leftover message
process_message(buf)
break
buf += data
pieces = buf.split("|")
buf = pieces.pop()
for piece in pieces:
process_message(piece)
That `process_message` function can handle the special "quit" message, print
out anything else, whatever you want. (And if it's simple enough, you can
inline it into the two places it's called.)
From a comment, it sounds like you wanted to use that `|` to separate fields
within each message, not to separate messages. If so, just pick another
character that will never appear in your data and use that in place of `|`
above (and then do the `msg.split('|')` inside `process_message`). One really
nice option is `\n`, because then (on the Python side) you can use
`socket.makefile`, which gives you a file-like object that does the buffering
for you and just yields lines one by one when you iterate it (or call
`readline` on it, if you prefer).
For more detail on this, see [Sockets are byte streams, not message
streams](http://stupidpythonideas.blogspot.com/2013/05/sockets-are-byte-
streams-not-message.html).
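The length-prefixing alternative mentioned above can be sketched like this; it works for any byte payload, including ones that contain `|` or newlines (the function names are mine):

```python
import struct

def frame(payload):
    """Prefix a bytes payload with its length as a 4-byte big-endian integer."""
    return struct.pack('>I', len(payload)) + payload

def unframe(buf):
    """Split complete messages off the front of buf; return (messages, leftover)."""
    messages = []
    while len(buf) >= 4:
        (length,) = struct.unpack('>I', buf[:4])
        if len(buf) < 4 + length:
            break  # message not fully received yet
        messages.append(buf[4:4 + length])
        buf = buf[4 + length:]
    return messages, buf

stream = frame(b'Test1') + frame(b'quit')
print(unframe(stream))  # ([b'Test1', b'quit'], b'')
```

On the Java side, `DataOutputStream.writeInt(bytes.length)` followed by `out.write(bytes)` emits the same 4-byte big-endian prefix, so the two ends pair naturally.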
As a side note, I also removed the `running` flag, because the only time
you're ever going to set it, you're also going to `break`, so it's not doing
any good. (But if you _are_ going to test a flag, just use `while running:`,
not `while running == True:`.)
|
Python Hangman Game finishing touches
Question: Here is my code:
def randWord():
"""opens a file of words and chooses a random word from the file"""
infile = open('dictionary.txt','r')
wordList = infile.read()
wordList2 = wordList.split('\n')
infile.close()
randWord = str(random.choice(wordList2))
return randWord
def hangman():
"""initiates the game by explaining the rules and terminates when game is over"""
global roundsWon
global roundsPlayed
print('\nWelcome to hangman! The rules are simple: A word will be chosen at random and will be represented by a sequence of blanks. Each blank constitutes a letter in the word. You will be asked to enter a letter and if the letter is contained in the word you will be notified. You can only make an incorrect guess 8 times before you lose the round. To win the round you must guess all the letters and reveal the word. Good luck!\n\n')
word = randWord()
while True:
guess = letterGuess(word)
if checkGuess(guess,word):
roundsWon += 1
roundsPlayed +=1
print('\nYou won! The word is {}.'.format(word))
break
elif guessesLeft == 0:
print("\nI'm sorry, but you have run out of guesses. The word was {}.".format(word))
roundsPlayed +=1
break
def letterGuess(word):
"""asks the user to guess a letter and prints the number of guesses left"""
blankedWord(word)
guess = input('\nGuess a letter: ')
return guess
def blankedWord(word):
"""converts the random word into the proper blanked form based on the letter guessed and lets the user know if their letter is in the word"""
displayWord=''
for letter in word:
if guessedLetters.find(letter) > -1:
displayWord = displayWord + letter #checks if the letter guessed is contained in the random word string by index.
print('\n{} is contained in the word!'.format(letter))
else:
displayWord = displayWord + '-'
print(displayWord)
def checkGuess(guess,word):
"""checks if the user enters a single letter guess or the full word"""
if len(guess) > 1 and len(guess) == len(word):
return completeWordGuess(guess,word)
else:
return oneLetterGuess(guess, word)
def completeWordGuess(guess,word):
"""analyzes the complete word guess to check if is correct"""
global guessesLeft
if guess.lower() == word.lower(): #kept it lower case for simplicity
return True
else:
guessesLeft -=1
return False
def oneLetterGuess(guess,word):
"""checks to see if the single letter guess is included in the whole word"""
global guessedLetters
global guessesLeft
global guessesMade
if word.find(guess) == -1: #checks for failure on .find function
guessesLeft -= 1
guessesMade += 1
print('\nThat letter is not in the word. You have made {} incorrect guesses and have {} guesses left.'.format(guessesMade,guessesLeft))
guessedLetters = guessedLetters + guess.lower()
if allGuessedLetters(word):
return True
return False
    def allGuessedLetters(word):
        """checks if all of the letters in the word have been uncovered/guessed"""
        for letter in word:
            if guessedLetters.find(letter) == -1: #checks for failure on .find function
                return False
            return True
def gameStats():
"""prints the final statistics of a play session"""
print('\nYou have played {} games and you have won {} rounds!'.format(roundsPlayed,roundsWon))
import random
guessesMade = 0
guessesLeft = 8
roundsPlayed = 0
roundsWon = 0
guessedLetters = ''
userMode = 1
while userMode==1:
if userMode == 1:
hangman()
guessesLeft = 8
guessedLetters = ''
userMode = eval(input('\nEnter 1 to play again, type 0 to end the game: '))
else:
break
gameStats()
The program seemingly runs well except for one part: If the user guesses the
first letter of the word the program treats it as the complete right word and
counts it as a win. So if the word was 'rhino' and I guessed an 'r' it would
show as a win. I don't see the mistake, however I have a feeling it is in the
function completeWordGuess and I am not sure if I should be returning True for
the first conditional. Any help is appreciated.
Answer: I think the main problem - and I say I think because I haven't had a chance to
tear your code apart looking for other ways it's not working, and I probably
won't - is that you've put your return True statement in allGuessedLetters in
the wrong place. This is what you had:
    def allGuessedLetters(word):
        """checks if all of the letters in the word have been uncovered/guessed"""
        for letter in word:
            if guessedLetters.find(letter) == -1: #checks for failure on .find function
                return False
            return True
The problem is that, if the first letter does not return False, the control
flow moves to return True, given that "return True" is a part of every
iteration of the for loop. The only way that method doesn't return True is if
the first letter hasn't been guessed yet.
If you alter it like so:
    def allGuessedLetters(word):
        """checks if all of the letters in the word have been uncovered/guessed"""
        for letter in word:
            if guessedLetters.find(letter) == -1: #checks for failure on .find function
                return False
        return True
the method works as intended, because the control flow only moves to return
True once the entirety of the for loop has been encountered and every letter
has been evaluated, resulting in a termination if any of them don't match.
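The difference is easy to see with stripped-down versions of the two variants (here the guessed letters are passed as a parameter rather than read from a global):

```python
def all_guessed_buggy(word, guessed):
    for letter in word:
        if letter not in guessed:
            return False
        return True  # inside the loop: decided after the FIRST letter only

def all_guessed_fixed(word, guessed):
    for letter in word:
        if letter not in guessed:
            return False
    return True  # after the loop: every letter was checked

print(all_guessed_buggy('rhino', 'r'))      # True -- the premature "win"
print(all_guessed_fixed('rhino', 'r'))      # False
print(all_guessed_fixed('rhino', 'rhino'))  # True
```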
|
python import from the sub-directories
Question: I have the following directories:
|-- project
| |-- __init__.py
| |-- proj1
| | |-- file.py
| | |-- file.py~
| | `-- __init__.py
| `-- proj2
| |-- call.py
| |-- call.py~
| `-- __init__.py
And the call.py has:
from proj1.file import hello
hello('nishant')
And the file.py has.
def hello(arg):
print 'hello ,' + arg
When I try to run it from outside the project directory with `python
project/proj2/call.py`, I get `ImportError: No module named proj1.file`.
Any ideas?
Answer: This is because imports are resolved against the entries in `sys.path` (which
you can extend via the `PYTHONPATH` environment variable). When you run a
script, Python puts the directory containing that script on `sys.path`; here
that is `project/proj2`, so the `proj1` package is never visible.
One fix is to run `call.py` as a module from inside the `project` directory,
so that `project` itself lands on `sys.path`: `python -m proj2.call`
Alternatively, you can append your project root to `sys.path` before doing the
imports:
import sys
sys.path.append(YOUR_PROJECT_ROOT)
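If you don't want to hard-code YOUR_PROJECT_ROOT, a common pattern is to derive it from the script's own location. A sketch (the helper name is mine):

```python
import os
import sys

def project_root(script_path):
    """Return the directory two levels up from a script inside project/proj2/."""
    return os.path.dirname(os.path.dirname(os.path.abspath(script_path)))

# At the top of call.py you would then write:
#   sys.path.insert(0, project_root(__file__))
#   from proj1.file import hello
#   hello('nishant')
```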
|
Exporting in CSV file format for Python
Question: I am trying to export some content in CSV format but am not able to.
Below is a rough sketch of what I am doing in Python.
    script, file = argv
    emails = open(file, "r")  # opens the input file
    writer = csv.writer(open("Results.txt", "w"))  # creates a new file for the results
    resultsList = []
    # ... code that fills resultsList with rows ...
    for result in resultsList:
        writer.writerow(result)
    emails.close()
Now I need to save all the data that is stored in Results.txt in CSV format.
Please provide your feedback on this. Thanks.
Answer: You could use numpy
[`genfromtxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html)
and
[`savetxt`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html):
import numpy as np
np.savetxt('results.csv', np.genfromtxt(file,delimiter='',dtype=None),delimiter=',')
Note you must set the appropriate delimiter in `genfromtxt`
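If numpy isn't available, the standard-library `csv` module does the same job. A minimal sketch (Python 3 style; the rows and filename are made up):

```python
import csv

rows = [
    ['name', 'email'],
    ['Alice', 'alice@example.com'],
    ['Bob', 'bob@example.com'],
]

# newline='' stops csv from emitting extra blank lines on Windows (Python 3);
# on Python 2 you would open the file in 'wb' mode instead.
with open('Results.csv', 'w', newline='') as out:
    csv.writer(out).writerows(rows)
```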
|
Get data from 2 columns from a CSV file. Check whether the entries are same and add the currency amount
Question: I have a data.CSV file containing the following data
> column1,column2,amount, column3
>
> name1,empId1,**34.12** ,241682-27638-CIGGNT
>
> name2,empId2,**22.14** ,241682-27638-OCGGINT
>
> name3,empId3,**18.98** ,241942-37190-GGDIV
>
> name4,empId4,**8.98** ,241942-37190-GGDIV
>
> name5,empId5,**234.098** ,241942-37190-CHYOF
The **bold** entries are the currency amounts. I want to add the **column3**
and **amount** entries into a dictionary in Python. But if an entry in
**column3** repeats, it should add up the currency amounts before inserting
into the dictionary. The output should be as below.
    {'241682-27638-CIGGNT':'34.12','241682-27638-OCGGINT':'22.14','241942-37190-GGDIV':'27.96','241942-37190-CHYOF':'234.098'}
I tried the code below, but it just updates (overwrites) the previous currency
amount instead of adding to it:
import csv
path = "file_path"
file=open( path +"data.CSV", "r")
reader = csv.reader(file)
counterlist = {}
for row in reader :
#print row[3]
#print row[2]
if counterlist.has_key(row[3]):
row[2]+=row[3].value
counterlist.update({row[3]:row[2]})
else:
counterlist.update({row[3]:row[2]})
print counterlist
Thanks in advance.
Answer: Use a
[defaultdict](https://docs.python.org/3/library/collections.html#collections.defaultdict):
from collections import defaultdict
# ...
counterlist = defaultdict(float)
for row in reader:
counterlist[row[3]] += float(row[2])
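Putting it together with the sample data: `next(reader)` skips the header row, which would otherwise fail the `float()` conversion. This sketch reads from an in-memory string, but a real file object works the same way:

```python
import csv
import io
from collections import defaultdict

data = """column1,column2,amount,column3
name1,empId1,34.12,241682-27638-CIGGNT
name2,empId2,22.14,241682-27638-OCGGINT
name3,empId3,18.98,241942-37190-GGDIV
name4,empId4,8.98,241942-37190-GGDIV
name5,empId5,234.098,241942-37190-CHYOF
"""

totals = defaultdict(float)
reader = csv.reader(io.StringIO(data))
next(reader)  # skip the header row
for row in reader:
    totals[row[3]] += float(row[2])

print(dict(totals))
```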
|
How to execute jar file in python multithreading
Question: In my project, I have a jar file (written by another developer) that copies
content from a PDF to a text file. Using Python's multithreading, I tried to
execute this jar.
After I run this script, I can see that the text files are created, but each
file is 0 KB; the contents are never copied. When I run the same jar on the
command line, it works as expected. Can someone please provide a solution?
from threading import Thread
import os
import sys
import time
import urllib2
from lxml import etree, html
import re
import Queue
import traceback
def createfile(x):
try:
file="test_"+str(x)
print "java -jar tika-app-1.1.jar -t --encoding=utf8 \"%s\" > \"%s\" "%("C:\\samplefile.pdf",file)
os.system("java -jar tika-app-1.1.jar -t --encoding=utf8 \"%s\" > \"%s\" "%("C:\tmp\samplefile.pdf",file))
except Exception,e:
print "excet",traceback.format_exc()
def process():
try:
result = Queue.Queue()
threads = [Thread(target=createfile, args=(x,)) for x in range(1,5)]
for t in threads:
t.start()
for t in threads:
t.join()
except:
print "exception",traceback.format_exc()
pass
    if __name__ == '__main__':
        start_time = time.time()
        process()
        end_time = time.time()
        print "Estimate time", end_time - start_time
My output:
Exception in thread "main" java.net.MalformedURLException: unknown protocol: c
at java.net.URL.<init>(Unknown Source)
at java.net.URL.<init>(Unknown Source)
at java.net.URL.<init>(Unknown Source)
at org.apache.tika.cli.TikaCLI.process(TikaCLI.java:393)
at org.apache.tika.cli.TikaCLI.main(TikaCLI.java:101)
Exception in thread "main" java.net.MalformedURLException: unknown protocol: c
at java.net.URL.<init>(Unknown Source)
at java.net.URL.<init>(Unknown Source)
at java.net.URL.<init>(Unknown Source)
at org.apache.tika.cli.TikaCLI.process(TikaCLI.java:393)
at org.apache.tika.cli.TikaCLI.main(TikaCLI.java:101)
Exception in thread "main" java.net.MalformedURLException: unknown protocol: c
at java.net.URL.<init>(Unknown Source)
at java.net.URL.<init>(Unknown Source)
at java.net.URL.<init>(Unknown Source)
at org.apache.tika.cli.TikaCLI.process(TikaCLI.java:393)
at org.apache.tika.cli.TikaCLI.main(TikaCLI.java:101)
Exception in thread "main" java.net.MalformedURLException: unknown protocol: c
at java.net.URL.<init>(Unknown Source)
at java.net.URL.<init>(Unknown Source)
at java.net.URL.<init>(Unknown Source)
at org.apache.tika.cli.TikaCLI.process(TikaCLI.java:393)
at org.apache.tika.cli.TikaCLI.main(TikaCLI.java:101)
Estimate time 1.73799991608
Answer: You tell the Java application to read this file: `C: mpsamplefile.pdf`, because
`\t` is turned into a tab character in a Python string. The Java application
then sees `C:` without a `/` or `\` after it and assumes it must be a URL
scheme (like `http:` or `ftp:`). But no registered protocol handler supports
`c:`, hence the `MalformedURLException`.
To avoid problems like this one, use a raw string (`r'C:\tmp\samplefile.pdf'`)
or `os.path.join()`; note that on Windows the drive letter needs its own
separator, because `os.path.join('C:', 'tmp')` produces the drive-relative
path `'C:tmp'`:
    inputFile = os.path.join('C:\\', 'tmp', 'samplefile.pdf')
Or use `/` instead of `\`; Java on Windows will convert these delimiters when
accessing files.
|
how to call `getattr()` to get a Python MySQLCursor method?
Question: What do I need to do before this Python call to make it succeed:
>>>getattr(MySQLCursor, "fetchall")
If I just make this call at the beginning of a script, it fails. I have a
cursor and I need to programmatically obtain one of it's methods, such as
`fetchall()` from the string, such as `"fetchall"` I don't understand how to
setup this call so it succeeds.
Answer: `getattr(MySQLCursor, "fetchall")` does work:
>>> from mysql.connector.cursor import MySQLCursor
>>> getattr(MySQLCursor, 'fetchall')
<unbound method MySQLCursor.fetchall>
So there it is, an unbound method within the _class_ MySQLCursor.
If you have an _instance_ of the cursor, then you can get a bound method,
which you can call:
>>> from mysql.connector.cursor import MySQLCursor
>>> cursor = MySQLCursor()
>>> cursor
<mysql.connector.cursor.MySQLCursor object at 0x7f9368a86350>
>>> getattr(cursor, 'fetchall')
<bound method MySQLCursor.fetchall of <mysql.connector.cursor.MySQLCursor object at 0x7f9368a86350>>
>>> getattr(cursor, 'fetchall')()
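The same bound/unbound distinction holds for any class, which makes it easy to experiment without a database connection. A toy sketch:

```python
class Cursor(object):
    def __init__(self):
        self.rows = [(1,), (2,)]
    def fetchall(self):
        return self.rows

# Class attribute lookup: a plain function (an unbound method on Python 2).
func = getattr(Cursor, 'fetchall')

# Instance attribute lookup: a bound method, callable with no arguments.
cur = Cursor()
method = getattr(cur, 'fetchall')
print(method())   # [(1,), (2,)]
print(func(cur))  # same result: pass the instance explicitly
```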
|