Time - get yesterday's date
Question: I'm trying to get today's and yesterday's dates using Python's `time` module.
This works for me for today's date:
dt = time.strptime(time.strftime("%d/%m/%Y"),'%d/%m/%Y')
But I don't know how to get yesterday's date. I've found many tutorials where
the `datetime` module is used, but nothing where the `time` module is used.
How can I do that? And is there a better way to get today's date
(`struct_time`)?
Answer: To get yesterday's `struct_time`, use any of the many [existing `datetime`
solutions](http://stackoverflow.com/a/15345064/4279) and call `.timetuple()`
to get a `struct_time`, e.g.:
#!/usr/bin/env python
from datetime import date, timedelta
today = date.today()
yesterday = today - timedelta(1)
print(yesterday.timetuple())
# -> time.struct_time(tm_year=2015, tm_mon=4, tm_mday=22, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=2, tm_yday=112, tm_isdst=-1)
It produces the correct day in the local timezone even around DST transitions.
See [How can I subtract a day from a python
date?](http://stackoverflow.com/a/25427822/4279) if you want to find the
corresponding UTC time (get yesterday as an aware `datetime` object).
* * *
You could also get yesterday using only `time` module (but less directly):
#!/usr/bin/env python
import time
def posix_time(utc_time_tuple):
    """seconds since Epoch as defined by POSIX."""
    # from https://gist.github.com/zed/ff4e35df3887c1f82002
    tm_year = utc_time_tuple.tm_year - 1900
    tm_yday = utc_time_tuple.tm_yday - 1
    tm_hour = utc_time_tuple.tm_hour
    tm_min = utc_time_tuple.tm_min
    tm_sec = utc_time_tuple.tm_sec
    # http://pubs.opengroup.org/stage7tc1/basedefs/V1_chap04.html#tag_04_15
    return (tm_sec + tm_min*60 + tm_hour*3600 + tm_yday*86400 +
            (tm_year-70)*31536000 + ((tm_year-69)//4)*86400 -
            ((tm_year-1)//100)*86400 + ((tm_year+299)//400)*86400)
now = time.localtime()
yesterday = time.gmtime(posix_time(now) - 86400)
print(yesterday)
# -> time.struct_time(tm_year=2015, tm_mon=4, tm_mday=22, tm_hour=22, tm_min=6, tm_sec=16, tm_wday=2, tm_yday=112, tm_isdst=0)
It assumes that `time.gmtime()` accepts POSIX timestamp on the given platform
([Python's stdlib breaks otherwise](http://bugs.python.org/issue22356) e.g.,
if non-POSIX `TZ=right/UTC` is used). `calendar.timegm()` could be used
instead of `posix_time()` but the former may use `datetime` internally.
Note: `yesterday` represents local time in both solutions (`gmtime()` is just
a simple way to implement the subtraction here). Both solutions use naive,
timezone-unaware time objects, so the result may be an ambiguous or even
non-existent time; still, unless the local timezone has skipped yesterday
entirely (e.g., Russia skipped several days in February 1918), the date is
correct anyway.
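The `calendar.timegm()` variant mentioned above would look like this (a minimal sketch, equivalent to `posix_time(now)` in the `time`-only solution):
import calendar
import time

now = time.localtime()
yesterday = time.gmtime(calendar.timegm(now) - 86400)
print(yesterday)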
|
Python use dictionary keys as function names
Question: I would like to be able to use dictionary keys as function names, but I'm not
sure if it's possible. As a quick example, instead of `class().dothis(dictkey,
otherstuff)`, I'd like to have an option for `class().dictkey(otherstuff)`.
Here's a non-working code example to give an idea of what I was thinking of.
class testclass:
    def __init__(self):
        self.dict = {'stuff':'value', 'stuff2':'value2'}
        #I know this part won't work, but it gives the general idea of what I'd like to do
        for key, value in self.dict.iteritems():
            def key():
                #do stuff
                return value
>>> testclass().stuff()
'value'
Obviously each key would need to be checked to make sure it's not overriding anything
important, but other than that, I'd appreciate a bit of help if it's possible
to get this working.
Basically, my script is meant to store other scripts in the headers of the Maya
scene file, so you may call a command and it'll execute the matching script.
It stores the scripts in text format in a dictionary, where I've done a
wrapper-like thing so you can input args and kwargs without much trouble; and
because you can only enter and execute the scripts personally, there's
virtually no danger of anything being malicious unless you do it to yourself.
The list is pickled and base64 encoded, as it all needs to be in string format
for the header, so each time the function is called it decodes the dictionary
so you can edit or read it; that's why, ideally, I'd need the functions rebuilt each
time it is called.
A couple of examples from the run function:
Execute a simple line of code
>>> SceneScript().add("MyScript", "print 5")
>>> SceneScript().run("MyScript")
5
Execute a function with a return
>>> SceneScript().add("MyScript", "def test(x): return x*5")
>>> SceneScript().run("MyScript", "test(10)", "test('c')")
[50, 'ccccc']
Pass a variable to a function command
>>> SceneScript().run("MyScript", 'test(a+b)', a=10, b=-50)
[-200]
Execute a function without a return
>>> SceneScript().add("MyScript", "def test(x): print x*5")
>>> SceneScript().run("MyScript", "test(10)", "test('c')")
50
ccccc
[None, None]
Pass a variable
>>> SceneScript().add("MyScript", "print x")
>>> SceneScript().run("MyScript", x=20)
20
So as this question is asking, in terms of the above code, I'd like to have
something like `SceneScript().MyScript( "test(10)" )`, just to make it easier
to use.
Answer: The only "correct" way I can think of to do this looks like this:
class SomeClass(object):
    def __init__(self, *args, **kwargs):
        funcs = {'funcname': 'returnvalue', ...}
        for func, ret_val in funcs.iteritems():
            setattr(self, func, self.make_function(ret_val))

    @staticmethod
    def make_function(return_value):
        def wrapped_function(*args, **kwargs):
            # do some stuff
            return return_value
        return wrapped_function
This should allow you to do:
>>> foo = SomeClass()
>>> foo.funcname()
'returnvalue'
Of course the question of _why_ you'd want to do something like this remains,
as yet, unanswered :)
**_EDIT_** per updated question:
The problem lies in the fact that you cannot safely assign the method to the
function signature. I'm not sure how `SceneScript().add` works currently, but
that's essentially going to have to tie into this somehow or another.
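If the underlying dictionary changes at runtime (as in the scene-header case described in the question), another option is to resolve the name lazily with `__getattr__` instead of creating the methods up front. A rough sketch only; the `_scripts` dict and the `run()` body here are placeholders for whatever `SceneScript` really does:
class SceneScript(object):
    def __init__(self):
        # placeholder; the real tool decodes this from the scene header
        self._scripts = {'MyScript': 'print 5'}

    def run(self, name, *args, **kwargs):
        # placeholder for the real run() implementation
        print "running %s with %r %r" % (name, args, kwargs)

    def __getattr__(self, name):
        # only called when normal attribute lookup fails
        if name in self._scripts:
            return lambda *args, **kwargs: self.run(name, *args, **kwargs)
        raise AttributeError(name)

SceneScript().MyScript("test(10)")  # same as SceneScript().run("MyScript", "test(10)")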
|
Pass a numpy array to a C function in cython
Question: I'm wrapping a third party camera library interface with Cython so I can call
it from a python program. For the most part things work very well, but I've
hit a snag in my acquireImage() function. I've tried to create a contiguous
numpy array, pass that to the library, and pick up the result after the
library has finished populating it. Finally, I want to copy that array,
reshape it and return it. (The reshape stuff hasn't been coded yet)
Here is my code:
cpixis.pxd:
ctypedef bint rs_bool
ctypedef unsigned short uns16
ctypedef short int16
ctypedef unsigned int uns32
ctypedef unsigned int* uns32_ptr
ctypedef void* void_ptr
ctypedef char* char_ptr
ctypedef short* int16_ptr
ctypedef struct rgn_type:
short s1
short s2
short sbin
short p1
short p2
short pbin
ctypedef rgn_type* rgn_const_ptr
ctypedef rgn_type* rgn_ptr
#cdef CAM_NAME_LEN 32
cdef extern from "/usr/local/pvcam/examples/pvcam.h":
cdef enum cam_open:
OPEN_EXCLUSIVE
cdef enum exposure:
TIMED_MODE, STROBED_MODE, BULB_MODE, TRIGGER_FIRST_MODE, FLASH_MODE, VARIABLE_TIMED_MODE, INT_STROBE_MODE
cdef enum readout:
READOUT_NOT_ACTIVE, EXPOSURE_IN_PROGRESS, READOUT_IN_PROGRESS, READOUT_COMPLETE,
FRAME_AVAILABLE = READOUT_COMPLETE, READOUT_FAILED, ACQUISITION_IN_PROGRESS, MAX_CAMERA_STATUS
rs_bool pl_pvcam_init()
rs_bool pl_pvcam_uninit()
rs_bool pl_cam_get_name(int16 cam_num, char_ptr camera_name)
rs_bool pl_cam_open(char_ptr camera_name, int16_ptr hcam, int16 o_mode)
rs_bool pl_cam_close(int16 hcam)
rs_bool pl_pvcam_uninit()
rs_bool pl_exp_init_seq()
rs_bool pl_exp_setup_seq (int16 hcam, uns16 exp_total,
uns16 rgn_total, rgn_const_ptr rgn_array,
int16 exp_mode, uns32 exposure_time,
uns32_ptr exp_bytes)
rs_bool pl_exp_start_seq (int16 hcam, void_ptr pixel_stream)
rs_bool pl_exp_check_status (int16 hcam, int16_ptr status, uns32_ptr bytes_arrived)
int16 pl_error_code()
rs_bool pl_exp_finish_seq (int16 hcam, void_ptr pixel_stream, int16 hbuf)
rs_bool pl_exp_uninit_seq ()
pixis.pyx:
cimport cpixis
from cpython.mem cimport PyMem_Malloc, PyMem_Free
cimport numpy as np
import ctypes
cdef class Camera:
cdef cpixis.rgn_type* _region
cdef cpixis.int16 _cam_selection
cdef cpixis.int16 _num_frames
cdef cpixis.uns32 _exp_time
cdef char _cam_name[32]
cdef cpixis.int16 _hCam
cdef cpixis.uns32 _size
cdef cpixis.int16 _status
cdef cpixis.uns32 _notNeeded
#cdef cpixis.uns16* _frame
def __cinit__(self):
self._region = <cpixis.rgn_type *> PyMem_Malloc(sizeof(cpixis.rgn_type))
if self._region is NULL:
raise MemoryError()
#self._frame = <cpixis.uns16 *> PyMem_Malloc( self._size *2 )
if self._region is NULL:
raise MemoryError()
self._cam_selection = 0
self._num_frames = 1
self._exp_time = 100
def __dealloc__(self):
cpixis.pl_cam_close(self._hCam)
cpixis.pl_pvcam_uninit()
if self._region is not NULL:
PyMem_Free(self._region)
#if self._frame is not NULL:
# PyMem_Free(self._frame)
def initCamera(self):
if cpixis.pl_pvcam_init() == False:
print "Camera failed to init"
quit()
if cpixis.pl_cam_get_name(self._cam_selection, self._cam_name) == False:
print "Didn't get camera name"
quit()
if cpixis.pl_cam_open(self._cam_name, &self._hCam, cpixis.OPEN_EXCLUSIVE ) == False:
print "Camera did not open"
quit()
def setRegionOfInterest(self, s1, s2, sbin, p1, p2, pbin):
self._region.s1 = s1
self._region.s2 = s2
self._region.sbin = sbin
self._region.p1 = p1
self._region.p2 = p2
self._region.pbin = pbin
def acquireImage(self, exposureTime):
cdef np.ndarray[np.uint16_t, ndim=1] _frame
self._exp_time = exposureTime
if cpixis.pl_exp_init_seq() == False:
print "init_seq Failed"
if cpixis.pl_exp_setup_seq( self._hCam, 1, 1, self._region, cpixis.TIMED_MODE, self._exp_time, &self._size ) == False:
print "Experiment failed"
self.image = np.ndarray(shape=(self._size), dtype=np.uint16, order='C')
self._frame = np.ascontiguousarray(self.image, dtype=ctypes.c_ushort)
cpixis.pl_exp_start_seq( self._hCam, &self._frame[0] ); # start image acqusition
while cpixis.pl_exp_check_status(self._hCam, &self._status, &self._notNeeded) \
and (self._status != cpixis.READOUT_COMPLETE and self._status != cpixis.READOUT_FAILED):
pass
cpixis.pl_exp_finish_seq(self._hCam, &self._frame[0], 0)
cpixis.pl_exp_uninit_seq()
self.image = np.copy(self._frame)
return self.image
Lines 64 and 66 have the same error, which is "**Cannot take address of Python
variable.** " I've been working from another similar question
[here](http://stackoverflow.com/questions/23435756/passing-numpy-integer-array-to-c-code).
Is there a way to make this work, or should I approach this problem
differently?
Thanks!
Answer: In the acquireImage() function, I was using the correct idea, but forgot to
remove the "self" from the `self._frame` references. I had started out with
`_frame` declared as a non-local variable, but then forgot to update the
references in the function when I moved it to a local scope. I removed the
extra `self.` prefixes and the problem went away.
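For reference, the changed statements in `acquireImage()` look roughly like this after the fix (a sketch of just those lines, using the local typed buffer rather than an attribute):
_frame = np.ascontiguousarray(self.image, dtype=ctypes.c_ushort)
cpixis.pl_exp_start_seq( self._hCam, &_frame[0] )   # address of the typed local buffer, not a Python attribute
cpixis.pl_exp_finish_seq(self._hCam, &_frame[0], 0)
self.image = np.copy(_frame)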
|
Read all lines in file and execute in a random order
Question: I would like to create a Python script that reads an input file in a random
order rather than sequentially while running through a loop, so that each time the
script runs it processes the lines in a different random order.
Is this possible?
Answer: This is certainly possible. I'd use `readlines()` to get the lines of the
file and then shuffle them in place with `random.shuffle`. Your script might
look something like:
import random as r
lines = file("/path/to/input/file", 'r').readlines()
r.shuffle(lines)
#Proceed with script
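A variant of the same idea that also closes the file explicitly (a minimal sketch using `with`):
import random

with open("/path/to/input/file") as f:
    lines = f.readlines()
random.shuffle(lines)

for line in lines:
    # proceed with the rest of the script, one randomly ordered line at a time
    print line.rstrip()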
|
Python base deployment techniques/package libraries for different environments like development, production or staging
Question: I am familiar with environment settings for Node.js using npm packages like
"[settings](https://www.npmjs.com/package/settings)". This package allowed me
to import different environment settings based on what the NODE_ENV variable
is set to.
I've been searching for something similar for Python, but most of the
environment-settings tutorials are catered towards Python development on the Django
framework.
The only one that I found close to what I want is
<https://pypi.python.org/pypi/yconf>
However, the config settings for different environments should not be limited to
just development, production and staging. I was wondering if anyone can suggest
similar alternatives, or maybe even argue whether using the Django framework is
relevant in my case.
Answer: I think `virtualenv` is what you need. Click [here](http://docs.python-guide.org/en/latest/dev/virtualenvs/) for more info.
|
Getting the drawing off the main thread
Question: I remember seeing a post somewhere about being able to get the drawing in
Python off the main thread, but I can't seem to find it. My first attempt goes
something like this, but it doesn't work. It doesn't crash initially (it does
eventually) but no drawing takes place. The idea is that `options` is a map of
drawing functions, each of which draws to a pyqtgraph plot, a QtWidget, etc.
from threading import *
from Queue import *
anObject1 = DrawingObject()
anObject2 = DrawingObject()
anObject3 = DrawingObject()
options = {
0 : anObject1.drawing_func,
1 : anObject2.drawing_func,
2 : anObject3.drawing_func,
3 : updateNon,
}
def do_work(item): #item is a tuple with the item at 0 is the index to which function
    #print str(item) + "\n"
    options[item[0]](item)

def worker():
    while True:
        item = q.get()
        do_work(item)
        q.task_done()

q = Queue()

#This function is a callback from C++
def callback(s, tuple):
    #options[tuple[0]](tuple) #this works
    q.put(tuple) #this does not

num_worker_threads = 3
for i in range(num_worker_threads):
    t = Thread(target=worker)
    t.daemon = True
    t.start()
Answer: My understanding is that it is not possible to draw to a QWidget outside the
main GUI thread. You can find many references to this in the Qt forums and
documentation. However, it is possible to start a subprocess that draws into
an image in shared memory, and then display the image in the main process.
This is the approach taken by `pyqtgraph/widgets/RemoteGraphicsView.py`; see
`examples/RemoteSpeedTest.py` for an example of this.
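If you want to keep the worker threads, one common pattern is to let them do the heavy work but hand the results back to the GUI thread with a Qt signal, so the actual drawing still happens on the main thread. A sketch only, assuming PyQt4 and that the connection is made from the GUI thread (cross-thread emits are then delivered queued):
from PyQt4 import QtCore

class DrawDispatcher(QtCore.QObject):
    # carries the tuple produced by the C++ callback
    item_ready = QtCore.pyqtSignal(object)

dispatcher = DrawDispatcher()
dispatcher.item_ready.connect(do_work)  # do_work ends up running in the GUI thread

def callback(s, item):
    # safe to call from any thread; it only emits, it never draws
    dispatcher.item_ready.emit(item)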
|
Sort Average In A file
Question: I have a file with 3 scores for each person. Each person has their own row. I
want to use these scores and get the average of all 3 of them. The scores
are separated by tabs and in descending order. For example:
tam 10 6 11
tom 3 7 3
tim 5 4 6
these people would come out with an average of:
tam 9
tom 5
tim 4
I want these to be printed to the Python shell, but not saved to
the file.
with open("file.txt") as file1:
d = {}
count = 0
for line in file1:
column = line.split()
names = column[0]
average = (int(column[1].strip()) + int(column[2].strip()) + int(column[3].strip()))/3
count = 0
while count < 3:
d.setdefault(names, []).append(average)
count = count + 1
for names, v in sorted(d.items()):
averages = (sum(v)/3)
print(names,average)
averageslist=[]
averageslist.append(averages)
My code only finds the first person's average and outputs it for all of them. I
also want the output to be in descending order of averages.
Answer: You can use the following code, which parses your file into a list of (name,
average) tuples and prints every entry of the list sorted by average:
import operator

with open("file.txt") as f:
    data = []
    for line in f:
        parts = line.split()
        name = parts[0]
        vals = parts[1:]
        avg = sum(int(x) for x in vals)/len(vals)
        data.append((name, avg))

for person in sorted(data, key=operator.itemgetter(1), reverse=True):
    print("{} {}".format(*person))
|
Maya Python: How do i call on and edit attributes of an object in one function when that object was created in another function?
Question: # The Plan:
I've been working on a script that measures the distance between 2 vertices
that the user selects and scales the object up or down based on a
desired length between those vertices.
# The Problem:
The error states that it cannot locate the textFieldButtonGrp object that I create
in one of my functions.
I basically put my window format stuff into a single function:
def window_presets():
'''
presets for UI window
'''
if mc.window("Distance Scale Tool", exists=True):
mc.deleteUI("Distance Scale Tool")
mc.window("Distance Scale Tool", t="Distance Based Scale Tool")
mc.rowColumnLayout(numberOfColumns=2,
columnAttach=(1, 'left', 0),
columnWidth=[(1,100), (2,300)])
mc.text(l="Current Length")
current_length = mc.textFieldButtonGrp("Current Length",
editable=False,
text="{0}".format(refresh_current_dist()),
buttonLabel="Refresh",
buttonCommand=refresh_current_dist)
mc.text(l="Desired Length")
desired_length = mc.textFieldButtonGrp("Desired Length",
buttonLabel="Scale",
buttonCommand=scale_dist,
tcc=refresh_scale_factor)
mc.showWindow()
I want the refresh button to call another function that edits the
textFieldButtonGrp that I created:
def refresh_textfield(distance):
if mc.textFieldButtonGrp("Current Length", exists=True):
mc.textFieldButtonGrp("Current Length",
edit=True,
text="{0}".format(distance))
else:
print "Current Length dont exist"
but "Current Length".... it doesnt seem to exist....
same with "Desired Length"....
Heres the full script:
## ((Ax - Bx)**2 + (Ay - By)**2 + (Az - Bz)**2)**0.5
import maya.cmds as mc
import math
def window_presets():
'''
presets for UI window
'''
if mc.window("Distance Scale Tool", exists=True):
mc.deleteUI("Distance Scale Tool")
mc.window("Distance Scale Tool", t="Distance Based Scale Tool")
mc.rowColumnLayout(numberOfColumns=2,
columnAttach=(1, 'left', 0),
columnWidth=[(1,100), (2,300)])
mc.text(l="Current Length")
current_length = mc.textFieldButtonGrp("Current Length",
editable=False,
text="{0}".format(refresh_current_dist()),
buttonLabel="Refresh", buttonCommand=refresh_current_dist)
mc.text(l="Desired Length")
desired_length = mc.textFieldButtonGrp("Desired Length",
buttonLabel="Scale",
buttonCommand=scale_dist,
tcc=refresh_scale_factor)
mc.showWindow()
def get_object_name():
selPoints = mc.ls(sl=True)
obj_name = selPoints[0].split('.')[0]
return obj_name
def get_coordinates():
'''
Gets coordinates of selected points and gets distance between them
'''
selPoints = mc.ls(sl=True)
obj_name = get_object_name()
print obj_name
vtxCoordList = mc.xform(selPoints,
query=True,
translation=True,
ws=True)
Ax, Ay, Az = vtxCoordList[:-3]
Bx, By, Bz = vtxCoordList[3:]
return (Ax, Bx, Ay, By, Az, Bz)
def calculate_distance(Ax, Bx, Ay, By, Az, Bz):
'''
Determines distance between 2 coordinates on single mesh.
Below are formulas for distance based on single axis:
dx = ((Ax - Bx)**2)**0.5
print "Distance on X axis is: {0}".format(dx) #distance on X axis
dy = ((Ay - By)**2)**0.5
print "Distance on Y axis is: {0}".format(dy) #distance on Y axis
dz = ((Az - Bz)**2)**0.5
print "Distance on Z axis is: {0}".format(dz) #distance on Z axis
'''
distance = math.sqrt((Ax - Bx)**2 + (Ay - By)**2 + (Az - Bz)**2)
print "the distance between points is {0}".format(distance)
return distance
def refresh_textfield(distance):
if mc.textFieldButtonGrp("Current Length", exists=True):
mc.textFieldButtonGrp("Current Length",
edit=True,
text="{0}".format(distance))
else:
print "Current Length dont exist"
def refresh_current_dist():
'''
returns current distance
'''
current_coordinates = get_coordinates()
current_distance = calculate_distance(*current_coordinates)
refresh_textfield(current_distance)
return current_distance
def refresh_scale_factor(sf):
'''
returns factor by which object will be scaled
'''
current_distance = refresh_current_dist()
scale_factor = (float(sf))/(float(current_distance))
print "dist btwn pnts is d: {0}".format(current_distance)
print "sf is {0}".format(sf)
print "user input is {0}".format(sf)
print "scale factor is {0}".format(scale_factor)
print "-"*10
return scale_factor
def scale_dist():
'''
scale object to match measurement
'''
user_input = float(mc.textFieldButtonGrp("Desired Length",
query=True,
text=True))
scale_factor = refreshScaleFactor(user_input)
mc.makeIdentity(get_object_name(),
apply=True,
translate=1,
rotate=1,
scale=1,
normal=0,
preserveNormals=1)#freeze transformations
mc.DeleteAllHistory()
mc.scale(scale_factor, scale_factor, scale_factor, get_object_name())
print "you scaled by {0}".format(scale_factor)
mc.makeIdentity(get_object_name(),
apply = True,
translate=1,
rotate=1,
scale=1,
normal=0,
preserveNormals=1)#freeze transformations
if __name__ == '__main__':
window_presets()
Answer: **Solution:**
Remove the space in `"Current Length"` and this will fix your error.
**Naming notes:**
Consider giving labels the same kind of naming you use for
functions. I usually name them this way:
`"<mine or company initials>_<ToolName>_<WidgetName>"`
In your case this will be something like
`"ak_VertexDistance_InputCurrentLength"`.
**Why this naming?** A few months ago I was writing a script to save a Maya
scene somewhere on the network. I was trying to add some items to an
`optionMenu` in my window, but whatever I tried, the `optionMenu`
remained empty. After two hours of unsuccessful research, I realised that
the items were being added to another `optionMenu` in another one of my tools. The
widgets had the same generic name.
Your initials are optional, but adding a `<ToolName>` is in my opinion
mandatory if you want to differentiate the widgets of your different tools.
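Concretely, the control could be created and edited with a space-free name while keeping the visible label (a sketch following the naming suggested above):
current_length = mc.textFieldButtonGrp("ak_VertexDistance_CurrentLength",
                                       label="Current Length",
                                       editable=False,
                                       buttonLabel="Refresh",
                                       buttonCommand=refresh_current_dist)

# and later, in refresh_textfield():
if mc.textFieldButtonGrp("ak_VertexDistance_CurrentLength", exists=True):
    mc.textFieldButtonGrp("ak_VertexDistance_CurrentLength",
                          edit=True,
                          text="{0}".format(distance))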
|
XLRD - Extract Data from two columns into a dict with multiple values
Question: I have an excel sheet with data that looks like this.
Column1 Column2 Column3
1 23 1
1 5 2
1 2 3
1 19 5
2 56 1
2 22 2
3 2 4
3 14 5
4 59 1
5 44 1
5 1 2
5 87 3
What I'd like to do is to extract Column1 and Column3 into a dictionary with
keys that have multiple values. Something like this:
1: 1,2,3,5
2: 1,2
3: 4,5
4: 1
5: 1,2,3
I'm new to Python, so any help you can provide would be greatly appreciated!
I can extract the data from the two columns and put it into a dict, but I'm unsure
about handling multiple values per key. I'm also unsure how to group all of the
1's, 2's and 3's from Column1 into a single entry.
for rownum in range(sheet.nrows):
results = dict((sheet.cell_value(rownum, 0), sheet.cell_value(rownum, 2)) for rownum in range(sheet.nrows))
return results
EDIT: Thanks to the help of this website, this is where I currently sit.
xl = pandas.read_excel(r"e:\py_projects\py_test\test_data.xlsx", sheetname='stockTestColumn1')
grouped = xl.groupby("columnid")
myData = grouped["volumeid"].apply(lambda x: [e for e in x])
What I'm now hoping to do is check for the presence of x for any given key.
So, if this is the output:
1: 1,2,3,4,5
2: 1,2
3: 4,5
4: 1
5: 1,2,3
I'm hoping to do something like:
check = myData.get('1')
if '4' in check:
print "do something"
Answer: This is a typical use case of pandas [groupby](http://pandas.pydata.org/pandas-docs/stable/groupby.html):
#!/usr/bin/python
import pandas as pd
from StringIO import StringIO
s = """Column1 Column2 Column3
1 23 1
1 5 2
1 2 3
1 19 5
2 56 1
2 22 2
3 2 4
3 14 5
4 59 1
5 44 1
5 1 2
5 87 3
"""
sio = StringIO(s)
df = pd.read_table(sio, sep=r"\s+")
grouped = df.groupby("Column1")
print grouped["Column3"].apply(lambda x: [e for e in x])
output:
1 [1, 2, 3, 5]
2 [1, 2]
3 [4, 5]
4 [1]
5 [1, 2, 3]
Check the `sum` signature: `apply` is actually passed an iterator. Basically,
pandas is the right tool for this kind of tabular data processing.
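For the follow-up check in the question (testing whether a given value occurs for a key), the result of the `apply` above is a `Series` of lists indexed by `Column1`, so a plain membership test works (a sketch reusing `grouped` from above):
lists = grouped["Column3"].apply(lambda x: [e for e in x])
if 4 in lists[3]:  # key 3 holds [4, 5] in the example data
    print "do something"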
|
plain Password is logged via my python script
Question: I have a sample Python script: sample.py. The script takes the user name and
password as arguments to connect to some remote server. When I run the script as
`sample.py --username --password`, the password is logged in the Linux messages
files. I understand this is Linux behavior, but I'm wondering if we can do
anything within the script to avoid this logging. One way I can think of is to
provide the password interactively. Any other suggestions?
Answer: Python has the
[getpass](https://docs.python.org/2/library/getpass.html#module-getpass)
module to handle both user and password information, though these are provided
in an interactive way, as you suggested, but with an additional layer of
security. It's also quite simple to use, having the `getpass` and `getuser`
methods available.
Example:
import getpass
password = getpass.getpass() # Prompts "Password:" automatically
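If you also need the user name without prompting, `getuser` reads it from the environment (a one-line sketch):
user = getpass.getuser() # login name from environment variables, no prompt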
|
Default parameter doesn't work
Question: I got the following problem in Python code.
The error is:
Traceback (most recent call last):
  File "cmd.py", line 16, in <module>
    func(b="{cmd} is entered ...") # Error here
  File "cmd.py", line 5, in func
    exceptMsg = b.format(errStr=errStr, debugStr=debugStr)
KeyError: 'cmd'
The code:
import re

def func(errStr = "A string", b = "{errStr}\n{debugStr}"):
    debugStr = "Debug string"
    exceptMsg = b.format(errStr=errStr, debugStr=debugStr)
    raise ValueError(exceptMsg)

try:
    '''
    Case 1: If user invokes func() like below, error produced.
    Possible explanation: Parameter b of func() is looking for keyword
    'errStr' further down in func() body, but I am passing it keyword
    'cmd' instead. What to change to make the code work?
    '''
    #cmd = "A crazy string" # Comment, make code pass
    #func(b="{cmd} is entered ...") # Error here

    # Case 2: If user invokes func() like below, OK.
    errStr = "A crazy string"
    func(b="{errStr} is entered")
except ValueError as e:
    err_msg_match = re.search('A string is entered:', e.message)
    print "Exception discovered and caught!"
1) If the function interface func() is preserved, what code should change?
2) If I must modify the function interface, how would I go about making it a clean
code change?
Answer: `b.format(errStr=errStr, debugStr=debugStr)` only passes `errStr` and
`debugStr` to replace placeholders. If `b` contains any other placeholder
variables it will fail.
You have:
b = "{cmd} is entered ..."
There is nothing to match `{cmd}`
If you wanted to pass `cmd` to `func`, you can do it with [keyword
arguments](http://stackoverflow.com/questions/1769403/understanding-kwargs-in-python):
def func(errStr = "A string", b = "{errStr}\n{debugStr}", **kwargs):
    debugStr = "Debug string"
    exceptMsg = b.format(errStr=errStr, debugStr=debugStr, **kwargs)
    raise ValueError(exceptMsg)
And use as:
func(b="{cmd} is entered ...", cmd="A crazy string")
|
How to construct SOAP message with pysimplesoap?
Question: I'm trying to call a SOAP service from the Dutch government land register
([WSDL here](http://www1.kadaster.nl/1/schemas/kik-inzage/20141101/verzoekTotInformatie-2.1.wsdl)) with
[PySimpleSoap](https://github.com/pysimplesoap/pysimplesoap). So far I did
this to connect:
from pysimplesoap.client import SoapClient
client = SoapClient(wsdl='http://www1.kadaster.nl/1/schemas/kik-inzage/20141101/verzoekTotInformatie-2.1.wsdl')
and with the help of [an awesome answer by Plamen
Petrov](http://stackoverflow.com/a/29842991/1650012) I now understand I need
to send the xml below using the `client.VerzoekTotInformatie()` method.
What I do not understand however, is how I can get the desired XML (see
below). I can of course build it manually, but I've got the feeling that there
is a smarter/more pythonic way of constructing that. Can I use pysimplesoap to
construct this message xml?
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns="http://www.kadaster.nl/schemas/kik-inzage/20141101" xmlns:v20="http://www.kadaster.nl/schemas/kik-inzage/ip-aanvraag/v20141101">
<soapenv:Header/>
<soapenv:Body>
<ns:VerzoekTotInformatieRequest>
<v20:Aanvraag>
<v20:berichtversie>?</v20:berichtversie>
<v20:klantReferentie>ABC</v20:klantReferentie>
<v20:productAanduiding>?</v20:productAanduiding>
<v20:Ingang>
<v20:Object>
<v20:IMKAD_KadastraleAanduiding>
<v20:gemeente>Amsterdam</v20:gemeente>
<v20:sectie>123</v20:sectie>
<v20:perceelnummer>456</v20:perceelnummer>
<v20:appartementsindex>789</v20:appartementsindex>
<v20:deelperceelnummer>10</v20:deelperceelnummer>
<v20:AKRKadastraleGemeenteCode>20</v20:AKRKadastraleGemeenteCode>
</v20:IMKAD_KadastraleAanduiding>
</v20:Object>
</v20:Ingang>
</v20:Aanvraag>
</ns:VerzoekTotInformatieRequest>
</soapenv:Body>
</soapenv:Envelope>
[EDIT]
Following the examples in [the
docs](https://code.google.com/p/pysimplesoap/wiki/SoapClient) I now try adding
the VerzoekTotInformatieRequest with a `berichtversie` in it, after which I
tried to do a request to the soap-service. But as you can see below, the body
still only has an empty `<VerzoekTotInformatie>` (no `Request` in it), plus I
get a massive error. Any ideas how I can build the message above?
>>> client['VerzoekTotInformatieRequest'] = {'Aanvraag': {'berichtversie': 'yay'}}
>>> c.VerzoekTotInformatie()
INFO:pysimplesoap.client:POST https://service1.kadaster.nl/kik/inzage/20141101/VerzoekTotInformatieService
DEBUG:pysimplesoap.client:SOAPAction: "VerzoekTotInformatie"
Content-length: 378
Content-type: text/xml; charset="UTF-8"
DEBUG:pysimplesoap.client:<?xml version="1.0" encoding="UTF-8"?><soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soap:Header/>
<soap:Body>
<VerzoekTotInformatie xmlns="http://www.kadaster.nl/schemas/kik-inzage/20141101">
</VerzoekTotInformatie>
</soap:Body>
</soap:Envelope>
DEBUG:pysimplesoap.client:date: Fri, 24 Apr 2015 12:51:05 GMT
status: 404
content-length: 956
content-type: text/html;charset=utf-8
DEBUG:pysimplesoap.client:<html><head><title>JBossWeb/2.0.0.GA_CP05 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;b
ackground-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-s
erif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 404 - </h1><HR si
ze="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u></u></p><p><b>description</b> <u>The requested resource () is not available.</u></p><HR size="1" noshade="noshade"><h3>JBossWeb/2.0.0.GA_CP05</h3></body></html>
ERROR:pysimplesoap.simplexml:<html><head><title>JBossWeb/2.0.0.GA_CP05 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:whit
e;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,san
s-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 404 - </h1><HR
size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u></u></p><p><b>description</b> <u>The requested resource () is not available.</u></p><HR size="1" noshade="noshade"><h3>JBossWeb/2.0.0.GA_CP05</h3></body></html>
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/Library/Python/2.7/site-packages/pysimplesoap/client.py", line 181, in <lambda>
return lambda *args, **kwargs: self.wsdl_call(attr, *args, **kwargs)
File "/Library/Python/2.7/site-packages/pysimplesoap/client.py", line 346, in wsdl_call
return self.wsdl_call_with_args(method, args, kwargs)
File "/Library/Python/2.7/site-packages/pysimplesoap/client.py", line 370, in wsdl_call_with_args
response = self.call(method, *params)
File "/Library/Python/2.7/site-packages/pysimplesoap/client.py", line 262, in call
jetty=self.__soap_server in ('jetty',))
File "/Library/Python/2.7/site-packages/pysimplesoap/simplexml.py", line 56, in __init__
self.__document = xml.dom.minidom.parseString(text)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/minidom.py", line 1928, in parseString
return expatbuilder.parseString(string)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/expatbuilder.py", line 940, in parseString
return builder.parseString(string)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/xml/dom/expatbuilder.py", line 223, in parseString
parser.Parse(string, True)
ExpatError: mismatched tag: line 1, column 944
Answer: Constructing the XML by hand is not necessary (nor the correct way) to call a SOAP method.
`PySimpleSoap` already provides quite an elegant and human-readable way to
do this:
client = SoapClient(wsdl='http://www1.kadaster.nl/1/schemas/kik-inzage/20141101/verzoekTotInformatie-2.1.wsdl', trace=True)
client.VerzoekTotInformatie(Aanvraag={'berichtversie':4.7,
'klantReferentie':'cum murmure',
'productAanduiding': 'aeoliam venit'})
The debug log would be like this:
INFO:pysimplesoap.client:POST https://service1.kadaster.nl/kik/inzage/20141101/VerzoekTotInformatieService
DEBUG:pysimplesoap.client:SOAPAction: "VerzoekTotInformatie"
Content-length: 842
Content-type: text/xml; charset="UTF-8"
DEBUG:pysimplesoap.client:<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soap:Header/>
<soap:Body>
<VerzoekTotInformatieRequest xmlns="http://www.kadaster.nl/schemas/kik-inzage/20141101">
<Aanvraag xmlns="http://www.kadaster.nl/schemas/kik-inzage/ip-aanvraag/v20141101">
<berichtversie xmlns="http://www.kadaster.nl/schemas/kik-inzage/ip-aanvraag/v20141101">4.7000000000</berichtversie>
<klantReferentie xmlns="http://www.kadaster.nl/schemas/kik-inzage/ip-aanvraag/v20141101">cum murmure</klantReferentie>
<productAanduiding xmlns="http://www.kadaster.nl/schemas/kik-inzage/ip-aanvraag/v20141101">aeoliam venit</productAanduiding>
</Aanvraag>
</VerzoekTotInformatieRequest>
</soap:Body>
</soap:Envelope>
As you can see, the xml is automatically constructed and sent to server.
However I got `401: Unauthorized` error, which you may know how to fix.
|
Access dictionary keys and values in pandas dataframe column
Question: I've got a simple dataframe with a column populated by a python dictionary, in
the form:
User CLang
111 {u'en': 1}
112 {u'en': 1, u'es': 1}
112 {u'en': 1, u'es': 1}
113 {u'zh': 1, u'ja': 1, u'es': 2}
113 {u'zh': 1, u'ja': 1, u'es': 2}
113 {u'zh': 1, u'ja': 1, u'es': 2}
114 {u'es': 1}
113 {u'zh': 1, u'ja': 1, u'es': 2}
The `CLang` column contains the frequency of different values for each user.
How can I access individual keys and values of the `CLang` column? For
instance, I would like to group by `User` and the most frequent value inside
the dictionary, in a form like:
g = df.groupby(['User','CLang'])
then counting the number of occurrences for each value:
d = g.size().unstack().fillna(0)
The resulting dataframe would appear as:
DLang en es
User
111 1 0
112 1 1
113 0 4
114 0 1
Answer: I'm not completely sure I understood correctly what you want your output to be,
and I also don't think using `dict` in a `pandas.DataFrame` is a very good idea
in general.
Reshaping your `DataFrame` into something more _pandas-like_ would be better;
you would then be able to use `pandas` methods to solve this problem.
Anyway, if you really want to do it, here's a (not very elegant) way:
In [1]: import pandas as pd
In [2]: l1 = [111, 112, 112, 113, 113, 113, 114, 113]
In [3]: l2 = [{'en': 1},
{'en': 1, 'es': 1},
{'en': 1, 'es': 1},
{'es': 2, 'ja': 1, 'zh': 1},
{'es': 2, 'ja': 1, 'zh': 1},
{'es': 2, 'ja': 1, 'zh': 1},
{'es': 1},
{'es': 2, 'ja': 1, 'zh': 1}]
In [4]: df = pd.DataFrame({'User': l1, 'CLang': l2})
In [5]: df
Out[5]:
User CLang
0 111 {u'en': 1}
1 112 {u'en': 1, u'es': 1}
2 112 {u'en': 1, u'es': 1}
3 113 {u'zh': 1, u'ja': 1, u'es': 2}
4 113 {u'zh': 1, u'ja': 1, u'es': 2}
5 113 {u'zh': 1, u'ja': 1, u'es': 2}
6 114 {u'es': 1}
7 113 {u'zh': 1, u'ja': 1, u'es': 2}
In [6]: def whatever(row):
   ....:     tmp_d = {}
   ....:     for d in row.values:
   ....:         for k in d.keys():
   ....:             if k in tmp_d.keys():
   ....:                 tmp_d[k] += 1
   ....:             else:
   ....:                 tmp_d[k] = 1
   ....:     return tmp_d
In [7]: new_df = df.groupby('User')['CLang'].apply(whatever).unstack().fillna(0)
In [8]: new_df
Out[8]:
en es ja zh
User
111 1 0 0 0
112 2 2 0 0
113 0 4 4 4
114 0 1 0 0
If you then want to know which `CLang` had the most occurrences, you can (also
not very elegantly, since `list`s in a `DataFrame` should be avoided) do:
In [9]: def whatever2(row):
   ....:     tmp_d = {}
   ....:     for i, v in zip(row.index, row.values):
   ....:         if v in tmp_d.keys():
   ....:             tmp_d[v].append(i)
   ....:         else:
   ....:             tmp_d[v] = [i]
   ....:     highest = max(tmp_d.keys())
   ....:     return tmp_d[highest]
In [10]: new_df['Most_Used_CLang'] = new_df.apply(whatever2, axis=1)
In [11]: new_df
Out[11]:
en es ja zh Most_Used_CLang
User
111 1 0 0 0 [en]
112 2 2 0 0 [en, es]
113 0 4 4 4 [es, ja, zh]
114 0 1 0 0 [es]
|
Read filenames from CSV and then copy the files to different directory
Question: I have been able to write a batch file to find files and put the file paths
into a CSV. I haven't been able to figure out how to read the file locations
from the CSV and then move the files to a different storage device with the
same folder structure using python. This is what I'd like to do.
I wish I had some code to show you but none of it has worked.
Answer: Here's a quick and dirty solution. (I haven't tested it yet, YMMV!)
import csv
import os
import shutil
import sys
def main(argv):
    # TODO: this should do some error checking or maybe use optparse
    csv_file, existing_path_prefix, new_path_prefix = argv[1:]

    with open(csv_file, 'rb') as f:
        reader = csv.reader(f)
        for row in reader:
            # Assuming the column in the CSV file we want is the first one
            filename = row[0]
            relative_name = filename
            if relative_name.startswith(existing_path_prefix):
                relative_name = relative_name[len(existing_path_prefix):]
            new_filename = os.path.join(new_path_prefix, relative_name)

            print 'Copying %s to %s...' % (filename, new_filename),
            # copy from the original full path, not the stripped relative name
            shutil.copy(filename, new_filename)
            print 'done.'

    print 'All done!'

if __name__ == '__main__':
    main(sys.argv)
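The question also asks to keep the same folder structure on the destination; the target directories have to exist before `shutil.copy` will work, so something like this could go just before the copy (a hedged sketch):
target_dir = os.path.dirname(new_filename)
if target_dir and not os.path.isdir(target_dir):
    os.makedirs(target_dir)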
|
SAP - Python sapnwrfc and pyrfc fail
Question: I know that the newest library for connecting Python and SAP is
[PyRFC](http://sap.github.io/PyRFC/). I am using Windows to develop a Django
app, and when I try to install `pyrfc-1.9.3-py2.7-win32.egg` (the build that
corresponds to my system), I get an error while importing the library; the
error is shown because the modules that pyrfc imports are missing. I followed
the whole README but I have no idea how to use this library.
I decided to use [sapnwrfc](https://pypi.python.org/pypi/sapnwrfc/) instead,
so I downloaded the source and compiled it with MinGW. It installed pretty well
and I can now establish a connection with SAP, but there are errors when calling
an RFC function.
This is my code
def results(request):
sapnwrfc.base.config_location = BASE_DIR+'\\sap.yml'
sapnwrfc.base.load_config()
try:
conn = sapnwrfc.base.rfc_connect()
fd = conn.discover("ZRFC_TEST_TODO")
f = fd.create_function_call()
f.QUERY_TABLE("IT_SUCS") # ERROR IN THIS LINE
f.ROWCOUNT(50)
f.OPTIONS=([{'BUKRS': "TVN"}])
f.invoke()
d = f.DATA
todo = {'results': d}
conn.close()
except sapnwrfc.RFCCommunicationError as e:
todo = {'error':e}
return render_to_response(json.dumps(todo), content_type='application/json')
The error is `'NoneType' object is not callable` and if I change it to:
f.QUERY_TABLE="IT_SUCS"
f.ROWCOUNT=50
f.OPTIONS=[{'BUKRS': "TVN"}]
f.invoke()
Then the error disappears BUT `f` is always null.
I need to get some tables from an RFC from SAP. Any ideas how to solve this? Is
there another way to do it? Maybe another library?
**UPDATE** After testing and debugging I think that the `fd` variable is not
being initialized correctly, because when I try to see `fd`'s attributes,
Python stops and an error is shown (`TypeError: 'NoneType' object is not
callable`). This is just a theory, I am not sure about it.
Answer: A bit late...
def get_table():
    QUERY_TABLE = 'MY_TABLE'
    Fields = ['MY_FIELD']
    FIELDS = [{'FIELDNAME': x} for x in Fields]
    get_data = conn.call("RFC_READ_TABLE", DELIMITER='|', FIELDS=FIELDS, QUERY_TABLE=QUERY_TABLE, )
    get_data = get_data['something in get_data']
    # here format your data, for x in ...
|
Accessing .zipx with Python
Question: I'm attempting to write a very simple script that counts the number of
entries/files a given ZIP file has, for some statistics.
I'm using the `zipfile` library, and I'm running into this problem where the
library appears not to support .zipx format.
bash-3.1$ python zipcount.py t.zipx
Traceback (most recent call last):
File "zipcount.py", line 10, in <module>
zipCount(file)
File "zipcount.py", line 5, in zipCount
with ZipFile(file, "r") as zf:
File "c:\Python34\lib\zipfile.py", line 937, in __init__
self._RealGetContents()
File "c:\Python34\lib\zipfile.py", line 978, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file
Googling for help reveals that the `zipx` format is not the same as `zip`, and
so maybe I shouldn't be expecting this to work. Further googling though fails
to bring up a library that actually _can_ deal with `zipx`. Searching stack
overflow didn't find much either.
I can't possibly be the only person who wants to manipulate zipx files in
python, right? Any suggestions?
Answer: [chilkat](http://www.chilkatsoft.com/) might work for this. It's **_not a free
library_** but there is a 30 day trial. Here is an example from
<http://www.example-code.com/python/ppmd_compress_file.asp>:
import sys
import chilkat

compress = chilkat.CkCompression()

# Any string argument automatically begins a 30-day trial.
success = compress.UnlockComponent("30-day trial")
if (success != True):
    print "Compression component unlock failed"
    sys.exit()

compress.put_Algorithm("ppmd")

# Decompress back to the original:
success = compress.DecompressFile("t.zipx", "t")
if (success != True):
    print compress.lastErrorText()
    sys.exit()

print "Success!"
The API documentation:
<http://www.chilkatsoft.com/refdoc/pythonCkCompressionRef.html>
|
'Thread' object has no attribute '_children' - django + scikit-learn
Question: I'm having problems with a django application that uses a random forest
classifier (<http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html>)
to classify items. The error that I'm receiving says:
AttributeError at /items/
'Thread' object has no attribute '_children'
Request Method: POST
Request URL: http://localhost:8000/items/
Django Version: 1.7.6
Exception Type: AttributeError
Exception Value:
'Thread' object has no attribute '_children'
Exception Location: /usr/lib/python2.7/multiprocessing/dummy/__init__.py in start, line 73
Python Executable: /home/cristian/env/bin/python
Python Version: 2.7.3
Python Path:
['/home/cristian/filters',
'/home/cristian/env/lib/python2.7',
'/home/cristian/env/lib/python2.7/plat-linux2',
'/home/cristian/env/lib/python2.7/lib-tk',
'/home/cristian/env/lib/python2.7/lib-old',
'/home/cristian/env/lib/python2.7/lib-dynload',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-linux2',
'/usr/lib/python2.7/lib-tk',
'/home/cristian/env/local/lib/python2.7/site-packages']
Server time: Fri, 24 Apr 2015 16:08:20 +0000
The problem is that I'm not using threads at all. This is the code:
def item_to_dict(item):
item_dict = {}
for key in item:
value = item[key]
# fix encoding
if isinstance(value, unicode):
value = value.encode('utf-8')
item_dict[key] = [value]
return item_dict
def load_classifier(filter_name):
clf = joblib.load(os.path.join(CLASSIFIERS_PATH, filter_name, 'random_forest.100k.' + filter_name.lower() + '.pkl'))
return clf
@api_view(['POST'])
def classify_item(request):
"""
Classify item
"""
if request.method == 'POST':
serializer = ItemSerializer(data=request.data['item'])
if serializer.is_valid():
# get item and filter_name
item = serializer.data
filter_name = request.data['filter']
item_dict = item_to_dict(item)
clf = load_classifier(filter_name)
# score item
y_pred = clf.predict_proba(pd.DataFrame(item_dict))
item_score = y_pred[0][1]
# create and save classification
classification = Classification(classifier_name=filter_name,score=item_score,item_id=item['_id'])
classification_serializer = ClassificationSerializer(classification)
return Response(classification_serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
I'm able to print out the "clf" and "item_dict" variables and everything seems
OK. The error is raised when I call the classifier's "predict_proba" method.
One important thing to add is that I don't receive the error when I run the
server and send the POST request for the first time.
Here's the full traceback:
File "/home/cristian/env/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
line 111. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/cristian/env/local/lib/python2.7/site-packages/django/views/decorators/csrf.py" in wrapped_view
line 57. return view_func(*args, **kwargs)
File "/home/cristian/env/local/lib/python2.7/site-packages/django/views/generic/base.py" in view
line 69. return self.dispatch(request, *args, **kwargs)
File "/home/cristian/env/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
line 452. response = self.handle_exception(exc)
File "/home/cristian/env/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
line 449. response = handler(request, *args, **kwargs)
File "/home/cristian/env/local/lib/python2.7/site-packages/rest_framework/decorators.py" in handler
line 50. return func(*args, **kwargs)
File "/home/cristian/filters/classifiers/views.py" in classify_item
line 70. y_pred = clf.predict_proba(pd.DataFrame(item_dict))
File "/home/cristian/env/local/lib/python2.7/site-packages/sklearn/pipeline.py" in predict_proba
line 159. return self.steps[-1][-1].predict_proba(Xt)
File "/home/cristian/env/local/lib/python2.7/site-packages/sklearn/ensemble/forest.py" in predict_proba
line 468. for i in range(n_jobs))
File "/home/cristian/env/local/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py" in __call__
line 568. self._pool = ThreadPool(n_jobs)
File "/usr/lib/python2.7/multiprocessing/pool.py" in __init__
line 685. Pool.__init__(self, processes, initializer, initargs)
File "/usr/lib/python2.7/multiprocessing/pool.py" in __init__
line 136. self._repopulate_pool()
File "/usr/lib/python2.7/multiprocessing/pool.py" in _repopulate_pool
line 199. w.start()
File "/usr/lib/python2.7/multiprocessing/dummy/__init__.py" in start
line 73. self._parent._children[self] = None
Exception Type: AttributeError at /items/
Exception Value: 'Thread' object has no attribute '_children'
Answer: As a workaround, you can disable the threading at prediction time with:
clf = load_classifier(filter_name)
clf.set_params(n_jobs=1)
y_pred = clf.predict_proba(pd.DataFrame(item_dict))
Also note, calling `load_classifier` on each request might be expensive, as it
actually loads the model from disk.
You can pass `mmap_mode='r'` to `joblib.load` to memory map the data from the
disk. It will make it possible to load the model only once even if you have
concurrent requests accessing the same model parameters concurrently (both
with different threads and different Python processes if you use something
like gunicorn).
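A minimal sketch of that idea (a module-level cache with memory mapping; `CLASSIFIERS_PATH` and the file name are copied from the question's `load_classifier`):
import os
from sklearn.externals import joblib

_CLASSIFIERS = {}  # loaded once per process

def get_classifier(filter_name):
    if filter_name not in _CLASSIFIERS:
        path = os.path.join(CLASSIFIERS_PATH, filter_name,
                            'random_forest.100k.' + filter_name.lower() + '.pkl')
        clf = joblib.load(path, mmap_mode='r')  # memory-map the model arrays
        clf.set_params(n_jobs=1)  # same threading workaround as above
        _CLASSIFIERS[filter_name] = clf
    return _CLASSIFIERS[filter_name]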
|
Python trouble converting milliseconds to datetime and back
Question: So I have two functions for converting python `datetime.datetime()` objects to
and from milliseconds. I cannot figure out where this is going wrong. Here's
what I'm working with:
>>> import datetime
>>> def mil_to_date(mil):
"""date items from REST services are reported in milliseconds,
this function will convert milliseconds to datetime objects
Required:
mil -- time in milliseconds
"""
if mil == None:
return None
elif mil < 0:
return datetime.datetime.utcfromtimestamp(0) + datetime.timedelta(seconds=(mil/1000))
else:
return datetime.datetime.fromtimestamp(mil / 1000)
>>> def date_to_mil(date):
"""converts datetime.datetime() object to milliseconds
date -- datetime.datetime() object"""
if isinstance(date, datetime.datetime):
epoch = datetime.datetime.utcfromtimestamp(0)
return long((date - epoch).total_seconds() * 1000.0)
>>> mil = 1394462888000
>>> date = mil_to_date(mil)
>>> date
datetime.datetime(2014, 3, 10, 9, 48, 8) #this is correct
>>> d2m = date_to_mil(date)
>>> d2m
1394444888000L
>>> mil
1394462888000L
>>> date2 = mil_to_date(d2m)
>>> date2
datetime.datetime(2014, 3, 10, 4, 48, 8) #why did I lose 5 hours??
For some reason, I am losing 5 hours. Am I overlooking something obvious? Or
is there a problem with one or both of my functions?
Answer: The reason for this is that `date_to_mil` works with `UTC` and `mil_to_date`
doesn't. You should replace `utcfromtimestamp` with `fromtimestamp`.
Further explanation:
In your code, `epoch` is the date of the epoch in UTC (but the object carries no
time zone). But `date` is local, since
[fromtimestamp](https://docs.python.org/2/library/datetime.html#datetime.datetime.fromtimestamp)
returns a local time:
> If optional argument tz is None or not specified, the timestamp is converted
> to the platform's local date and time, and the returned datetime object is
> naive
So you subtract the UTC epoch from the local datetime, and you get an offset
which is your local offset from UTC.
|
Is there any smart way to combine overlapping paths in python?
Question: Let's say I have two path names: **head** and **tail**. They can overlap with
any number of segments. If they don't I'd like to just join them normally. If
they overlap, I'd like to detect the common part and combine them accordingly.
To be more specific: if there are repetitions in names, I'd like to find as
long an overlapping part as possible. Example:
"/root/d1/d2/d1/d2" + "d2/d1/d2/file.txt" == "/root/d1/d2/d1/d2/file.txt"
and not "/root/d1/d2/d1/d2/d1/d2/file.txt"
Is there any ready-to-use library function for such a case, or do I have to
implement one?
Answer: You can use a list comprehension within the `join` function:
>>> p1="/root/d1/d2/d1/d2"
>>> p2="d2/d1/d2/file.txt"
>>> p1+'/'+'/'.join([i for i in p2.split('/') if i not in p1.split('/')])
'/root/d1/d2/d1/d2/file.txt'
Or, if the difference is just the base name of the second path, you can use
`os.path.basename` to get the base name and concatenate it to `p1`:
>>> import os
>>> p1+'/'+os.path.basename(p2)
'/root/d1/d2/d1/d2/file.txt'
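If you really need the longest overlap (so that repeated segment names are handled as in the example), a small helper is easy to write; a sketch, not a library function:
def join_overlapping(head, tail):
    head_parts = head.rstrip('/').split('/')
    tail_parts = tail.split('/')
    # try the longest possible overlap first
    for n in range(min(len(head_parts), len(tail_parts)), 0, -1):
        if head_parts[-n:] == tail_parts[:n]:
            return '/'.join(head_parts + tail_parts[n:])
    return '/'.join(head_parts + tail_parts)

print join_overlapping("/root/d1/d2/d1/d2", "d2/d1/d2/file.txt")
# -> /root/d1/d2/d1/d2/file.txt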
|
Summing specific values within an array in Python [Solved]
Question: I am trying to do operations with values that I have added into an array.
Step 1:
Loaded an xls file with the xlrd library and created an array.
<http://i.imgur.com/Ig6Lz3L.png>
Here is the code.
import xlrd
datafile = "data.xls"
workbook = xlrd.open_workbook(datafile)
sheet = workbook.sheet_by_index(0)
data = [[sheet.cell_value(r, col)
for col in range(sheet.ncols)]
for r in range(sheet.nrows)]
I would like to sum all the values within a column: `print (data[1][1])`,
`print (data[2][1])`, `print (data[3][1])`, `print (data[4][1])`, `print (data[5][1])`,
and so on all the way to `data[I][1]`.
In other words, summing all the rows for the same column.
I have tried to do `sum(data[1])` but this sums all the values for the same row
(like a horizontal sum of the values...) and I am trying to do a vertical
sum.
Answer:
>>> data = [[1,2],[3,4],[5,6]]
>>> sum(data[i][1] for i in range(len(data)))
12 # 2 + 4 + 6
>>>
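A related trick: `zip(*data)` transposes rows into columns, so all the column totals come out at once (a sketch on the same toy list; totals are 1+3+5 and 2+4+6):
>>> [sum(col) for col in zip(*data)]
[9, 12]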
|
Python "import comtypes.client" calls pwd.py indirectly
Question: I don't have deep experience in Python. Working on my last application I found
a very interesting thing. I put a script named pwd.py in the same directory as
the main script. I created pwd.py to test some basic modules and methods, nothing
more.
But I was really surprised when I later found that my main script indirectly
calls pwd.py! I put in some debug printouts and found that the import statement
"import comtypes.client" calls pwd.py.
Well... I thought that it is probably some standard feature that I don't know
about yet, but:
* recursive search in the PYTHON_HOME (C:\Python343 in my case) does not show pwd.py in the standard Python directories. I even tried to do recursive search by file content inside c:\Python343 to find who calls pwd.py, but this search returned nothing (I used Total Commander search by Ctrl+F7, probably it fails sometimes).
* Google says nothing well-known regarding pwd.py
So, what is this feature and why is it not described well anywhere? It is
even a kind of vulnerability: one can create pwd.py in the same directory
where the main script is located and put any code inside pwd.py...
Can anybody check this behavior on their own system? If it really works this way,
where can I find the description of this feature?
Answer: I found that the line "import comtypes.client" causes this "issue". Here is
the content of the main script:
#!C:\Python343\python
import comtypes.client # this line causes pwd.py to be called indirectly !!!
And here is the content of pwd.py that was put to the same directory:
#!C:\Python343\python
print('pwd.py is called!')
raise RuntimeError("We got here!") # I put an exception as Kevin asked me above, but I am not sure that Python knows "throw" (it does not work for me) so use "raise"
After that I got the following result:
c:\dev>test.py
pwd.py is called!
Traceback (most recent call last):
File "C:\dev\test.py", line 2, in <module>
import comtypes.client # this line causes pwd.py to be called indirectly !!!
File "C:\Python343\lib\site-packages\comtypes\client\__init__.py", line 31, in <module>
from comtypes.client._code_cache import _find_gen_dir
File "C:\Python343\lib\site-packages\comtypes\client\_code_cache.py", line 7, in <module>
import ctypes, logging, os, sys, tempfile, types
File "c:\Python343\Lib\tempfile.py", line 34, in <module>
import shutil as _shutil
File "c:\Python343\Lib\shutil.py", line 24, in <module>
from pwd import getpwnam
File "C:\dev\pwd.py", line 3, in <module>
raise RuntimeError("We got here!")
RuntimeError: We got here!
So the call chain is:
C:\dev\test.py
C:\Python343\lib\site-packages\comtypes\client\__init__.py
C:\Python343\lib\site-packages\comtypes\client\_code_cache.py
c:\Python343\Lib\tempfile.py
c:\Python343\Lib\shutil.py
C:\dev\pwd.py
and the line "from pwd import getpwnam" seems to be responsible for the call
of pwd.py. After that I changed my main script the following way:
#!C:\Python343\python
import pwd # I know it's you!
and really, it calls pwd.py! Thanks Kevin for this simple idea. The only
remaining question is: is it correct that "import pwd" calls pwd.py from the
same directory? Very interesting feature :-)
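That is exactly what happens: the directory of the main script is the first entry on `sys.path`, so a local pwd.py shadows the standard-library `pwd` module (which only exists on Unix; on Windows, `shutil` normally just catches the resulting ImportError). A quick check:
import sys
print(sys.path[0])  # the script's own directory -- searched before the standard library
The usual fix is simply not to name local modules after standard-library ones (pwd, time, email, ...).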
|
Python Sorting and Organising
Question: I'm trying to sort data from a file and not quite getting what I need. I have
a text file with race details (name and placement, i.e. 1, 2, 3). I would like to
be able to organize the data by highest placement first and also alphabetically
by name. I can do this if I split the lines, but then the name and score will
not match up.
Any help and suggestions would be very welcome; I've hit that proverbial wall.
My apologies (first-time user of this site, and a Python noob, steep learning
curve). Thank you for your suggestions, I really do appreciate the help.
comp=[]
results = open('d:\\test.txt', 'r')
for line in results:
    line=line.split()
    # (name,score)= line.split()
    comp.append(line)
sorted(comp)
results.close()
print (comp)
Test file was in this format:
Jones 2
Ranfel 7
Peterson 5
Smith 1
Simons 9
Roberts 4
McDonald 3
Rogers 6
Elliks 8
Helm 10
Answer: I completely agree with everyone who has down-voted this question for being
badly posed. However, I'm in a good mood so I'll try and at least steer you in
the right direction:
Let's assume your text file looks like this:
Name,Placement
D,1
D,2
C,1
C,3
B,1
B,3
A,1
A,4
I suggest importing the data and sorting it using Pandas
<http://pandas.pydata.org/>
import pandas as pd
# Read in the data
# Replace <FULL_PATH_OF FILE> with something like C:/Data/RaceDetails.csv
# The first row is automatically used for column names
data=pd.read_csv("<FULL_PATH_OF_FILE>")
# Sort the data
sorted_data=data.sort(['Placement','Name'])
# Create a re-indexed data frame if you so desire
sorted_data_new_index=sorted_data.reset_index(drop=True)
This gives me:
Name Placement
A 1
B 1
C 1
D 1
D 2
B 3
C 3
A 4
I'll leave you to figure out the rest..
|
Python 2.6: Get input from bash command with Popen and communicate and store as variable
Question: I need to get the input from a Bash command and store it as a Python variable
(sprice; a single float). On Python 2.7 the following works well:
bashCommand = "curl -s 'http://download.finance.yahoo.com/d/quotes.csv?s=vwrl.as&f=l1'"
sprice = float(subprocess.check_output(bashCommand, shell=True))
However on **Python 2.6 check_output isn't available**. Instead we have to
use:
proc = Popen(['curl', '-s', 'http://download.finance.yahoo.com/d/quotes.csv?s=vwrl.as&f=l1'], stdout=PIPE)
print (proc.communicate()[0].split())
Which shows the float we're after, enclosed by brackets.
['40.365']
Which is all right if I want to see the output and be done. But I need to
**store it in a Python variable like in the previous (2.7) case**. However
when I try to assign it to a variable I get:
Traceback (most recent call last):
File "arr.py", line 49, in <module>
sprice = proc.communicate()[0].split()
File "/usr/lib/python2.7/subprocess.py", line 791, in communicate
stdout = _eintr_retry_call(self.stdout.read)
File "/usr/lib/python2.7/subprocess.py", line 476, in _eintr_retry_call
return func(*args)
ValueError: I/O operation on closed file
**What is the proper way to do this?**
Answer:
import commands
status, output = commands.getstatusoutput("curl -s http://download.finance.yahoo.com/d/quotes.csv?s=iwda.as&f=l1")
From [the
docs](https://docs.python.org/2/library/commands.html#commands.getstatusoutput):
> Execute the string cmd in a shell with os.popen() and return a 2-tuple
> (status, output). cmd is actually run as { cmd ; } 2>&1, so that the
> returned output will contain output or error messages.
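Note that the ValueError in the question typically means communicate() was called again after the pipe had already been consumed. If you prefer to stay with Popen instead of the commands module, reading the output once and converting it is enough -- a sketch:
    from subprocess import Popen, PIPE
    proc = Popen(['curl', '-s',
                  'http://download.finance.yahoo.com/d/quotes.csv?s=vwrl.as&f=l1'],
                 stdout=PIPE)
    out, _ = proc.communicate()  # read the pipe exactly once
    sprice = float(out.strip())  # e.g. 40.365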
|
Create own IVR that will access Database
Question: I'm currently interning with a company and have been tasked with researching
some methods of using telephony. The goal is to provide our clients with the
ability to call in and through an IVR-prompted questions, get information
back. The information will be from our database.
I have successfully done this using Twilio and a small python app. It does
exactly what I'm looking to do, except the cost factor can be a bit high,
especially if we have 30,000+ clients calling for minutes on end.
My goal is to find a way to replicate what I've done with Twilio, but on our
own server. I've found options like Asterisk and IncrediblePBX, but because of my
limited knowledge of Linux, every error I run into results in scouring the
internet for answers. Ultimately, I'm not sure if I'm heading in the right
direction.
This is an example of what I'd like to accomplish:
Client calls into number. They're directed to provide an account number,
(possibly their phone number) At that point it will take this information and
talk to a database. Gathering this information it will relay back to the
client the status of their account etc.
Questions: I was hoping to use Google Voice to route calls similar to Twilio,
is this possible? Alternatively, could my company switch to a VoIP and do the
same thing?
If I move away from Twilio, can Asterisk perform the necessary tasks?
Receiving calls and running the app to gather database information.
Current code for Twilio, in Python:
from flask import Flask, request, redirect
import twilio.twiml
import json
from urllib.request import urlopen
app = Flask(__name__)
callers = {
"+": "Nicholas",
}
@app.route("/", methods=['GET', 'POST'])
def initial():
# Get the caller's phone number from the incoming Twilio request
from_number = request.values.get('From', None)
resp = twilio.twiml.Response()
# if the caller is someone we know:
if from_number in callers:
# Greet the caller by name
caller = callers[from_number]
else:
caller = ""
resp = twilio.twiml.Response()
resp.say("Hello " + caller)
resp.say("Thank you for calling.")
your_number = list(from_number)
del your_number[0]
del your_number[0]
resp.say("You are calling from: ")
x = 0
while x < len(your_number):
resp.say(your_number[x])
x += 1
print("Please enter the neighborhood I.D. you're looking for.")
with resp.gather(numDigits=1, action="/handle-first", method="POST") as g:
g.say("Please enter the neighborhood I.D. you're looking for.")
return str(resp)
@app.route("/handle-first", methods=['GET', 'POST'])
def handle_key():
digit_pressed = request.values.get('Digits', '')
resp = twilio.twiml.Response()
url = 'http://localhost/...'
response = urlopen(url)
data = json.loads(response.readall().decode('utf-8'))
current = data['rows'][0]['Neighborhood']
print(current)
resp.say("You have chosen " + current + "as your neighborhood.")
with resp.gather(numDigits=1, action="/handle-second", method="POST") as h:
h.say("Press 1 to choose another Neighborhood?")
return str(resp)
@app.route("/handle-second", methods=['GET', 'POST'])
def handle_key2():
digit_pressed = request.values.get('Digits', '')
resp = twilio.twiml.Response()
if digit_pressed == "1":
return redirect("/")
else:
resp.say("Thank you for calling. Good-bye.")
return str(resp)
if __name__ == "__main__":
app.run(debug=True)
Answer: Yes, Asterisk can do every task you can program; it even has an API for accessing the raw
audio stream.
However, no, you can't reuse the SAME code.
For Google Voice, look at the chan_motif channel driver.
For database access, use dialplan + realtime, func_odbc, or the FastAGI/AGI interface.
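To give a feel for the AGI route: an AGI script is just a program Asterisk launches for a call, speaking a line-based protocol on stdin/stdout, and the dialplan points an extension at it with the AGI() application. The sketch below only illustrates that protocol; the balance value is a hypothetical placeholder for your real database lookup:
    #!/usr/bin/env python
    import sys
    # Asterisk first sends agi_* variables (caller id, channel, ...) followed by a blank line
    env = {}
    line = sys.stdin.readline().strip()
    while line:
        key, _, value = line.partition(': ')
        env[key] = value
        line = sys.stdin.readline().strip()
    def agi(command):
        # send one AGI command and return Asterisk's "200 result=..." reply
        sys.stdout.write(command + '\n')
        sys.stdout.flush()
        return sys.stdin.readline().strip()
    caller = env.get('agi_callerid', '')
    balance = 42  # placeholder: look the caller's account up in your database here
    agi('ANSWER')
    agi('SAY NUMBER %d ""' % balance)
    agi('HANGUP')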
|
Reading row data from CSV file
Question: I'm currently trying to create a program that reads a CSV file, more
specifically row data (data in lines across)
**Sampledata.csv:**
['Time', 'Date', 'Color ', 'Name']
['1pm', '01-01-99', 'blue', 'jack']
['2pm', '02-02-99', 'green', 'kevin']
['3pm', '03-03-99', 'yellow', 'phil']
['4pm', '04-04-99', 'white', 'alice']
['5pm', '05-05-99', 'black', 'bob']
Here is my code:
import csv
with open('Sampledata.csv', 'r') as csvfile :
regnumber = input("What is your regnumber?")
reader = csv.reader(csvfile, delimiter=',')
for row in reader:
print(row) # here lies the problem(python is reading columnal data (data going down) instead of row(lines across) data#
The problem is that Python appears to read the data column by column (going down)
instead of row by row (across).
Output:
Date
01-01-99
02-02-99
03-03-99
04-04-99
05-05-99
Answer: Is this the one you're looking for?
import csv #ignore my stupidity with the indentation and spaces#
with open('Sampledata.csv', 'r') as csvfile :
regnumber = raw_input("Enter the time:")
reader = csv.reader(csvfile)
for row in reader:
if(row[0]==regnumber):
print ', '.join(row)
else:
continue
The above code prints values from the csv file row by row.
|
pip broke after uninstalling a module
Question: I used pip uninstall on shapely and got an exception, which I did not save.
Since then every pip or easy_install command raises this exception:
Traceback (most recent call last):
File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "C:\Python27\lib\runpy.py", line 72, in _run_code
exec code in run_globals
File "C:\Python27\Scripts\pip.exe\__main__.py", line 5, in <module>
File "C:\Python27\lib\site-packages\pip\__init__.py", line 13, in <module>
from pip.utils import get_installed_distributions, get_prog
File "C:\Python27\lib\site-packages\pip\utils\__init__.py", line 17, in <module>
from pip.compat import console_to_str, stdlib_pkgs
File "C:\Python27\lib\site-packages\pip\compat\__init__.py", line 14, in <module>
from pip.compat.dictconfig import dictConfig as logging_dictConfig
File "C:\Python27\lib\site-packages\pip\compat\dictconfig.py", line 22, in <module>
import logging.handlers
File "C:\Python27\lib\logging\handlers.py", line 27, in <module>
import logging, socket, os, cPickle, struct, time, re
File "C:\Python27\lib\socket.py", line 47, in <module>
import _socket
ImportError: No module named _socket
Any ideas?
Answer: See this [bug](https://github.com/pypa/pip/issues/727).
Off the top, `pip uninstall shapely` seems to be removing top-level DLLs from
the Python installation. This is why you see the error on every pip command
afterwards.
|
curve fitting with lmfit python
Question: I am new to Python and trying to fit data using lmfit. I am following the
lmfit tutorial here: <http://lmfit.github.io/lmfit-py/parameters.html> and
this is my code (based on the code explained in the above link):
import numpy as np
import lmfit
import matplotlib.pyplot as plt
from numpy import exp, sqrt, pi
from lmfit import minimize,Parameters,Parameter,report_fit
data=np.genfromtxt('test.txt',delimiter=',')
x=data[:,][:,0]
y=data[:,][:,1]
def fcn2fit(params,x,y):
"""model decaying sine wave, subtract data"""
S1=params['S1'].value
t0=params['t0'].value
T1=params['T1'].value
S2=params['S2'].value
T2=params['T2'].value
model = 1-(1-S1)*exp(-(x-t0)/T1)+S2*(1-exp(-(x-t0)/T2)
return model - y
params = Parameters()
params.add('S1', value=0.85, min=0.8, max=0.9)
params.add('t0', value=0.05, min=0.01, max=0.1)
params.add('T1', value=0.2, min=0.1, max=0.3)
params.add('S2', value=0.03, min=0.01, max=0.05)
params.add('T2', value=0.3, min=0.2, max=0.4)
result = minimize(fcn2fit, params, args=(x,y))
final = y + result.residual
report_fit (params)
try:
import pylab
pylab.plot(x,y, 'k+')
pylab.plot(x,final, 'r')
pylab.show()
except:
pass
Problem: it returns a syntax error for the line `return model - y`.
I'd appreciate it if you could point me in the right direction.
Answer: You have forgotten a right parenthesis ")" in the previous line (model = ...).
Opening and closing parentheses are unbalanced, causing a syntax error on the line that follows.
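With the parenthesis restored, the line from the question would read (assuming this is the intended model):
    model = 1-(1-S1)*exp(-(x-t0)/T1)+S2*(1-exp(-(x-t0)/T2))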
|
WebSocket broadcast to all clients using Python
Question: I am using a simple Python based web socket application:
from SimpleWebSocketServer import WebSocket, SimpleWebSocketServer
class SimpleEcho(WebSocket):
def handleMessage(self):
if self.data is None:
self.data = ''
# echo message back to client
self.sendMessage(str(self.data))
def handleConnected(self):
print self.address, 'connected'
def handleClose(self):
print self.address, 'closed'
server = SimpleWebSocketServer('', 8000, SimpleEcho)
server.serveforever()
It echoes messages sent by each client back to that same client, but I
am trying to send any message received by the ws server to all clients
connected to it. Can someone help me please?
Answer: I think you want to keep a list of clients and then programmatically send a
message to each of them.
So, when a new client connects, add them to an array:
wss = [] # Should be globally scoped
def handleConnected(self):
print self.address, 'connected'
if self not in wss:
wss.append(self)
Then, when you get a new request, send the message out to each of the clients
stored:
def handleMessage(self):
if self.data is None:
self.data = ''
for ws in wss:
ws.sendMessage(str(self.data))
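One small addition worth making, so the list does not accumulate dead connections: remove the client again when it disconnects.
    def handleClose(self):
        print self.address, 'closed'
        if self in wss:
            wss.remove(self)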
I hope this helps you!
|
Python/Eclipse/wxPython -- CallAfter undefined variable? callAfter is as well -- confused
Question: I'm using Eclipse Luna and the latest PyDev with it. I have wxPython 3.0
installed. First, I could import wx, and I tried printing `__version__` in the
console -- perfect. But then I do import wx.lib.pubsub and it says
unresolved. I tried other variations, no dice, so I had to go into the
properties of my project and add wx manually; then it worked.
Second, now all my CallAfter calls are underlined red: undefined variable from
import. I know callAfter used to be the name, so I tried that too; it tries to
autocomplete to it -- but then underlines it. I know in 3.0, CallAfter is
capitalized. Even if it wasn't, Eclipse tries to autocomplete to an old
version and then still says it's bad.
I've never seen that before, I'm confused. Does anyone know what I'm doing
incorrectly?
EDIT: Even weirder -- I use the console inside pydev eclipse, it autocompletes
to normal CallAfter and doesn't throw any errors.
Answer: I figured it out on my own. I deleted the wx and wxPython forced builtins and
then loaded wx as an external library. Everything worked fine after that.
|
Defining a custom pandas aggregation function using Cython
Question: I have a big `DataFrame` in pandas with three columns: `'col1'` is string,
`'col2'` and `'col3'` are `numpy.int64`. I need to do a `groupby`, then apply
a custom aggregation function using `apply`, as follows:
pd = pandas.read_csv(...)
groups = pd.groupby('col1').apply(my_custom_function)
Each group can be seen as a numpy array with two integers columns `'col2'` and
`'col3'`. To understand what I am doing, you can think of each row
`('col2','col3')` as a time interval; I am checking whether there are no
intervals that are intersecting. I first sort the array by the first column,
then test whether the second column value at index `i` is smaller than the
first column value at `index i + 1`.
**FIRST QUESTION** : My idea is to use Cython to define the custom aggregate
function. Is this a good idea?
I tried the following definition in a `.pyx` file:
    cimport numpy as c_np
def c_my_custom_function(my_group_df):
cdef Py_ssize_t l = len(my_group_df.index)
if l < 2:
return False
cdef c_np.int64_t[:, :] temp_array
temp_array = my_group_df[['col2','col3']].sort(columns='col2').values
cdef Py_ssize_t i
for i in range(l - 1):
if temp_array[i, 1] > temp_array[i + 1, 0]:
return True
return False
I also defined a version in pure Python/pandas:
def my_custom_function(my_group_df):
l = len(my_group_df.index)
if l < 2:
return False
temp_array = my_group_df[['col2', 'col3']].sort(columns='col2').values
for i in range(l - 1):
if temp_array[i, 1] > temp_array[i + 1, 0]:
return True
return False
**SECOND QUESTION** : I timed the two versions, and both take exactly the same
time. The Cython version does not seem to speed up anything. What is
happening?
**BONUS QUESTION** : Do you see a better way to implement this algorithm?
Answer: A vector `numpy` test could be:
np.any(temp_array[:-1,1]>temp_array[1:,0])
Whether it does better than the python or cython iteration depends on where
the `True` occurs, if at all. If the return is at an early step in the
iteration, the iteration is clearly better. And the `cython` version won't
have much of an advantage. Also the test step will be faster than the sort
step.
But if the iteration usually steps all the way through, then the vector test
will be faster than the Python iteration, and faster than the sort. It may
though be slower than a properly coded cython iteration.
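For illustration, a vectorised version of the aggregation function might look like the sketch below (sort_values is the newer spelling of the .sort(columns=...) call used above):
    import numpy as np
    def my_custom_function_vec(my_group_df):
        if len(my_group_df.index) < 2:
            return False
        temp_array = my_group_df[['col2', 'col3']].sort_values('col2').values
        # an interval overlaps its successor if its end exceeds the next start
        return bool(np.any(temp_array[:-1, 1] > temp_array[1:, 0]))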
|
python curl execute command with parameter
Question: I need to execute the following command
curl -v -H 'X-Auth-User: myaccount:me' -H 'X-Auth-Key: secretpassword' http://localhost:8080/auth/v1.0/
When I run this from the terminal it works perfectly and gives me a result like the
following:
* About to connect() to localhost port 8080 (#0)
* Trying ::1...
* Connection refused
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8080 (#0)
> GET /auth/v1.0/ HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:8080
> Accept: */*
> X-Auth-User: myaccount:me
> X-Auth-Key: secretpassword
>
< HTTP/1.1 200 OK
< X-Storage-Url: http://localhost:8080/v1/AUTH_myaccount
< X-Auth-Token: AUTH_tk8bf07349d36041339450f0b46a2adc39
< Content-Type: text/html; charset=UTF-8
< X-Storage-Token: AUTH_tk8bf07349d36041339450f0b46a2adc39
< Content-Length: 0
< X-Trans-Id: tx99a9e2a129f34ab487ace-00553cb059
< Date: Sun, 26 Apr 2015 09:31:05 GMT
<
* Connection #0 to host localhost left intact
But I need to run this from Python. I have used subprocess.call and
subprocess.Popen in the following way:
import subprocess
subprocess.call(["curl", "-v -H 'X-Auth-User: myaccount:me' -H 'X-Auth-Key: secretpassword' http://localhost:8080/auth/v1.0/"], shell=False)
but I am getting the following error:
curl: option -v -H 'X-Auth-User: myaccount:me' -H 'X-Auth-Key: secretpassword' http://localhost:8080/auth/v1.0/: is unknown
curl: try 'curl --help' or 'curl --manual' for more information
using popen
result = subprocess.Popen(["curl", "-v -H 'X-Auth-User: myaccount:me' -H 'X-Auth-Key: secretpassword' http://localhost:8080/auth/v1.0/"])
print(result)
and getting error for this
curl: option -v -H 'X-Auth-User: myaccount:me' -H 'X-Auth-Key: secretpassword' http://localhost:8080/auth/v1.0/: is unknown
curl: try 'curl --help' or 'curl --manual' for more information
<subprocess.Popen object at 0x7fd003d82cd0>
How to fix this???
Answer: Since `call` expects a list of command line arguments, you need to split
the command line yourself and call it like this:
subprocess.call([
"curl", "-v", "-H", "X-Auth-User: myaccount:me", "-H",
"X-Auth-Key: secretpassword", "http://localhost:8080/auth/v1.0/"
], shell=False)
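If you also want the response headers back in Python (that is where the X-Auth-Token appears), one option is curl's -i flag plus check_output, which captures stdout -- a sketch:
    import subprocess
    # -i asks curl to include the response headers in its standard output
    output = subprocess.check_output([
        "curl", "-s", "-i", "-H", "X-Auth-User: myaccount:me", "-H",
        "X-Auth-Key: secretpassword", "http://localhost:8080/auth/v1.0/"
    ])
    print(output)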
|
Can't pass parameters to function from WxPython Menu Item
Question: I'm having trouble getting sensible output when I try to bind menu buttons to
a function. First, I pass a few items to create a menu in my application:
imported_applications = ["foo", "bar"]
application_menu = wx.Menu()
for app in imported_applications:
# Create Items
item = wx.MenuItem(application_menu, wx.ID_ANY, app, )
application_menu.Append(item)
# Bind to function
self.Bind(wx.EVT_MENU, self.select_application, item, id=app)
# Add to menubar
menubar.Append(application_menu, '&Applications')
self.SetMenuBar(menubar)
Then I try to get that parameter when select_application is called:
def select_application(self, event):
id_selected = event.GetId()
print(id_selected)
Outputs:
-2014
-2015
**Not sure where that is coming from, but I expect it to output the id that I
set at `bind`. The contents of imported_applications are two strings, i.e.
["foo", "bar"]**
Answer:
app = [ "foo", "bar", ]
for app in ...
Here, your **app** variable is overwritten.
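If the goal is simply to know which application name was clicked, a common idiom is to bind each item through a lambda that captures the name, instead of trying to smuggle the string through the numeric id. A sketch, not tested against your exact layout:
    for app_name in imported_applications:
        item = application_menu.Append(wx.ID_ANY, app_name)
        self.Bind(wx.EVT_MENU,
                  lambda evt, name=app_name: self.select_application(evt, name),
                  item)
    # the handler then receives the name directly
    def select_application(self, event, name):
        print(name)  # "foo" or "bar"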
|
running process in background on hangs PHP page
Question: I have a PHP page that launches a python script, yet the PHP page hangs until
the script finishes. Is there a way to run the script that will not make it
hang?
**PHP code**
The problem occurs in case alarm_on. All other cases work fine.
<?php
session_start();
if(isset($_COOKIE['userME']))
echo "Hello, ".$_COOKIE['userME'].'<br>';
if(isset($_SESSION['UserData']['Username']) || $passed )// or $logins[$Username] == $Password)
{
if (isset($_POST['submit'])) {
switch ($_POST['submit']) {
case 'room_light':
echo shell_exec("sudo python /home/pi/Desktop/PiControl/room_light.py");
break;
case 'alarm_on':
echo "alarm started";
shell_exec("nohup sudo python /home/pi/Desktop/PiControl/motion_sensor.py &");
echo "alarm on";
break;
case 'alarm_off':
echo "alarm off";
echo shell_exec("sudo pkill -f motion_sensor.py");
break;
}
}
?>
<form method="post">
<input type="submit" name="submit" value="room_light">
<input type="submit" name="submit" value="alarm_on">
<input type="submit" name="submit" value="alarm_off">
</form>
<?php
}
else
{
header("location:login.php");
}
?>
**motion_sensor.py**
import sys
import urllib2, urllib
import RPi.GPIO as GPIO
#setup GPIO using Board numbering. pin physical number corresponds to gpio call
GPIO.setmode(GPIO.BOARD)
pin=37
url="http://MY_IP/test.php"
GPIO.setup(pin, GPIO.IN)
def alarmChange(channel):
if(GPIO.input(pin)):
print 'Sensor tripped',channel
postdata={'sensor_tripped': channel}
postdata=urllib.urlencode(postdata)
req=urllib2.Request(url, postdata)
req.add_header("Content-type", "application/x-www-form-urlencoded")
page=urllib2.urlopen(req).read()
#print page
else:
print 'sensor no longer tripped', channel
GPIO.add_event_detect(pin, GPIO.BOTH, callback=alarmChange)
print "alarm on"
while True: #this is a hack to keep the script running all the time so it will continue event detection
i=0
GPIO.cleanup() #should never hit this point
Answer: Nohup alone doesn't fix the problem because the script's output is still connected to
stdout (I confirmed this on my Linux dev box).
What you need to do is redirect the output away from stdout/stderr:
shell_exec("sudo python /home/pi/Desktop/PiControl/motion_sensor.py >/dev/null 2>&1 &");
By adding `>/dev/null` you are directing the output to /dev/null. By using
`2>&1` you are directing the errors to the same place as the output
(/dev/null). You could also use a log file if you want. If you want to append
to the file, change `>` to `>>`.
|
Flask-Assets not working at all... = Insanity
Question: I have simply tried setting up _[flask-assets](http://flask-
assets.readthedocs.org/en/latest/)_ (based on _webassets_), however just can't
get it to work.
I have the standard setup;
* python virtualenv
* pip to install the bare essentials (flask, flask-assets)
* sass ruby gem (for trying sass / scss)
* less via npm (for trying lesscss)
* jsmin via pip (for trying jsmin)
Config:
* I have created a working homepage in Flask
* `static` folder created (for css / js assets)
* The css / js files are confirmed working (css background, js slider, etc)
* Basically my development non-flask-assets site is working perfectly
I have followed the easy official guide here: [flask-assets
usage](http://flask-assets.readthedocs.org/en/latest/). I fully understand how
to work with it (as per that page). I have even exact copy-pasted the code, &
still can't get it working.
Some code I've tried (for lesscss): (of course I have working css in
main.less)
from flask.ext.assets import Environment, Bundle
assets = Environment(app)
assets.debug = True
lesscss = Bundle('main.less', output='main.css', filters='less')
assets.register('less', lesscss)
Then in my template:
{% assets "less" %}
<link href="{{ ASSET_URL }}" rel="stylesheet">
{% endassets %}
However flask-assets just won't work. I've tried the same with sass, scss, &
also jsmin (exact code copy-paste from the usage guide) - it still won't work.
* I notice that the `.webassets-cache` folder is created, but is (always) empty...
Also (possibly the relevant error): I expect it to create main.css, but as it doesn't,
I get an error in the browser (using `app.debug = True` & Flask's built-in dev
server):
webassets.exceptions.BuildError
BuildError: Nothing to build for <Bundle output=css/main.css, filters=[<webassets.filter.less.Less object at 0x7f4958dc6710>], contents=('css/main.less',)>, is empty
So; If I manually create an empty main.css, it loads the page (no error),
_however_ the main.css file is not filled with css so _flask-assets_ /
_webassets_ in still not working.
I've also tried passing the _assets_ object to the template in various ways
just in case it's needed (although no documentation states this) - that didn't
work.
It's been driving me crazy. Any pointers would be appreciated.
Thank you
Answer: There is info missing in the Flask-Assets docs. Your problem is either the
sass_bin config or the environment load path. You should try both; in my case
it was the config. See my working config below.
PS: IMHO Flask-Assets is neither very complete nor well documented. It also
consumes your application's runtime, which hurts both debugging and
production. I have switched to GULP!
env = Environment(app)
env.config['sass_bin'] = '/usr/local/bin/sass'
# env.load_path = [os.path.join(os.path.dirname(__file__), 'sass')]
js = Bundle('js/jquery/jquery-2.1.4.js', 'js/angular/angular.js',
'js/socketio/socket.io.js', filters='jsmin', output='js/all_min.js')
env.register('js_all', js)
myjs = Bundle('myjs/Interpolation.js', 'myjs/socketio.js' , filters='jsmin', output='myjs/all_min.js')
env.register('myjs_all', myjs)
css = Bundle('sass/base.sass', filters='sass', output='css/base.css')
env.register('css_all', css)
|
Python3 and Sqlite3 can't Insert
Question: I am trying to write a function to do a simple insert. Here is what I have
tried so far
#! /usr/bin/env python3
#import
import sqlite3 as lite
#trying an insert version 1 (does nothing)
def createTableTask():
"""
Create a new table with the name Task
"""
#Connnection to the database and cursor creation
con = lite.connect('./exemple.sqlite')
con.row_factory = lite.Row
cur = con.cursor()
#that does nothing
try:
cur.execute('''CREATE TABLE Tasks (\
Name TEXT PRIMARY KEY, \
Description TEXT, \
Priority TEXT);''')
except lite.IntegrityError as error_SQLite:
print("error: "+ str(error_SQLite))
else:
print("No error has occured.")
con.close();
def insert1():
"""
insert a new task
"""
#Allocating variables data
taskName = 'finish code'
taskDescription = 'debug'
taskPriority = 'normal'
#Connnection to the database and cursor creation
con = lite.connect('./exemple.sqlite')
con.row_factory = lite.Row
cur = con.cursor()
#that does nothing
try:
with con:
cur.execute('''INSERT INTO Tasks (Name, Description, Priority) \
VALUES (?, ?, ?)''', (taskName, taskDescription, taskPriority))
except lite.IntegrityError as error_SQLite:
print("error: "+ str(error_SQLite))
else:
print("No error has occured. but no insert happend ?")
con.close();
def showResult():
"""
Show the result of the insert
"""
con = lite.connect('./exemple.sqlite')
con.row_factory = lite.Row
cur = con.cursor()
cur.execute\
('''SELECT * FROM Tasks ;''')
row = cur.fetchone()
while row:
print(row["Name"], ' | ', row["Description"], ' | ', \
row["Priority"])
row = cur.fetchone()
con.close();
#trying an insert version 2 (this one crash giving :Value error)
def insert2():
"""
insert a new task
"""
#Allocating variables data
taskName = 'finish code'
taskDescription = 'debug'
taskPriority = 'normal'
#Connnection to the database and cursor creation
con = lite.connect('./exemple.sqlite')
con.row_factory = lite.Row
cur = con.cursor()
queryInsert = ('''INSERT INTO Tasks (Name, Description, Priority) \
VALUES (?, ?, ?)''', (taskName, taskDescription, taskPriority))
try:
with con:
cur.execute(queryInsert)
except lite.IntegrityError as error_SQLite:
print("error: "+ str(error_SQLite))
else:
print("No error has occured.")
con.close();
def run():
createTableTask()
insert1()
showResult()
insert2()
showResult()
#calling section
run()
The problem is that none of the inserts I have made so far worked. The
first one actually does nothing but has correct syntax.
The second one, well, it crashes.
Here is the output:
> spark@spark-Razer-Blade-Pro:~/Documents/testing$ ./exemp.py
> No error has occured.
> No error has occured. but no insert happend ?
> Traceback (most recent call last):
> File "./exemp.py", line 98, in
> run()
> File "./exemp.py", line 94, in run
> insert2()
> File "./exemp.py", line 83, in insert2
> cur.execute(queryInsert)
> ValueError: operation parameter must be str
> spark@spark-Razer-Blade-Pro:~/Documents/testing$ sqlite3 exemple.sqlite
> SQLite version 3.8.2 2013-12-06 14:53:30
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> SELECT * FROM Tasks;
> sqlite>
I am looking for the simplest fix, and I'd like to know what is wrong with my
code, because right now I do not understand why the first insert does not show up.
Normally it should, or am I missing something?
Answer:
queryInsert = ('''INSERT ...''', (taskName, taskDescription, taskPriority))
This makes `queryInsert` a tuple with two elements.
But to call the `execute` method, you need two separate parameters.
You could just unpack the tuple:
cur.execute(*queryInsert)
but it might be clearer to use two separate variables:
queryString = '''INSERT ...'''
queryParams = (taskName, taskDescription, taskPriority)
cur.execute(queryString, queryParams)
|
issue saving uploaded files in django
Question: I tried the simplest way of saving uploaded files in my Django media folder. This is my
.conf Apache WSGI configuration file:
ServerName testapplication.com
WSGIScriptAlias / /home/seba/git/CNBLUE/supergestor/supergestor/wsgi.py
WSGIPythonPath /home/seba/git/CNBLUE/supergestor
Alias /static /home/seba/git/CNBLUE/supergestor/static/
Alias /media /home/seba/git/CNBLUE/supergestor/media/
<Directory /home/seba/git/CNBLUE/supergestor/supergestor>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
<Directory /home/seba/git/CNBLUE/supergestor/media/>
Require all granted
</Directory>
<Directory /home/seba/git/CNBLUE/supergestor/static/>
Require all granted
</Directory>
In my settings.py file I set:
MEDIA_ROOT='/home/seba/git/CNBLUE/supergestor/media/'
and MEDIA_URL=''
The media folder is at the root of my project folder, the Django project being
supergestor.
When I tried to upload, I got this error: [Errno 13] Permission denied:
'/home/seba/git/CNBLUE/supergestor/media'. I have no clue.
Answer: The `MEDIA_ROOT` should be set like this (note: no leading slash in the second argument to os.path.join, otherwise BASE_DIR is discarded):
    import os
    BASE_DIR = os.path.dirname(os.path.dirname(__file__))
    MEDIA_ROOT = os.path.join(BASE_DIR, "media")
And I suggest you check the directory permissions first: the user Apache runs as (often www-data, depending on the distribution) needs write access to the media directory.
|
Copy certain files from one folder to another using python
Question: I am trying to copy only certain files from one folder to another. The
filenames are in an attribute table of a shapefile.
I am successful up to writing the filenames into a .csv file and listing the
column containing the filenames to be transferred. I am stuck
after that on how to read those filenames and copy them to another folder. I
have read about using shutil.copy/move but am not sure how to use it. Any help is
appreciated. Below is my script:
* * *
import arcpy
import csv
import os
import sys
import os.path
import shutil
from collections import defaultdict
fc = 'C:\\work_Data\\Export_Output.shp'
CSVFile = 'C:\\wokk_Data\\Export_Output.csv'
src = 'C:\\UC_Training_Areas'
dst = 'C:\\MOSAIC_Files'
fields = [f.name for f in arcpy.ListFields(fc)]
if f.type <> 'Geometry':
for i,f in enumerate(fields):
if f in (['FID', "Area", 'Category', 'SHAPE_Area']):
fields.remove (f)
with open(CSVFile, 'w') as f:
f.write(','.join(fields)+'\n')
with arcpy.da.SearchCursor(fc, fields) as cursor:
for row in cursor:
f.write(','.join([str(r) for r in row])+'\n')
f.close()
columns = defaultdict(list)
with open(CSVFile) as f:
reader = csv.DictReader(f)
for row in reader:
for (k,v) in row.items():
columns[k].append(v)
print(columns['label'])
Answer: Here is the script I used to solve my problem:
import os
import arcpy
import os.path
import shutil
featureclass = "C:\\work_Data\\Export_Output.shp"
src = "C:\\Data\\UC_Training_Areas"
dst = "C:\\Data\\Script"
rows = arcpy.SearchCursor(featureclass)
row = rows.next()
while row:
print row.Label
shutil.move(os.path.join(src,str(row.Label)),dst)
row = rows.next()
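A side note: the script above moves the files out of the source folder; if the goal really is to copy them (as the title says), shutil.copy is the drop-in replacement:
    # copy instead of move, leaving the originals in place
    shutil.copy(os.path.join(src, str(row.Label)), dst)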
|
Python - Optional Command Line Argument
Question: I would like to have an option -n, that will allow users to specify a size of
a list. The default will be 30. So:
./findNumberOfPlayers.py -n10
I haven't done any command line arguments with python before but am confused
with how to include `-n10` within the program. I understand I would `import
sys` and have 12 assigned to `sys.argv[1]` but how does it work with `-n10`?
Thank You! I appreciate the help.
Answer: Use [argparse](https://docs.python.org/2/howto/argparse.html).
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-n", "--number", help="Enter a number", type=int)
You can then access the arg like this -
args = parser.parse_args()
num_players = args.number if args.number else 30
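A small simplification, since the question wants a default of 30: argparse can supply the default for you.
    parser.add_argument("-n", "--number", help="Enter a number", type=int, default=30)
    args = parser.parse_args()
    num_players = args.number  # 30 unless -n10 / --number is given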
|
Finding the simplest way to wrap a double->double C function to Python
Question: I need to wrap a C function of interface
double foo(double)
to python, so that in python console
>>foo_py(2.9)
should give me the double value of foo(2.9). Eventually, I use the exported
foo_py in scipy.
I have been looking for the _simplest_ way to wrap such a C function.There are
certainly many related posts on stackoverflow, but they are dated a couple of
years ago and at this moment I am looking for an updated answer for my
specific demand. Thanks.
[EDIT] For information, I tried with C extension using PyObj etc. The
documentation on this seems lacking. I also tried with Boost.Python that wraps
C++ functions to Python. Here I need to wrap C functions (in order to avoid
naming demangling issues, etc).
Answer: Here is a simple example of how to do it:
**test.c**
#include <Python.h>
static PyObject* foo(PyObject* self, PyObject* args) {
double d;
PyArg_ParseTuple(args, "d", &d);
return Py_BuildValue("d", d);
}
// Bind python function to C function
static PyMethodDef simplemodule_methods[] = {
{"foo_py", foo, METH_VARARGS},
{NULL, NULL, 0}
};
// Initialize module
void inittest() {
(void) Py_InitModule("test", simplemodule_methods);
}
**test.py**
from test import *
print foo_py(2.9) # will output 2.9
Compile with
`gcc -shared -fPIC -I/usr/include/python2.7 -o test.so test.c`
Run with
`python test.py`
|
Print a dictionary in python
Question: I have a file that looks like this:
5 John Doe
3 Jadzia Kowalska
13 Heather Graham
44 Jane Doe
I read it in, split it up and save it up in a dictionary with the number as
the key and a list as the value and each word as a separate string in the
list. I want to print each id followed by its name out to the command line but
I am unsure how to do so since each name is a list of strings. Would I need a
triple for loop? Any help would be greatly appreciated!
import sys
filename=sys.argv[1]
#reads file into list
with open(filename) as f:
filecontent = f.readlines()
names=[]
ids=[]
for i in filecontent:
split=i.split()
names.append(split[1:])
ids.append(split[0])
d= dict.fromkeys(ids,names)
sorted(d, key=d.get)
for id, name in d:
print id, name
Answer: Use:
    for id, name in d.items():
        print id, ' '.join(name)
The way this works is that
' '.join(['first', 'last'])
Joins all elements in the list with a single space as the separator. This
would also work if you wanted to build a `CSV` line, in which case you would
use a comma as the separator. For example:
','.join(['first', 'second', 'third'])
If you want to print with a space between ID and the name, use
print "{0} {1}".format(id, ' '.join(name))
|
read from stdin python
Question: I am trying to take a filename input from the first command line argument and
if there is none then to read from `stdin` but for some reason my `if`
statement does not seem to be working. I have pasted it below. Any help would
be greatly appreciated.
import sys
filename=sys.argv[1]
if filename==None:
filename=sys.stdin.readlines()
Answer: You should check the length of `sys.argv` before trying to index it:
import sys
if len(sys.argv) > 1:
filename=sys.argv[1]
else:
filename=sys.stdin.readline().strip()
The first element in `sys.argv` is always the filename of the python script,
hence the `> 1`.
**[EDIT]**
To expand on fixxxer's answer regarding `is` vs `==`:
`is` returns `True` if the variables compared point to the same object. This
holds true for `None`, as `None` is a singleton. `==` on the other hand,
returns `True` if the variables compared are equal, i.e. they can point to two
distinct (but equal) objects and `==` will yield `True`.
|
Opening a password protected page in Python and Selenium WebDriver
Question: I am trying to open this local IP in my web browser.
from selenium import webdriver
driver = webdriver.Firefox()
url = 'http://165.269.261.210/source/'
page = urllib.urlopen(url)
When I run the above code it asks for a username and password in the Python shell.
Enter username for coffee at 165.269.261.210: agrawal
Warning (from warnings module):
File "C:\Python27\Lib\getpass.py", line 92
return fallback_getpass(prompt, stream)
GetPassWarning: Can not control echo on the terminal.
Warning: Password input may be echoed.
Enter password for agrawal.a in coffee at 165.269.261.210: bravokid
After I provide the username and password, nothing opens in Firefox. Also, how
can I provide the username and password in the code itself? Is there any better
way to do it?
Update: When I directly put the link in the browser, a pop-up opens asking for
username and password. Only after I provide the username & password does the page open;
otherwise it throws a 401 error.
Answer: What you are doing here:
from selenium import webdriver
driver = webdriver.Firefox()
url = 'http://165.269.261.210/source/'
# This is no way connected to Selenium, but fetches URL using Python urllib library
# page = urllib.urlopen(url)
driver.get('http://username:[email protected]/source/')
Instead, use `driver.get()` method as [explained in the Selenium WebDriver
tutorial](http://selenium-python.readthedocs.org/en/latest/getting-
started.html).
[Further, you need to pass username and password in the URL for HTTP Basic
Auth](http://stackoverflow.com/a/9262587/315168).
|
what is the equivalent of ni si in IDA python?
Question: I want to make a simple IDA-Python script which sets a breakpoint at some
specific memory address, then continues and filters breakpoint hits with some
rules.
With gdb I can write a script such as:
    bp 0x12345678 c if ~~~ else ~~~ ni si ... c ...
How can I do this kind of thing with IDA-Python? Thank you in advance.
Answer: You are looking for `dbg_step_into` and `dbg_step_over`. The IDA API is
debugger-agnostic, and its GUI allows you to set the debugger used (and as you
probably already know, it supports GDB). See [here](https://hex-
rays.com/products/ida/support/sdkdoc/dbg_8hpp.html) for the API documentation.
Similarly, the relevant IDA actions are documented [here](https://www.hex-
rays.com/products/ida/support/idapython_docs/) (`idaapi.request_step_into`).
[Here](https://code.google.com/p/idapython/source/browse/trunk/examples/debughook.py)
is a use case taken from the IDApython repository, reproduced here in part,
just in case the link goes stale:
* * *
# Original Author: Gergely Erdelyi <[email protected]>
from idaapi import *
class MyDbgHook(DBG_Hooks):
""" This class implements the various callbacks required.
"""
def dbg_process_start(self, pid, tid, ea, name, base, size):
print("Process started, pid=%d tid=%d name=%s" % (pid, tid, name))
def dbg_process_exit(self, pid, tid, ea, code):
print("Process exited pid=%d tid=%d ea=0x%x code=%d" % (pid, tid, ea, code))
def dbg_library_unload(self, pid, tid, ea, info):
print("Library unloaded: pid=%d tid=%d ea=0x%x info=%s" % (pid, tid, ea, info))
return 0
def dbg_process_attach(self, pid, tid, ea, name, base, size):
print("Process attach pid=%d tid=%d ea=0x%x name=%s base=%x size=%x" % (pid, tid, ea, name, base, size))
def dbg_process_detach(self, pid, tid, ea):
print("Process detached, pid=%d tid=%d ea=0x%x" % (pid, tid, ea))
return 0
def dbg_library_load(self, pid, tid, ea, name, base, size):
print "Library loaded: pid=%d tid=%d name=%s base=%x" % (pid, tid, name, base)
def dbg_bpt(self, tid, ea):
print "Break point at 0x%x pid=%d" % (ea, tid)
# return values:
# -1 - to display a breakpoint warning dialog
# if the process is suspended.
# 0 - to never display a breakpoint warning dialog.
# 1 - to always display a breakpoint warning dialog.
return 0
def dbg_suspend_process(self):
print "Process suspended"
def dbg_exception(self, pid, tid, ea, exc_code, exc_can_cont, exc_ea, exc_info):
print("Exception: pid=%d tid=%d ea=0x%x exc_code=0x%x can_continue=%d exc_ea=0x%x exc_info=%s" % (
pid, tid, ea, exc_code & idaapi.BADADDR, exc_can_cont, exc_ea, exc_info))
# return values:
# -1 - to display an exception warning dialog
# if the process is suspended.
# 0 - to never display an exception warning dialog.
# 1 - to always display an exception warning dialog.
return 0
def dbg_trace(self, tid, ea):
print("Trace tid=%d ea=0x%x" % (tid, ea))
# return values:
# 1 - do not log this trace event;
# 0 - log it
return 0
def dbg_step_into(self):
print("Step into")
self.dbg_step_over()
def dbg_run_to(self, pid, tid=0, ea=0):
print "Runto: tid=%d" % tid
idaapi.continue_process()
def dbg_step_over(self):
eip = GetRegValue("EIP")
print("0x%x %s" % (eip, GetDisasm(eip)))
self.steps += 1
if self.steps >= 5:
request_exit_process()
else:
request_step_over()
# Remove an existing debug hook
try:
if debughook:
print("Removing previous hook ...")
debughook.unhook()
except:
pass
# Install the debug hook
debughook = MyDbgHook()
debughook.hook()
debughook.steps = 0
# Stop at the entry point
ep = GetLongPrm(INF_START_IP)
request_run_to(ep)
# Step one instruction
request_step_over()
# Start debugging
run_requests()
|
how to pass argparse arguments to another python program in unchanged condition?
Question: I am working on a script under which some sub-programs will run. For example,
**test.py** is the main program, and under it **test1.py**, **test2.py**, and
**test3.pl** will run. I need to pass the arguments from the main
program (test.py) to the test1.py and test2.py programs as well. The arguments
should remain unchanged while being passed to the other programs.
code: **test.py**
import argparse
import subprocess
import os
commandLineArgumentParser = argparse.ArgumentParser()
commandLineArgumentParser.add_argument("-fname", "--fname",help="first name")
commandLineArgumentParser.add_argument("-lname","--lname", help="last name")
commandLineArgumentParser.add_argument("-age","--age", help="age")
commandLineArguments = commandLineArgumentParser.parse_args()
fname = commandLineArguments.fname
lname = commandLineArguments.lname
age = commandLineArguments.age
print "%s%s%s" %(fname,lname,age)
I am running the program with the commands below:
python test.py -fname=abc -age=22 -lname='test a'
or
python test.py -fname="abc test" lname='val' -age=30
or
python test.py -age=45 -lname='abc aa' fname="abc"
or
python test.py -age=45 -lname="test"
Now I want to grab the argument part unchanged and put it in one
variable, so that I can easily pass the arguments on to the other programs.
For the first command the variable will hold
-fname=abc -age=22 -lname='test a'
for the 2nd command
-fname="abc test" lname='val' -age=30
I was trying to grab the arguments using the code below, but the quotes are
dropped by the script.
my_argu=''
if len(sys.argv) >1:
for x in sys.argv:
if x == sys.argv[0]:
continue
if x == sys.argv[len(sys.argv)-1]:
my_argu =my_argu+x
else:
my_argu = my_argu+x+' '
print "%s" %(my_argu)
for the
python test.py -fname="abc test" lname='val' -age=30
the output is :
abc testval30
-fname=abc test lname=val -age=30
As you can see, the quotes are missing, so I need help to solve it.
Answer: Seems like you should pull these all together in one wrapper and call that
instead.
# wrapper.py
import test, test1, test2, test3
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("-fname", "--fname",help="first name")
parser.add_argument("-lname","--lname", help="last name")
parser.add_argument("-age","--age", help="age")
cli_args = parser.parse_args()
test.run(cli_args)
test1.run(cli_args)
test2.run(cli_args)
test3.run(cli_args)
Then in each of your `testN.py`s...
# test.py
def run(cli_args):
print("Your name is {} {}".format(cli_args.fname, cli_args.lname))
# or whatever your program does....
Then do:
$ python wrapper.py --fname Adam --lname Smith
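If you really do need to re-serialise sys.argv[1:] into a single shell-safe string (the part where the quotes were getting lost), quoting each argument does it -- a sketch:
    import sys
    try:
        from shlex import quote  # Python 3.3+
    except ImportError:
        from pipes import quote  # Python 2
    my_argu = ' '.join(quote(arg) for arg in sys.argv[1:])
    print(my_argu)  # e.g. -fname='abc test' lname=val -age=30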
|
Tweepy: Filtering stream gives multiple errors
Question: I'm trying to run the following script from this tutorial [tulane.edu-
twitter](http://www.tulane.edu/~howard/CompCultES/twitter.html) and I'm
getting the following errors:
File "GetTextFromTwitter.py", line 61, in <module>
stream.filter(track=['de'], languages=['es'])
File "/Users/enricok/anaconda/lib/python2.7/site-packages/tweepy/streaming.py", line 428, in filter
self._start(async)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/tweepy/streaming.py", line 346, in _start
self._run()
File "/Users/enricok/anaconda/lib/python2.7/site-packages/tweepy/streaming.py", line 239, in _run
verify=self.verify)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/adapters.py", line 370, in send
timeout=timeout
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 349, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 995, in request
self._send_request(method, url, body, headers)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 1029, in _send_request
self.endheaders(body)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 991, in endheaders
self._send_output(message_body)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 848, in _send_output
self.send(message_body)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 820, in send
self.sock.sendall(data)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 208, in sendall
sent = self._send_until_done(data)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 198, in _send_until_done
return self.connection.send(data)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/OpenSSL/SSL.py", line 947, in send
raise TypeError("data must be a byte string")
TypeError: data must be a byte string
I also tried to follow the tutorial on
[youtube](https://www.youtube.com/watch?v=pUUxmvvl2FE&spfreload=10) and I'm
getting the same errors:
File "tweepyTest.py", line 39, in <module>
twitterStream.filter(track=["car"])
File "/Users/enricok/anaconda/lib/python2.7/site-packages/tweepy/streaming.py", line 428, in filter
self._start(async)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/tweepy/streaming.py", line 346, in _start
self._run()
File "/Users/enricok/anaconda/lib/python2.7/site-packages/tweepy/streaming.py", line 239, in _run
verify=self.verify)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/adapters.py", line 370, in send
timeout=timeout
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 544, in urlopen
body=body, headers=headers)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 349, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 995, in request
self._send_request(method, url, body, headers)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 1029, in _send_request
self.endheaders(body)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 991, in endheaders
self._send_output(message_body)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 848, in _send_output
self.send(message_body)
File "/Users/enricok/anaconda/lib/python2.7/httplib.py", line 820, in send
self.sock.sendall(data)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 208, in sendall
sent = self._send_until_done(data)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 198, in _send_until_done
return self.connection.send(data)
File "/Users/enricok/anaconda/lib/python2.7/site-packages/OpenSSL/SSL.py", line 947, in send
raise TypeError("data must be a byte string")
TypeError: data must be a byte string
A couple of days ago it worked perfectly (I still have the .csv it returned).
But now, as with another script I tried, it gives me multiple errors when
using the `.filter` method. If I replace `.filter()` with `.sample()` though,
it works. The only thing I can remember is doing a `brew update` that might
have broken something. Here is the code for the first mentioned tutorial, which
tries to filter Spanish tweets. Any ideas how to fix this?
import tweepy
import requests
from tweepy import API
from tweepy import Stream
from tweepy import OAuthHandler
from tweepy.streaming import StreamListener
API_KEY = ''
API_SECRET = ''
ACCESS_TOKEN = ''
ACCESS_TOKEN_SECRET = ''
key = tweepy.OAuthHandler(API_KEY, API_SECRET)
key.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(key)
class Stream2Screen(tweepy.StreamListener):
def __init__(self, api=None):
self.api = api or API()
self.n = 0
self.m = 20
def on_status(self, status):
print status.text.encode('utf8')
self.n = self.n+1
if self.n < self.m:
return True
else:
print 'tweets = '+str(self.n)
return False
stream = tweepy.streaming.Stream(key, Stream2Screen())
stream.filter(track=['de'], languages=['es'])
# stream.sample()
Answer: It seems to be a connection error.
Install pyOpenSSL.
<https://pypi.python.org/pypi/pyOpenSSL>
|
how can I serve external folder with twisted?
Question: There is a captive portal script in Python that works with Twisted, but it loads
the portal page from inside the script (search for CAPTIVE_TEMPLATE):
<https://github.com/bendemott/captiveportal/blob/master/captiveportal>
How can I serve a full index.html folder with JS and CSS and other material
like docs and files with Twisted?
Best regards
Answer: Use File and pass it a directory.
from twisted.web.server import Site
from twisted.web.static import File
from twisted.internet import reactor
resource = File('/tmp')
factory = Site(resource)
reactor.listenTCP(8888, factory)
reactor.run()
See more [here](http://twistedmatrix.com/documents/12.1.0/web/howto/web-
in-60/static-content.html "here")
|
Possible to execute Python bytecode from a script?
Question: Say I have a running CPython session,
Is there a way to run the data (`bytes`) from a `pyc` file directly? _(without
having the data on-disk necessarily, and without having to write a temporary
pyc file)_
Example script to show a simple use-case:
if foo:
data = read_data_from_somewhere()
else:
data = open("bar.pyc", 'rb').read()
assert(type(data) is bytes)
code = bytes_to_code(data)
# call a method from the loaded code
code.call_function()
* * *
Exact use isn't so important, but generating code dynamically and copying over
a network to execute is one use-case.
* * *
Here are some example use-cases, which made me curious to know how this can be
done:
* Checking Python scripts for malicious code.
_If a single command can access a larger body of code hidden in binary data,
what would that command look like?_
* Dynamically generate code and cache it for re-use (not necessarily on disk, could use a data-base for example).
* Ability to send pre-compiled byte-code to a process, _control an application which embeds Python for eg_.
Answer: > Is there a way to run the data from a pyc file directly?
Yes, but the exact functions involved depend on the use case. An easy way to
do this is with [exec](https://docs.python.org/3/library/functions.html#exec).
Here is a small example:
spam = 'print(3)'
eggs = compile(spam, '<string>', 'exec')
exec(eggs)
edit:
The compiled code object can be saved using `marshal`
import marshal
bytes = marshal.dumps( eggs )
the bytes can be converted back to a code object
eggs = marshal.loads( bytes )
exec( eggs )
According to [Ned
Batchelder](http://nedbatchelder.com/blog/200804/the_structure_of_pyc_files.html)
a pyc file is basically just a marshaled code object.
* * *
Note, the link references Python2, but its almost the same in Python3, the
`pyc` header size is just 12 instead of 8 bytes. This needs to be skipped
before passing the data to `marshal.loads`.
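Putting those notes together, a hedged sketch of the bytes_to_code helper from the question might look like this (the 12-byte header is the CPython 3.3/3.4 layout; it is 8 bytes on Python 2 and larger again in later versions):
    import marshal
    def bytes_to_code(data, header_size=12):
        # turn the raw bytes of a .pyc file into a code object by skipping the header
        return marshal.loads(data[header_size:])
    # exec(bytes_to_code(open('bar.pyc', 'rb').read()))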
|
Import csv as list in python
Question: I am trying to import a csv file as a list:
file = open('curr.csv', 'rt')
f = file.read()
f = f.split(',')
print(f)
The csv file is only 'GBP, USD' so I want the list ['GBP', 'USD']. However the
result I get is:
['GBP', 'USD\n']
How do I stop \n from appearing on the last value?
Answer: You need to strip your lines, but the more Pythonic way is to use the [`csv`
module](https://docs.python.org/2/library/csv.html) for dealing with `csv`
files:
>>> import csv
>>> with open('curr.csv', 'rb') as csvfile:
... spamreader = csv.reader(csvfile, delimiter=',')
... print list(spamreader)
Note that this will return a nested list of your `csv` file rows. If you just
want the first row as a list, you can use the `next()` method of the `reader` object:
>>> import csv
>>> with open('curr.csv', 'rb') as csvfile:
... spamreader = csv.reader(csvfile, delimiter=',')
... print spamreader.next()
And if you want all of your rows flattened into a single list, you can use a nested list
comprehension:
>>> import csv
>>> with open('curr.csv', 'rb') as csvfile:
... spamreader = csv.reader(csvfile, delimiter=',')
... print [j for i in spamreader for j in i]
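For completeness, the strip-based fix hinted at above would look something like this applied to the original snippet:
    with open('curr.csv', 'rt') as f:
        data = [field.strip() for field in f.read().split(',')]
    print(data)  # -> ['GBP', 'USD']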
|
How to specify gamma distribution using shape and rate in Python?
Question: With [Scipy gamma
distribution](http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.gamma.html),
one can only specify shape, loc, and scale. How do I create a gamma variable
with shape and rate?
Answer: The inverse of the scale (`1/scale`) is the rate parameter.
So if you have `shape` and `rate`, you can create a gamma rv with this code:
>>> from scipy.stats import gamma
>>> rv = gamma(shape, scale = 1.0/rate)
Read more about different parametrizations of Gamma distribution on Wikipedia:
<http://en.wikipedia.org/wiki/Gamma_distribution>
|
Why is my computation so much faster in C# than Python
Question: Below is a simple piece of process coded in `C#` and `Python` respectively
(for those of you curious about the process, it's the solution for Problem No.
5 of [Project Euler](https://projecteuler.net/problem=5)).
My question is, the `C#` code below takes only 9 seconds to iterate, while
completion of `Python` code takes 283 seconds (to be exact, 283 seconds on
Python 3.4.3 - 64 bits and 329 seconds on Python 2.7.9 - 32 bits).
So far, I've coded similar processes both in `C#` and `Python` and the
execution time differences were comparable. This time however, there is an
extreme difference between the elapsed times.
I think some part of this difference arises from Python's flexible variable
typing (I suspect Python converts some of the variables into
doubles), but this much of a gap is still hard to explain.
What am I doing wrong?
My system: Windows-7 64 bits,
C# - VS Express 2012 (9 seconds)
Python 3.4.3 64 bits (283 seconds)
Python 2.7.9 32 bits (329 seconds)
c-sharp code:
using System;
namespace bug_vcs {
class Program {
public static void Main(string[] args) {
DateTime t0 = DateTime.Now;
int maxNumber = 20;
bool found = false;
long start = maxNumber;
while (!found) {
found = true;
int i = 2;
while ((i < maxNumber + 1) && found) {
if (start % i != 0) {
found = false;
}
i++;
}
start++;
}
Console.WriteLine("{0:d}", start - 1);
Console.WriteLine("time elapsed = {0:f} sec.", (DateTime.Now - t0).Seconds);
Console.ReadLine();
}
}
}
and python code:
from datetime import datetime
t0 = datetime.now()
max_number = 20
found = False
start = max_number
while not found:
found = True
i = 2
while ((i < max_number + 1) and found):
if (start % i) != 0:
found = False
i += 1
start += 1
print("number {0:d}\n".format(start - 1))
print("time elapsed = {0:f} sec.\n".format((datetime.now() - t0).seconds))
Answer: The answer is simply that Python deals with objects for everything and that it
doesn't have [JIT](http://en.wikipedia.org/wiki/Just-in-time_compilation) by
default. So rather than being very efficient by modifying a few bytes on the
stack and optimizing the hot parts of the code (i.e., the iteration) β Python
chugs along with rich objects representing numbers and no on-the-fly
optimizations.
If you tried this in a variant of Python that has JIT (for example, PyPy) I
guarantee you that you'll see a massive difference.
A general tip is to avoid standard Python for very computationally expensive
operations (especially if this is for a backend serving requests from multiple
clients). Java, C#, JavaScript, etc. with JIT are incomparably more efficient.
By the way, if you want to write your example in a more Pythonic manner, you
could do it like this:
from datetime import datetime
start_time = datetime.now()
max_number = 20
x = max_number
while True:
i = 2
while i <= max_number:
if x % i: break
i += 1
else:
# x was not divisible by 2...20
break
x += 1
print('number: %d' % x)
print('time elapsed: %d seconds' % (datetime.now() - start_time).seconds)
The above executed in 90 seconds for me. The reason it's faster relies on
seemingly stupid things like `x` being shorter than `start`, that I'm not
assigning variables as often, and that I'm relying on Python's own control
structures rather than variable checking to jump in/out of loops.
|
wxPython - Set Items in ListCtrl and Get Selected Item
Question: I have the following code for creating a ListCtrl called "browser list".
    self.browserList=wx.ListCtrl(panel, pos=(20,150), size=(250,100), style=wx.LC_REPORT|wx.BORDER_SUNKEN)
self.browserList.InsertColumn(0, '')
self.browserList.InsertColumn(1, 'Browser: ')
self.browserList.SetColumnWidth(0, 50)
self.browserList.SetColumnWidth(1, 200)
I wanted to add the following to add these strings as items, but it puts them in
the 1st column, whereas I need them in the 2nd column:
self.browserList.InsertStringItem(1, 'Google Chrome')
self.browserList.InsertStringItem(2, 'Mozilla Firefox')
Also, how can I get the selected item and store it in a variable?
Answer: It's a bit more complicated than that. You insert the item and then you use
SetStringItem to insert data into other columns. Here's a quick and dirty
example:
import wx
########################################################################
class MyForm(wx.Frame):
#----------------------------------------------------------------------
def __init__(self):
wx.Frame.__init__(self, None, wx.ID_ANY, "List Control Tutorial")
# Add a panel so it looks correct on all platforms
panel = wx.Panel(self, wx.ID_ANY)
self.index = 0
self.list_ctrl = wx.ListCtrl(panel, size=(-1,100),
style=wx.LC_REPORT
|wx.BORDER_SUNKEN
)
self.list_ctrl.InsertColumn(0, '', width=50)
self.list_ctrl.InsertColumn(1, 'Browser', width=200)
# add some browsers
self.list_ctrl.InsertStringItem(0, "foo")
self.list_ctrl.SetStringItem(0, 1, "Google Chrome")
self.list_ctrl.InsertStringItem(1, "bar")
self.list_ctrl.SetStringItem(1, 1, "Mozilla Firefox")
sizer = wx.BoxSizer(wx.VERTICAL)
sizer.Add(self.list_ctrl, 0, wx.ALL|wx.EXPAND, 5)
panel.SetSizer(sizer)
#----------------------------------------------------------------------
# Run the program
if __name__ == "__main__":
app = wx.App(False)
frame = MyForm()
frame.Show()
app.MainLoop()
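The question also asked how to read back the selected item, which the example above doesn't cover. Here is a hedged sketch of one common way to do it with the ListCtrl above; the handler name and the `self.selected_browser` attribute are illustrative, not part of the original code:
# inside MyForm.__init__, after creating self.list_ctrl:
self.list_ctrl.Bind(wx.EVT_LIST_ITEM_SELECTED, self.on_item_selected)

# added as a method of MyForm:
def on_item_selected(self, event):
    index = event.GetIndex()  # row that was clicked
    # read the text of column 1 (the "Browser" column) for that row
    self.selected_browser = self.list_ctrl.GetItem(index, 1).GetText()
    print(self.selected_browser)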
I personally prefer using ObjectListView instead of the ListCtrl. I just think
it works better. But you might find my old tips and tricks tutorial useful:
* <http://www.blog.pythonlibrary.org/2011/01/04/wxpython-wx-listctrl-tips-and-tricks/>
And if you decide to switch, then you might find this one helpful as well:
* <http://www.blog.pythonlibrary.org/2009/12/23/wxpython-using-objectlistview-instead-of-a-listctrl/>
|
Python 3 - Multiprocessing - Queue.get() does not respond
Question: I want to make a brute force attack and therefore need some speed... So I decided to use the multiprocessing library... However, in every tutorial I've found, something did not work... Hm... This one seems to work very well, except that whenever I call the get() function, IDLE seems to go to sleep and doesn't respond at all. Am I just silly or what? I just copy-pasted the example, so it should have worked...
import multiprocessing as mp
import random
import string
# Define an output queue
output = mp.Queue()
# define a example function
def rand_string(length, output):
""" Generates a random string of numbers, lower- and uppercase chars. """
rand_str = ''.join(random.choice(
string.ascii_lowercase
+ string.ascii_uppercase
+ string.digits)
for i in range(length))
output.put(rand_str)
# Setup a list of processes that we want to run
processes = [mp.Process(target=rand_string, args=(5, output)) for x in range(2)]
# Run processes
for p in processes:
p.start()
# Exit the completed processes
for p in processes:
p.join()
# Get process results from the output queue
results = [output.get() for p in processes]
print(results)
Answer: @dano hit it on the head! You don't have `if __name__ == "__main__":` so you
have a "fork bomb". That is, each process is running the processes, and so on.
You will also notice that I have moved the creation of the queue.
import multiprocessing as mp
import random
import string
# define a example function
def rand_string(length, output):
""" Generates a random string of numbers, lower- and uppercase chars. """
rand_str = ''.join(random.choice(
string.ascii_lowercase
+ string.ascii_uppercase
+ string.digits)
for i in range(length))
output.put(rand_str)
if __name__ == "__main__":
# Define an output queue
output = mp.Queue()
# Setup a list of processes that we want to run
processes = [mp.Process(target=rand_string, args=(5, output)) for x in range(2)]
# Run processes
for p in processes:
p.start()
# Exit the completed processes
for p in processes:
p.join()
# Get process results from the output queue
results = [output.get() for p in processes]
print(results)
What happens is that `multiprocessing` runs each child process as a module, so
`__name__` is only `__main__` in the parent. If you don't have that then each
child process will (attempt to) start two more processes, each of which will
start two more, and so on. No wonder IDLE stops.
|
Python: Logging into website with python using GET request
Question: I'm using python 3.4.3 and I'm trying to login to OKCupid using requests.
The page that my code returns is the initial login page, not the page a user
would see after successfully logging in. I have tried looking at several
answers on here and other tutorials and most of them direct me to inspect the
developer tab and look at requests with method "POST", but I do not see any
such requests.
Instead, I see "GET" requests and I'm unsure how requests handle those. I have
tried a number of different approaches, but none worked. Here is the simpler
code I have:
import requests
from bs4 import BeautifulSoup
user='USERNAME'
pw='PASSWORD'
url='http://www.okcupid.com/login'
session=requests.session()
values = {'login_username':user, 'login_password':pw}
r = session.post(url,data=values)
soup = BeautifulSoup(r.content)
pSoup = BeautifulSoup.prettify(soup)
print(soup.title.string)
Answer: I was able to figure it out. In case this is helpful to someone in the future:
there were two things preventing my previous code from working:
1. I needed to specify 'https' instead of 'http' in the url.
2. I was missing the 'okc_api' value in the values dictionary. I hadn't detected this previously because Chrome's Developer tools did not have "Preserve log" checked. As a result, Chrome was erasing the login "POST" request before I could look at the "Form Data" values.
Here is the revised code:
import requests
from bs4 import BeautifulSoup
user='USERNAME'
pw='PASSWORD'
url='https://www.okcupid.com/login'
session=requests.session()
values = {'username': user, 'password': pw, 'okc_api': '1'}
session.post(url, data=values)
page = session.get('http://www.okcupid.com/')
soup = BeautifulSoup(page.content)
print(soup.title.string)
session.close()
|
Bad request API google Python
Question: I was usually able to send the request with the code below, but out of nowhere it started failing and will not send. Could anyone help me?
The code:
import urllib2
import json
url = "https://www.googleapis.com/qpxExpress/v1/trips/search?key=AIzaSyBtRCSB8tZ7u0F9U6txJeMwMcYRdJ9b4B8"
code = {
"request": {
"passengers": {
"adultCount": 1,
"childCount": 1,
"infantInSeatCount": 1
},
"slice": [
{
"origin": "SSA",
"destination": "REC",
"date": "2015-05-01",
"maxStops": 1,
"permittedDepartureTime": {
"earliestTime": "09:00",
"latestTime": "16:00"
}
},
{
"origin": "REC",
"destination": "SSA",
"date": "2015-05-20",
"maxStops": 1
}
],
"maxPrice": "BRL 200.00",
"refundable": "false",
"solutions": 20
}
}
jsonreq = json.dumps(code, encoding = 'utf-8')
req = urllib2.Request(url, jsonreq, {'Content-Type': 'application/json'})
flight = urllib2.urlopen(req)
response = flight.read()
flight.close()
print(response)
The Error:
Traceback (most recent call last):
File "C:\Users\Francisco\Desktop\Projeto\test1.py", line 41, in <module>
flight = urllib2.urlopen(req)
File "C:\Python27\lib\urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 437, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 550, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 475, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 409, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 558, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 400: Bad Request
Answer: The 400 error response will include information about what is incorrect:
> Invalid inputs for request id \"nL8TcZS7RG7DPB70m0MNw9\": Brl 200.00 is not
> a valid value for x.maxPrice."
In this case the space is causing the trouble. Replace
BRL 200.00
with
BRL200.00
Link to documentation for [maxPrice](https://developers.google.com/qpx-
express/v1/trips/search)
|
Multiprocessing code works upon import, breaks upon being called
Question: In a file called `test.py` I have
print 'i am cow'
import multi4
print 'i am cowboy'
and in `multi4.py` I have
import multiprocessing as mp
manager = mp.Manager()
print manager
I am confused by the way this code operates.
At the command line, if I type `python` and then in the python environment if
I type `import test.py` I get the expected behavior:
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>import test
i am cow
<multiprocessing.managers.SyncManager object at 0x025209B0>
i am cowboy
>>>
However if I type `test.py` at the command line, I get
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
i am cow
Which would presumably go on forever unless I kill it. When I kill it, I get a
bunch of repeated errors:
KeyboardInterrupt
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python27\lib\multiprocessing\forking.py", line 373, in main
prepare(preparation_data)
File "C:\Python27\lib\multiprocessing\forking.py", line 488, in prepare
'__parents_main__', file, path_name, etc
KeyboardInterrupt
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python27\lib\multiprocessing\forking.py", line 373, in main
prepare(preparation_data)
File "C:\Python27\lib\multiprocessing\forking.py", line 488, in prepare
'__parents_main__', file, path_name, etc
KeyboardInterrupt
So what is going on? Why does it behave one way under import, and another when
I try to run it?
Answer: `multiprocessing` [won't work
properly](https://docs.python.org/2/library/multiprocessing.html#using-a-pool-
of-workers) in the interactive prompt on Windows, because it can't properly
re-import `__main__` in the child processes it spawns. However, that actually _helps_ you here, since it keeps the `manager = mp.Manager()` line
from being recursively executed in the child process that spawns when the
`Manager` starts up.
In the actual script, however, the child can re-import `__main__` properly.
You're seeing the infinite recursion because you're not protecting the call to
`mp.Manager()` with an `if __name__ == "__main__":` guard, which is [required
on Windows](https://docs.python.org/2/library/multiprocessing.html#windows) to
prevent `mp.Manager()` from being executed in the child when the re-import
occurs:
import multiprocessing as mp
if __name__ == "__main__":
manager = mp.Manager()
print manager
**Edit:**
Your example, where your main script (test.py) imports the module that creates
the `Manager`, needs a bit of refactoring. You need to instantiate the
`Manager` from the main script, by calling a method in the module that
actually uses `multiprocessing`:
print 'i am cow'
import multi4
if __name__ == "__main__":
multi4.init_manager()
print 'i am cowboy'
**multi4.py**
import multiprocessing as mp
manager = None
def init_manager():
global manager
manager = mp.Manager()
This is the only way to make sure you only create the `Manager` when you're
actually executing a script.
|
Use size in Python
Question: I created an ndarray in Python
temp = np.array([1, 2, 3, 4])
To measure the length of this array, I can use
temp.size
or
np.size(temp)
both return 4. But I'm wondering what's the difference between the two
expressions? Also, to get the lena image, I need to write
>>> import scipy.misc
>>> lena = scipy.misc.lena()
I'm wondering why there's a bracket pair after lena? Isn't lena a matrix?
Something with () is like a function. I understand lena() is a function that takes no inputs and returns an ndarray. I just feel like it's tedious to write it this way.
In Matlab, it's quite clear how to distinguish between a constant and a function. A function is defined and called with (), but a constant (or pre-stored data) can be referenced directly, e.g., "blobs.png".
Answer: `np.size(temp)` is a little more general than `temp.size`. At first glance,
they appear to do the same thing:
>>> x = np.array([[1,2,3],[4,5,6]])
>>> x.size
6
>>> np.size(x)
6
This is true when you don't supply any additional arguments to `np.size`. But
if you look at the
[documentation](http://docs.scipy.org/doc/numpy/reference/generated/numpy.ma.size.html#numpy.ma.size)
for `np.size`, you'll see that it accepts an additional `axis` parameter,
which gives the size along the corresponding axis:
>>> np.size(x, 0)
2
>>> np.size(x, 1)
3
As far as your second question, `scipy.misc.lena` _is_ a function as you point
out. It is not a matrix. It is a function returning a matrix. The function
(presumably) loads the data on the fly so that it isn't placed in memory
whenever you import the `scipy.misc` module. This is a _good_ thing, and
actually not all that different than matlab.
|
How do I create a CSV file from a model & then add it as an email attachment?
Question: I'm trying to generate a CSV from an object stored in my database and then add
the csv to an email as an attachment.
I'm using Django 1.7 & Python 2.7.
I'm getting this error with the following code: "AttributeError: '_csv.writer'
object has no attribute 'encode'"
Here's my code:
def export_as_csv(report, fields, force_fields):
print "export to csv started"
"""
Generic csv export admin action.
based on http://djangosnippets.org/snippets/2020/
extended for being able to give list_display as fields and work with admin-defined functions
"""
opts = report
if not force_fields:
field_names = set([field.name for field in opts.fields])
if fields:
fieldset = set(fields)
field_names = field_names & fieldset
elif fields:
field_names = set(fields)
else:
raise("option force_fields can only be used in parallel with option fields")
writer = csv.writer(open("myfile.csv","w"), delimiter=',',quoting=csv.QUOTE_ALL)
writer.writerow(list(field_names))
row = []
for field in field_names:
try:
row.append(unicode(getattr(report, field)).encode('utf-8'))
except AttributeError:
row.append(unicode((getattr(report, field)(report))).encode('utf-8'))
except:
raise
writer.writerow(row)
response = writer
return response
The code below returns the above function's output as a parameter in
mail.attach(). The report parameter is my DB model/object that's created &
saved before any of this code is called.
list_display = ('primary_id', 'created_date', 'report_size',)
mail = EmailMessage('Email Subject', 'Email Body', None, ['[email protected]'])
mail.attach("report.csv", export_as_csv(report, fields = list_display, force_fields=True) , "text/csv")
mail.send()
Here's the model for a Report() object:
class Report(models.Model):
primary_id = models.IntegerField('Primary Key', primary_key = True, default=0)
created_date = models.DateTimeField('Created date', default=timezone.now)
report_size = models.IntegerField('Report Size', default=0)
Any help is greatly appreciated.
Here's how I ideally want this code to work: export_as_csv() returns a CSV
object that I don't need to save as a file or in my database - then the code
attaches that to the email directly from a CSV object in memory, not a saved
file. Not sure if this is possible, but that's ideal for my needs.
Answer: `csvwriter` can write to any object with a `write()` method, according to [the
docs](https://docs.python.org/2/library/csv.html#csv.writer). The usual Python
class to use when you want an in-memory filelike object is a
[`StringIO`](https://docs.python.org/2/library/stringio.html) instance, or
`cStringIO` if you're sure you won't be writing Unicode objects to it. So your
`export_as_csv` function should look something like:
import cStringIO
def export_as_csv(report, fields, force_fields):
# define field_names as above, assuming your indentation is correct
csv_buffer = cStringIO.StringIO()
writer = csv.writer(csv_buffer, delimiter=',', quoting=csv.QUOTE_ALL)
field_names = list(field_names) # loses order, but at least it's consistent
writer.writerow(field_names)
row = []
for field in field_names:
row.append(getattr(report, field).encode('utf-8'))
writer.writerow(row)
return csv_buffer
Then something like:
mail.attach("report.csv", export_as_csv(report, fields=list_display, force_fields=True).getvalue() , "text/csv")
Key differences are:
1) `csv_buffer` is an in-memory filelike object. Nothing is written to the
file system.
2) This handles field lookup for the simple model fields `'primary_id',
'created_date', 'report_size'` that you show in your example code. If you
actually need to handle the names of callables in your `field_names` sequence, it gets harder (see the sketch after this list).
3) This uses a single variable to hold the `field_names` after converting to a
list. It will probably work to use `list(field_names)` and `for field in field_names` while `field_names` is a set, since Python sets and dictionaries should be order-stable as long as no modifications are made, but I find it clearer and more reliable to be explicit about making sure the order syncs up.
4) This returns the `cStringIO.StringIO` object rather than the writer. The
writer's work is done once you've written everything you want to it. The
`cStringIO` object, conversely, should return the buffered CSV contents when
you call `getvalue()` on it.
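For point 2, here is a hedged sketch of one way to handle callables, assuming any callable named in `field_names` is a zero-argument method on the model (the question's original code instead passed `report` into the callable, so adjust to match your actual fields):
for field in field_names:
    value = getattr(report, field)
    if callable(value):
        value = value()  # e.g. a model method that takes no arguments
    row.append(unicode(value).encode('utf-8'))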
|
Missing SimpleJson Module
Question: I have an Ansible playbook that runs against a machine with CentOS release 5.6 (Final). I have simplejson installed on the target machine and the module is importable from the python interpreter. But my playbook still fails with the error below.
Error: ansible requires a json module, none found!
I am confirming the presence of the simplejson module at runtime with a raw task as shown below.
---
-
gather_facts: false
hosts: "{{ host_group }}"
name: deploy
vars_files:
- "{{env}}.yml"
tasks:
- name: check python version
raw: python -c "import simplejson"
- name: "git checkout"
git: "repo={{repository}} dest={{base_dir}} version={{branch}}"
The first step succeeds without any issue as shown below
TASK: [check python version] **************************************************
ok: [my-target-machine] => {"rc": 0, "stderr": "", "stdout": ""}
but the second fails with the above said error of missing json module.
Answer: This could be happening because you have two versions of python: the system
python at `/usr/bin/python` and another python at perhaps
`/usr/local/bin/python`. If the first-on-path python is >=2.5 or otherwise has
simplejson in its site-packages, the first task would execute fine. However,
if you've not installed simplejson for the system python at `/usr/bin/python`
(easiest to just `sudo yum -y install python-simplejson`), then the `git` task
could fail.
Standard ansible modules always use the `#!/usr/bin/python` shebang, and the
[git module](https://github.com/ansible/ansible-modules-
core/blob/devel/source_control/git.py) is no exception.
Also, from the [ansible documentation](http://docs.ansible.com/faq.html#how-
do-i-handle-python-pathing-not-having-a-python-2-x-in-usr-bin-python-on-a-
remote-machine):
> By default Ansible assumes it can find a /usr/bin/python on your remote
> system that is a 2.X version of Python, specifically 2.4 or higher.
>
> Setting of an inventory variable βansible_python_interpreterβ on any host
> will allow Ansible to auto-replace the interpreter used when executing
> python modules. Thus, you can point to any python you want on the system if
> /usr/bin/python on your system does not point to a Python 2.X interpreter.
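For completeness, setting that inventory variable is a one-line change in the hosts file; a sketch, where the interpreter path is an assumption about where the Python that actually has simplejson lives on the target:
my-target-machine ansible_python_interpreter=/usr/local/bin/python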
|
how to parse a particular output like in rows as the output comes in a clubbed way in python?
Question: When I run the code below, the output comes out as shown in "Actual output" below. I need the output to show in rows **and** only the values under the Caption column.
Desired output:
caption : 3PAR
3PAR
Actual output:
('Caption DeviceID Model Partitions Size \r\r\n3PARdata VV SCSI Disk Device \\\\.\\PHYSICALDRIVE19 3PARdata VV SCSI Disk Device 0 1069286400 \r\r\nHP P2000 G3 FC SCSI Disk Device \\\\.\\PHYSICALDRIVE1 HP P2000 G3 FC SCSI Disk Device 1 49993251840 \r\r\nHP HSV360 SCSI Disk Device \\\\.\\PHYSICALDRIVE7 HP HSV360 SCSI Disk Device 4 1069286400 \r\r\nHP HSV360 SCSI Disk Device \\\\.\\PHYSICALDRIVE27 HP HSV360 SCSI Disk Device 0 1069286400 \r\r\nHP HSV360 SCSI Disk Device \\\\.\\PHYSICALDRIVE5 HP HSV360 SCSI Disk Device 0 1069286400 \r\r\nHP P2000 G3 FC SCSI Disk Device \\\\.\\PHYSICALDRIVE23 HP P2000 G3 FC SCSI Disk Device 1 49993251840 \r\r\n3PARdata VV SCSI Disk Device \\\\.\\PHYSICALDRIVE13 3PARdata
Code:
p5=subprocess.Popen("rsh -l Administrator 10.10.11.37 \"wmic diskdrive list brief\"",stdout=subprocess.PIPE, shell=True)
result = p5.communicate()
status = p5.wait()
print(result),
Answer: If you only want the values under `Caption` then use `re` to split on the
space after `Disk Device` which is common in all Caption rows:
result = """Caption DeviceID Model Partitions Size \r\r\n3PARdata VV SCSI Disk Device \\\\.\\PHYSICALDRIVE19 3PARdata VV SCSI Disk Device 0 1069286400 \r\r\nHP P2000 G3 FC SCSI Disk Device \\\\.\\PHYSICALDRIVE1 HP P2000 G3 FC SCSI Disk Device 1 49993251840 \r\r\nHP HSV360 SCSI Disk Device \\\\.\\PHYSICALDRIVE7 HP HSV360 SCSI Disk Device 4 1069286400 \r\r\nHP HSV360 SCSI Disk Device \\\\.\\PHYSICALDRIVE27 HP HSV360 SCSI Disk Device 0 1069286400 \r\r\nHP HSV360 SCSI Disk Device \\\\.\\PHYSICALDRIVE5 HP HSV360 SCSI Disk Device 0 1069286400 \r\r\nHP P2000 G3 FC SCSI Disk Device \\\\.\\PHYSICALDRIVE23 HP P2000 G3 FC SCSI Disk Device 1 49993251840 \r\r\n3PARdata VV SCSI Disk Device \\\\.\\PHYSICALDRIVE13 3PARdata"""
import re
spl = result.splitlines()
print(spl[0].split()[0].rstrip())
for line in spl[1:]:
if line:
print(re.split("(?<=Disk Device)\s+",line,1)[0])
Caption
3PARdata VV SCSI Disk Device
HP P2000 G3 FC SCSI Disk Device
HP HSV360 SCSI Disk Device
HP HSV360 SCSI Disk Device
HP HSV360 SCSI Disk Device
HP P2000 G3 FC SCSI Disk Device
3PARdata VV SCSI Disk Device
|
How to install Python 2.7 on Ubuntu 9.10
Question: We're now developing our software on the customer's side, where the following system is installed:
maestro@UIServer:~$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=9.10
DISTRIB_CODENAME=karmic
DISTRIB_DESCRIPTION="Ubuntu 9.10"
We're not allowed to upgrade this system to a newer version, but we need to use Python 2.7 in our project.
E.g. we have to use pymorphy2 package, but when we're trying to import it into
project, we get:
>>> import pymorphy2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/site-packages/pymorphy2/__init__.py", line 3, in <module>
from .analyzer import MorphAnalyzer
File "/usr/local/lib/python2.7/site-packages/pymorphy2/analyzer.py", line 10, in <module>
from pymorphy2 import opencorpora_dict
File "/usr/local/lib/python2.7/site-packages/pymorphy2/opencorpora_dict/__init__.py", line 4, in <module>
from .storage import load_dict as load
File "/usr/local/lib/python2.7/site-packages/pymorphy2/opencorpora_dict/storage.py", line 24, in <module>
from pymorphy2.utils import json_write, json_read
File "/usr/local/lib/python2.7/site-packages/pymorphy2/utils.py", line 5, in <module>
import bz2
ImportError: No module named bz2
Ok, we're trying to install libbz2-dev:
sudo apt-get install libbz2-dev
and get this:
ValueError: /usr/bin/python does not match the python default version. It must be reset to point to python2.6
dpkg: error processing python-pip (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
python-pip
E: Sub-process /usr/bin/dpkg returned an error code (1)
How to avoid this problem?
Thanks in advance!
Answer: Download [python](https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz),
build and install using :
$ ./configure
$ make
$ make install
I am assuming you have `build-essential` installed or at least `gcc`. You can
customize installation by passing
`prefix=/path/where/you/want/python/installed` and other flags to `make`.
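One caution worth adding, since the error above came from the system expecting `/usr/bin/python` to stay on 2.6: it is usually safer to install the new interpreter under its own prefix and use `altinstall`, so the system `python` binary is left untouched. A sketch (the prefix path is just an example):
$ ./configure --prefix=/opt/python2.7
$ make
$ sudo make altinstall
$ /opt/python2.7/bin/python2.7 --version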
|
Unable to identify link class
Question: I am very new to programming and Python and am trying to code this simple
scraper to extract all the profile URLs of therapists from this page
[http://www.therapy-
directory.org.uk/search.php?search=Sheffield&services[23]=1&business_type[individual]=1&distance=40&uqs=626693](http://www.therapy-
directory.org.uk/search.php?search=Sheffield&services\[23\]=1&business_type\[individual\]=1&distance=40&uqs=626693)
import requests
from bs4 import BeautifulSoup
def tru_crawler(max_pages):
p = '&page='
page = 1
while page <= max_pages:
url = 'http://www.therapy-directory.org.uk/search.php?search=Sheffield&distance=40&services[23]=on&services=23&business_type[individual]=on&uqs=626693' + p + str(page)
code = requests.get(url)
text = code.text
soup = BeautifulSoup(text)
for link in soup.findAll('a',{'member-summary':'h2'}):
href = 'http://www.therapy-directory.org.uk' + link.get('href')
yield href + '\n'
print(href)
page += 1
Now when I run this code, I get nothing, primarily because the soup.findAll result is empty.
The HTML of the profile link shows
<div class="member-summary">
<h2 class="">
<a href="/therapists/julia-church?uqs=626693">Julia Church</a>
</h2>
So I am not sure what additional parameters to pass to soup.findAll('a') in
order to get the Profile URLs
Please help
Thanks
**Update -**
I ran the revised code; this time, after it scraped page 1, it returned a bunch of errors:
Traceback (most recent call last):
File "C:/Users/PB/PycharmProjects/crawler/crawler-revised.py", line 19, enter code here`in <module>
tru_crawler(3)
File "C:/Users/PB/PycharmProjects/crawler/crawler-revised.py", line 9, in tru_crawler
code = requests.get(url)
File "C:\Python27\lib\requests\api.py", line 68, in get
return request('get', url, **kwargs)
File "C:\Python27\lib\requests\api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "C:\Python27\lib\requests\sessions.py", line 464, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python27\lib\requests\sessions.py", line 602, in send
history = [resp for resp in gen] if allow_redirects else []
File "C:\Python27\lib\requests\sessions.py", line 195, in resolve_redirects
allow_redirects=False,
File "C:\Python27\lib\requests\sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "C:\Python27\lib\requests\adapters.py", line 415, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine("''",))
What's going wrong here?
Answer: Currently, the parameter you pass to `findAll()` doesn't make sense. It reads: find all `<a>` tags having a `member-summary` attribute equal to "h2".
One possible way is to use the `select()` method, passing a [CSS
selector](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors)
as the parameter:
for link in soup.select('div.member-summary h2 a'):
href = 'http://www.therapy-directory.org.uk' + link.get('href')
yield href + '\n'
print(href)
The CSS selector above reads: find a `<div>` tag with class "member-summary", then within that `<div>` find an `<h2>` tag, then within that `<h2>` find an `<a>` tag.
Working example:
import requests
from bs4 import BeautifulSoup
p = '&page='
page = 1
url = 'http://www.therapy-directory.org.uk/search.php?search=Sheffield&distance=40&services[23]=on&services=23&business_type[individual]=on&uqs=626693' + p + str(page)
code = requests.get(url)
text = code.text
soup = BeautifulSoup(text)
for link in soup.select('div.member-summary h2 a'):
href = 'http://www.therapy-directory.org.uk' + link.get('href')
print(href)
Output (trimmed, from total 26 links) :
http://www.therapy-directory.org.uk/therapists/lesley-lister?uqs=626693
http://www.therapy-directory.org.uk/therapists/fiona-jeffrey?uqs=626693
http://www.therapy-directory.org.uk/therapists/ann-grant?uqs=626693
.....
.....
http://www.therapy-directory.org.uk/therapists/jan-garbutt?uqs=626693
|
py2exe: Skip archive ignored
Question: I'm probably making a stupid error somewhere but I can't find it. Here is my
setup.py:
import py2exe
from distutils.core import setup
setup(windows=['Gui.py'],
data_files = [('Drivers', ['Drivers/chromedriver.exe',
'Drivers/IEDriverServer.exe']
)],
options={
"py2exe":{
"skip_archive": True,
"unbuffered": True,
"optimize": 2
}
})
The command I'm running is:
**python setup.py py2exe**
All of my files are in the same directory as setup.py (no subdirectories).
I'm using:
* **python 3.4.2**
* **py2exe 0.9.2.2**
The problem I'm getting is that in /dist the archive.zip is still there and is not split into subdirectories.
Any help would be appreciated.
Answer: Your data_files definition doesn't look right, shouldn't it be:
data_files = [('Drivers', ['Drivers/chromedriver.exe',
'Drivers/IEDriverServer.exe'])],
i.e. a ',' after 'Drivers'
Is the error you are getting an exception? or ....?
|
Extracing song length and size from HTML using Python
Question: I am making a simple mp3 downloader for a website. In doing so I got stuck while parsing the time and size of the audio:
<div class="mp3-info">
1.69 mins
<br/>
2.33 mb
</div>
Now I need to parse `1.69 mins` and `2.33 mb` from the above HTML. I am using Python 3.4.
Answer: I would use [BeautifulSoup4](http://www.crummy.com/software/BeautifulSoup/) to
parse your HTML. See docs
[here](http://www.crummy.com/software/BeautifulSoup/bs4/doc/).
from bs4 import BeautifulSoup  # bs4 import, matching the BeautifulSoup4 link above
soup = BeautifulSoup(your_html_string)
soup.findAll("div", {"class": "mp3-info"})
# Now extract the text
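# A hedged continuation (not from the original answer), assuming the exact
# <div> layout shown in the question; the <br/> separates the two text nodes,
# so stripped_strings yields them in order:
div = soup.find("div", {"class": "mp3-info"})
length, size = list(div.stripped_strings)  # ['1.69 mins', '2.33 mb']
print(length)  # 1.69 mins
print(size)    # 2.33 mb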
Also because it's a class, it could be that there are multiple ones on the
page...
|
Python script to harvest tweets to a MongoDb works with users but not hashtags. Any ideas why not?
Question: I'm playing around with the Twitter API and am in the process of developing a script to pull all tweets with a certain hashtag down to a local MongoDB. I have it working fine when I'm downloading tweets from users, but when
downloading tweets from a hashtag I get:
return loads(fp.read(),
AttributeError: 'int' object has no attribute 'read'
Can anyone offer their infinite wisdom into how I could get this script to
work?
To run, save it as a .py file, cd to the folder and run:
python twitter.py
Code:
__author__ = 'Tom Cusack'
import pymongo
import oauth2 as oauth
import urllib2, json
import sys, argparse, time
def oauth_header(url, consumer, token):
params = {'oauth_version': '1.0',
'oauth_nonce': oauth.generate_nonce(),
'oauth_timestamp': int(time.time()),
}
req = oauth.Request(method = 'GET',url = url, parameters = params)
req.sign_request(oauth.SignatureMethod_HMAC_SHA1(),consumer, token)
return req.to_header()['Authorization'].encode('utf-8')
def main():
### Twitter Settings
numtweets = '32000'
verbose = 'store_true'
retweet = 'store_false'
CONSUMER_KEY = 'M7Xu9Wte0eIZvqhb4G9HnIn3G'
CONSUMER_SECRET = 'c8hB4Qwps2aODQUx7UsyzQuCRifEp3PKu6hPQll8wnJGIhbKgZ'
ACCESS_TOKEN = '3213221313-APuXuNjVMbRbZpu6sVbETbgqkponGsZJVT53QmG'
ACCESS_SECRET = 'BJHrqWC9ed3pA5oDstSMCYcUcz2pYF3DmJ7jcuDe7yxvi'
base_url = url = 'https://api.twitter.com/1.1/search/tweets.json?include_entities=true&count=200&q=#mongodb&include_rts=%s' % (retweet)
oauth_consumer = oauth.Consumer(key = CONSUMER_KEY, secret = CONSUMER_SECRET)
oauth_token = oauth.Token(key = ACCESS_TOKEN, secret = ACCESS_SECRET)
### Mongodb Settings
uri = 'mongodb://127.0.0.1:27017/SARKY'
if uri != None:
try:
conn = pymongo.MongoClient(uri)
print 'Pulling Tweets..'
except:
print 'Error: Unable to connect to DB. Check uri variable.'
return
uri_parts = pymongo.uri_parser.parse_uri(uri)
db = conn[uri_parts['database']]
db['twitter-harvest'].ensure_index('id_str')
### Helper Variables for Harvest
max_id = -1
tweet_count = 0
stream = 0
### Begin Harvesting
while True:
auth = oauth_header(url, oauth_consumer, oauth_token)
headers = {"Authorization": auth}
request = urllib2.Request(url, headers = headers)
try:
stream = urllib2.urlopen(request)
except urllib2.HTTPError, err:
if err.code == 404:
print 'Error: Unknown user. Check --user arg'
return
if err.code == 401:
print 'Error: Unauthorized. Check Twitter credentials'
return
tweet_list = json.load(stream)
if len(tweet_list) == 0:
print 'No tweets to harvest!'
return
if 'errors' in tweet_list:
print 'Hit rate limit, code: %s, message: %s' % (tweets['errors']['code'], tweets['errors']['message'])
return
if max_id == -1:
tweets = tweet_list
else:
tweets = tweet_list[1:]
if len(tweets) == 0:
print 'Finished Harvest!'
return
for tweet in tweets:
max_id = id_str = tweet['id_str']
try:
if tweet_count == numtweets:
print 'Finished Harvest- hit numtweets!'
return
if uri != None:
db[user].update({'id_str':id_str},tweet,upsert = True)
else:
print tweet['text']
tweet_count+=1
if verbose == True and uri != None:
print tweet['text']
except Exception, err:
print 'Unexpected error encountered: %s' %(err)
return
url = base_url + '&max_id=' + max_id
if __name__ == '__main__':
try:
main()
except SystemExit as e:
if e.code == 0:
pass
Answer: You initially set `stream = 0`. When your `try...except` block catches an HTTP error with a code that isn't 404 or 401, `stream` is still equal to `0`, but your `except` block doesn't return from the function. Execution then falls through to `json.load(stream)`, which tries to call `.read()` on the integer `0` and raises the `AttributeError` you're seeing.
I'd look more closely at what that response says.
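A minimal sketch of a fix (not tested against the full script): make every HTTP error path leave the function, so `json.load` never receives the integer:
try:
    stream = urllib2.urlopen(request)
except urllib2.HTTPError, err:
    if err.code == 404:
        print 'Error: Unknown user. Check --user arg'
    elif err.code == 401:
        print 'Error: Unauthorized. Check Twitter credentials'
    else:
        print 'Error: HTTP %d received' % err.code
    return  # bail out instead of falling through with stream == 0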
|
Python: Read values into empty array and then back into file
Question: I would like to read numbers from a txt file (highscore.txt) in python and
assign those values to an empty array (highscores).
Then I'd like to add another value into the array and then overwrite the file
with the new values in the array.
I've put together a program but it doesn't give me the desired output. Please
see what's wrong with it...
Read from file:
highscores = []
#Read values from file and put them into array
file = open('highscore.txt', 'r') #read from file
file.readline() #read heading line
for line in file:
highscores.append(file.readline())
file.close() #close file
Add value and overwrite file:
highscores.append(wins)
# Print sorted highscores print to file
file = open('highscore.txt', 'w') #write to file
file.write('Highscores (number of wins out of 10 games):\n') #write heading line
for line in highscores:
file.write(str(line))
file.close() #close file
It needs to work in such a way that I can (once this is working) sort the array with the added value before overwriting the file again...
I expect to read from a file:
Highscores (number of wins out of 10 games):
8
6
5
5
3
1
0
0
Read those values into the array, then add (let's say) 4 to the array, then overwrite the file with the new values.
In this case we can expect the output to be:
Highscores (number of wins out of 10):
8
6
5
5
3
1
0
0
4
Hope you can find out what's wrong there...
Edit: Thanks to EvenListe's answer I found a solution. Here's the relevant code that I've used to get my program working perfectly (it also sorts the array in descending order after the new value is added):
from __future__ import print_function
highscores = []
with open("highscore.txt", "r") as f:
f.readline() # Reads header
for line in f:
highscores.append(line.strip())
highscores.append(wins)
highscores = sorted(highscores, key=int, reverse=True)
# Print sorted highscores print to file
with open("highscore.txt", "w") as f:
for val in highscores:
print(val, file=f)
If you want to check what the lines in the file are, you can use this (I used it both before and after adding to the array; it really helps to find out what's wrong without having to constantly open the file):
print('Highscores (number of wins out of 10 games):')
for lines in highscores:
print(lines)
Answer: From what I can tell, one obvious issue with your code is your
for line in infile:
highscores.append(infile.readline())
which skips every other line. You should have
for line in infile:
highscores.append(line)
or easier:
highscores=infile.readlines()
highscores=highscores[1:] #Remove header
|
Trouble with downloading images using Scrapy
Question: I'm getting the following error when attempting to download images using a
spider with Scrapy.
File "C:\Python27\lib\site-packages\scrapy\http\request\__init__.py",
line 61, in _set_url
raise ValueError('Missing scheme in request url: %s' % self._url)
exceptions.ValueError: Missing scheme in request url: h
As best as I can understand it, it looks like I'm missing an "h" in a url
somewhere? But I can't for the life of me see where. Everything works fine if
I'm not trying to download images. But once I add the appropriate code to the
four files below, I can't get anything to work properly. Could anyone help me
make sense of this error?
**items.py**
import scrapy
class ProductItem(scrapy.Item):
model = scrapy.Field()
shortdesc = scrapy.Field()
desc = scrapy.Field()
series = scrapy.Field()
imageorig = scrapy.Field()
image_urls = scrapy.Field()
images = scrapy.Field()
**settings.py**
BOT_NAME = 'allenheath'
SPIDER_MODULES = ['allenheath.spiders']
NEWSPIDER_MODULE = 'allenheath.spiders'
ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 1}
IMAGES_STORE = 'c:/allenheath/images'
**pipelines.py**
class AllenheathPipeline(object):
def process_item(self, item, spider):
return item
import scrapy
from scrapy.contrib.pipeline.images import ImagesPipeline
from scrapy.exceptions import DropItem
class MyImagesPipeline(ImagesPipeline):
def get_media_requests(self, item, info):
for image_url in item['image_urls']:
yield scrapy.Request(image_url)
def item_completed(self, results, item, info):
image_paths = [x['path'] for ok, x in results if ok]
if not image_paths:
raise DropItem("Item contains no images")
item['image_paths'] = image_paths
return item
**products.py** (my spider)
import scrapy
from allenheath.items import ProductItem
from scrapy.selector import Selector
from scrapy.http import HtmlResponse
class productsSpider(scrapy.Spider):
name = "products"
allowed_domains = ["http://www.allen-heath.com/"]
start_urls = [
"http://www.allen-heath.com/ahproducts/ilive-80/",
"http://www.allen-heath.com/ahproducts/ilive-112/"
]
def parse(self, response):
for sel in response.xpath('/html'):
item = ProductItem()
item['model'] = sel.css('#prodsingleouter > div > div > h2::text').extract()
item['shortdesc'] = sel.css('#prodsingleouter > div > div > h3::text').extract()
item['desc'] = sel.css('#tab1 #productcontent').extract()
item['series'] = sel.css('#pagestrip > div > div > a:nth-child(3)::text').extract()
item['imageorig'] = sel.css('#prodsingleouter > div > div > h2::text').extract()
item['image_urls'] = sel.css('#tab1 #productcontent img').extract()[0]
item['image_urls'] = 'http://www.allen-heath.com' + item['image_urls']
yield item
Any help would be greatly appreciated.
Answer: The issue is here:
def get_media_requests(self, item, info):
for image_url in item['image_urls']:
yield scrapy.Request(image_url)
and here:
item['image_urls'] = sel.css('#tab1 #productcontent img').extract()[0]
You are extracting this field and taking the first element, which means that once you iterate over it in the pipeline, you are in fact iterating over the characters in the URL, which begins with `http` - explaining the error message you are seeing as soon as the first letter is processed:
Missing scheme in request url: h
Remove the `[0]` from the line. While you're at it, fetch the `src` of the
image, instead of the entire element:
item['image_urls'] = sel.css('#tab1 #productcontent img').xpath('./@src').extract()
After that, you should update the next line also, in case the image url is
relative, to convert it to absolute:
import urlparse # put this at the top of the script
item['image_urls'] = [urlparse.urljoin(response.url, url) for url in item['image_urls']]
But you don't need this last part if the image URL in `src` is actually
absolute, so just remove it.
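One more detail worth flagging (an observation of mine, not something the original answer states): if you register the custom `MyImagesPipeline` in `ITEM_PIPELINES`, its `item_completed` assigns `item['image_paths']`, a field `ProductItem` never declares, which would raise a `KeyError`. Declaring the field avoids that:
class ProductItem(scrapy.Item):
    # ... existing fields ...
    image_paths = scrapy.Field()  # needed because MyImagesPipeline.item_completed sets it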
|
How to improve the speed of my selection process, python
Question: **Edit: Due to errors in my code I updated it with my oldest, but working, code**
I get a list of speed recordings from a database, and I want to find the max
speed in that list. Sounds easy enough, but I got some requirements for any
max speed to count:
If the max speed is over a certain level, it has to have more than a certain
number of records to be recognized as maximum speed. The reason for this logic
is that I want the max speed under normal conditions, not just an error or one
time occurrence. I also have a constraint that a speed has to be over a
certain limit to be counted, for the same reason.
Here is the example on a speed array:
v = [8.0, 1.3, 0.7, 0.8, 0.9, 1.1, 14.9, 14.0, 14.1, 14.2, 14.3, 13.8, 13.9, 13.7, 13.6, 13.5, 13.4, 15.7, 15.8, 15.0, 15.3, 15.4, 15.5, 15.6, 15.2, 12.8, 12.7, 12.6, 8.7, 8.8, 8.6, 9.0, 8.5, 8.4, 8.3, 0.1, 0.0, 16.4, 16.5, 16.7, 16.8, 17.0, 17.1, 17.8, 17.7, 17.6, 17.4, 17.5, 17.3, 17.9, 18.2, 18.3, 18.1, 18.0, 18.4, 18.5, 18.6, 19.0, 19.1, 18.9, 19.2, 19.3, 19.9, 20.1, 19.8, 20.0, 19.7, 19.6, 19.5, 20.2, 20.3, 18.7, 18.8, 17.2, 16.9, 11.5, 11.2, 11.3, 11.4, 7.1, 12.9, 14.4, 13.1, 13.2, 12.5, 12.1, 12.2, 13.0, 0.2, 3.6, 7.4, 4.6, 4.5, 4.3, 4.0, 9.4, 9.6, 9.7, 5.8, 5.7, 7.3, 2.1, 0.4, 0.3, 16.1, 11.9, 12.0, 11.7, 11.8, 10.0, 10.1, 9.8, 15.1, 14.7, 14.8, 10.2, 10.3, 1.2, 9.9, 1.9, 3.4, 14.6, 0.6, 5.1, 5.2, 7.5, 19.4, 10.7, 10.8, 10.9, 0.5, 16.3, 16.2, 16.0, 16.6, 12.4, 11.0, 1.7, 1.6, 2.4, 11.6, 3.9, 3.8, 14.5, 11.1]
This is my code to find what I define as the true maximum speed:
from collections import Counter
while max(speeds)>30:
speeds.remove(max(speeds))
nwsp = []
for s in speeds:
nwsp.append(np.floor(s))
count = Counter(nwsp)
while speeds and max(speeds)>14 and count[np.floor(max(speeds))]<10:
speeds.remove(max(speeds))
while speeds and max(speeds)<5:
speeds.remove(max(speeds))
if speeds:
print max(speeds)
return max(speeds)
else:
return False
Result with v as shown above: 19.9
The reason that I make the nwsp list is that it doesn't matter to me if, e.g., 19.6 is only found 9 times - if any other number within the same integer, e.g. 19.7, is found 3 times as well, then 19.6 will be valid.
How can I rewrite/optimize this code so the selection process is quicker? I
already removed the max(speeds) and instead sorted the list and referenced the
largest element using speeds[-1].
Sorry for not adding any unit to my speeds.
Answer: Your code is just slow because you call `max` and `remove` over and over and
over again and each of those calls costs time proportional to the length of
the list. Any reasonable solution will be much faster.
If you know that `False` can't happen, then this suffices:
speeds = [8.0, 1.3, 0.7, 0.8, 0.9, 1.1, 14.9, 14.0, 14.1, 14.2, 14.3, 13.8, 13.9, 13.7, 13.6, 13.5, 13.4, 15.7, 15.8, 15.0, 15.3, 15.4, 15.5, 15.6, 15.2, 12.8, 12.7, 12.6, 8.7, 8.8, 8.6, 9.0, 8.5, 8.4, 8.3, 0.1, 0.0, 16.4, 16.5, 16.7, 16.8, 17.0, 17.1, 17.8, 17.7, 17.6, 17.4, 17.5, 17.3, 17.9, 18.2, 18.3, 18.1, 18.0, 18.4, 18.5, 18.6, 19.0, 19.1, 18.9, 19.2, 19.3, 19.9, 20.1, 19.8, 20.0, 19.7, 19.6, 19.5, 20.2, 20.3, 18.7, 18.8, 17.2, 16.9, 11.5, 11.2, 11.3, 11.4, 7.1, 12.9, 14.4, 13.1, 13.2, 12.5, 12.1, 12.2, 13.0, 0.2, 3.6, 7.4, 4.6, 4.5, 4.3, 4.0, 9.4, 9.6, 9.7, 5.8, 5.7, 7.3, 2.1, 0.4, 0.3, 16.1, 11.9, 12.0, 11.7, 11.8, 10.0, 10.1, 9.8, 15.1, 14.7, 14.8, 10.2, 10.3, 1.2, 9.9, 1.9, 3.4, 14.6, 0.6, 5.1, 5.2, 7.5, 19.4, 10.7, 10.8, 10.9, 0.5, 16.3, 16.2, 16.0, 16.6, 12.4, 11.0, 1.7, 1.6, 2.4, 11.6, 3.9, 3.8, 14.5, 11.1]
from collections import Counter
count = Counter(map(int, speeds))
print max(s for s in speeds
if 5 <= s <= 30 and (s <= 14 or count[int(s)] >= 10))
If the `False` case can happen, this would be one way:
speeds = [8.0, 1.3, 0.7, 0.8, 0.9, 1.1, 14.9, 14.0, 14.1, 14.2, 14.3, 13.8, 13.9, 13.7, 13.6, 13.5, 13.4, 15.7, 15.8, 15.0, 15.3, 15.4, 15.5, 15.6, 15.2, 12.8, 12.7, 12.6, 8.7, 8.8, 8.6, 9.0, 8.5, 8.4, 8.3, 0.1, 0.0, 16.4, 16.5, 16.7, 16.8, 17.0, 17.1, 17.8, 17.7, 17.6, 17.4, 17.5, 17.3, 17.9, 18.2, 18.3, 18.1, 18.0, 18.4, 18.5, 18.6, 19.0, 19.1, 18.9, 19.2, 19.3, 19.9, 20.1, 19.8, 20.0, 19.7, 19.6, 19.5, 20.2, 20.3, 18.7, 18.8, 17.2, 16.9, 11.5, 11.2, 11.3, 11.4, 7.1, 12.9, 14.4, 13.1, 13.2, 12.5, 12.1, 12.2, 13.0, 0.2, 3.6, 7.4, 4.6, 4.5, 4.3, 4.0, 9.4, 9.6, 9.7, 5.8, 5.7, 7.3, 2.1, 0.4, 0.3, 16.1, 11.9, 12.0, 11.7, 11.8, 10.0, 10.1, 9.8, 15.1, 14.7, 14.8, 10.2, 10.3, 1.2, 9.9, 1.9, 3.4, 14.6, 0.6, 5.1, 5.2, 7.5, 19.4, 10.7, 10.8, 10.9, 0.5, 16.3, 16.2, 16.0, 16.6, 12.4, 11.0, 1.7, 1.6, 2.4, 11.6, 3.9, 3.8, 14.5, 11.1]
from collections import Counter
count = Counter(map(int, speeds))
valids = [s for s in speeds
if 5 <= s <= 30 and (s <= 14 or count[int(s)] >= 10)]
print max(valids) if valids else False
Or sort and use `next`, which can take your `False` as default:
speeds = [8.0, 1.3, 0.7, 0.8, 0.9, 1.1, 14.9, 14.0, 14.1, 14.2, 14.3, 13.8, 13.9, 13.7, 13.6, 13.5, 13.4, 15.7, 15.8, 15.0, 15.3, 15.4, 15.5, 15.6, 15.2, 12.8, 12.7, 12.6, 8.7, 8.8, 8.6, 9.0, 8.5, 8.4, 8.3, 0.1, 0.0, 16.4, 16.5, 16.7, 16.8, 17.0, 17.1, 17.8, 17.7, 17.6, 17.4, 17.5, 17.3, 17.9, 18.2, 18.3, 18.1, 18.0, 18.4, 18.5, 18.6, 19.0, 19.1, 18.9, 19.2, 19.3, 19.9, 20.1, 19.8, 20.0, 19.7, 19.6, 19.5, 20.2, 20.3, 18.7, 18.8, 17.2, 16.9, 11.5, 11.2, 11.3, 11.4, 7.1, 12.9, 14.4, 13.1, 13.2, 12.5, 12.1, 12.2, 13.0, 0.2, 3.6, 7.4, 4.6, 4.5, 4.3, 4.0, 9.4, 9.6, 9.7, 5.8, 5.7, 7.3, 2.1, 0.4, 0.3, 16.1, 11.9, 12.0, 11.7, 11.8, 10.0, 10.1, 9.8, 15.1, 14.7, 14.8, 10.2, 10.3, 1.2, 9.9, 1.9, 3.4, 14.6, 0.6, 5.1, 5.2, 7.5, 19.4, 10.7, 10.8, 10.9, 0.5, 16.3, 16.2, 16.0, 16.6, 12.4, 11.0, 1.7, 1.6, 2.4, 11.6, 3.9, 3.8, 14.5, 11.1]
count = Counter(map(int, speeds))
print next((s for s in reversed(sorted(speeds))
if 5 <= s <= 30 and (s <= 14 or count[int(s)] >= 10)),
False)
Instead of `Counter`, you could also use `groupby`:
speeds = [8.0, 1.3, 0.7, 0.8, 0.9, 1.1, 14.9, 14.0, 14.1, 14.2, 14.3, 13.8, 13.9, 13.7, 13.6, 13.5, 13.4, 15.7, 15.8, 15.0, 15.3, 15.4, 15.5, 15.6, 15.2, 12.8, 12.7, 12.6, 8.7, 8.8, 8.6, 9.0, 8.5, 8.4, 8.3, 0.1, 0.0, 16.4, 16.5, 16.7, 16.8, 17.0, 17.1, 17.8, 17.7, 17.6, 17.4, 17.5, 17.3, 17.9, 18.2, 18.3, 18.1, 18.0, 18.4, 18.5, 18.6, 19.0, 19.1, 18.9, 19.2, 19.3, 19.9, 20.1, 19.8, 20.0, 19.7, 19.6, 19.5, 20.2, 20.3, 18.7, 18.8, 17.2, 16.9, 11.5, 11.2, 11.3, 11.4, 7.1, 12.9, 14.4, 13.1, 13.2, 12.5, 12.1, 12.2, 13.0, 0.2, 3.6, 7.4, 4.6, 4.5, 4.3, 4.0, 9.4, 9.6, 9.7, 5.8, 5.7, 7.3, 2.1, 0.4, 0.3, 16.1, 11.9, 12.0, 11.7, 11.8, 10.0, 10.1, 9.8, 15.1, 14.7, 14.8, 10.2, 10.3, 1.2, 9.9, 1.9, 3.4, 14.6, 0.6, 5.1, 5.2, 7.5, 19.4, 10.7, 10.8, 10.9, 0.5, 16.3, 16.2, 16.0, 16.6, 12.4, 11.0, 1.7, 1.6, 2.4, 11.6, 3.9, 3.8, 14.5, 11.1]
from itertools import *
groups = (list(group) for _, group in groupby(reversed(sorted(speeds)), int))
print next((s[0] for s in groups
if 5 <= s[0] <= 30 and (s[0] <= 14 or len(s) >= 10)),
False)
Just in case all of these look odd to you, here's one close to your original.
Just looking at the speeds from fastest to slowest and returning the first
that matches the requirements:
def f(speeds):
count = Counter(map(int, speeds))
for speed in reversed(sorted(speeds)):
if 5 <= speed <= 30 and (speed <= 14 or count[int(speed)] >= 10):
return speed
return False
Btw, your definition of _"the true maximum speed"_ seems rather odd to me. How
about just looking at a certain percentile? Maybe like this:
print sorted(speeds)[len(speeds) * 9 // 10]
|
Splitting time by the hour Python
Question: I have a dataframe df1 like this, where starttime and endtime are datetime
objects.
StartTime EndTime
9:08 9:10
9:10 9:35
9:35 9:55
9:55 10:10
10:10 10:20
If endtime.hour is not the same as starttime.hour, I would like to split times
like this
StartTime EndTime
9:08 9:10
9:10 9:55
**9:55 10:00**
**10:00 10:10**
10:10 10:20
Essentially insert a row into the existing dataframe df1. I have looked at a
ton of examples but haven't figured out how to do this. If my question isn't
clear please let me know.
Thanks
Answer: This does what you want ...
# load your data into a DataFrame
data="""StartTime EndTime
9:08 9:10
9:10 9:35
9:35 9:55
9:55 10:10
10:10 10:20
"""
from StringIO import StringIO # import from io for Python 3
df = pd.read_csv(StringIO(data), header=0, sep=' ', index_col=None)
# convert strings to Pandas Timestamps (we will ignore the date bit) ...
import datetime as dt
df.StartTime = [dt.datetime.strptime(x, '%H:%M') for x in df.StartTime]
df.EndTime = [dt.datetime.strptime(x, '%H:%M') for x in df.EndTime]
# assumption - all intervals are less than 60 minutes
# - ie. no multi-hour intervals
# add rows
dfa = df[df.StartTime.dt.hour != df.EndTime.dt.hour].copy()
dfa.EndTime = [dt.datetime.strptime(str(x), '%H') for x in dfa.EndTime.dt.hour]
# play with the start hour ...
df.StartTime = df.StartTime.where(df.StartTime.dt.hour == df.EndTime.dt.hour,
other = [dt.datetime.strptime(str(x), '%H') for x in df.EndTime.dt.hour])
# bring back together and sort
df = pd.concat([df, dfa], axis=0) #top/bottom
df = df.sort('StartTime')
# convert the Timestamps to times for easy reading
df.StartTime = [x.time() for x in df.StartTime]
df.EndTime = [x.time() for x in df.EndTime]
And yields
In [40]: df
Out[40]:
StartTime EndTime
0 09:08:00 09:10:00
1 09:10:00 09:35:00
2 09:35:00 09:55:00
3 09:55:00 10:00:00
3 10:00:00 10:10:00
4 10:10:00 10:20:00
|
Disable Cache/Buffer on Specific File (Linux)
Question: I am currently working in a Yocto Linux build and am trying to interface with
a hardware block on an FPGA. This block is imitating an SD card with a FAT16
file system on it; containing a single file (cam.raw). This file represents
the shared memory space between the FPGA and the linux system. As such, I want
to be able to write data from the linux system to this memory and get back any
changes the FPGA might make (Currently, the FPGA simply takes part of the data
from the memory space and adds 6 to the LSB of a 32-bit word, like I write
0x40302010 and should get back 0x40302016 if I read back the data). However,
due to some caching somewhere, while I can write the data to the FPGA, I
cannot immediately get back the result.
I am currently doing something like this (using python because its easy):
% mount /dev/mmcblk1 /memstick
% python
>> import mmap
>> import os
>> f = os.open("/memstick/cam.raw", os.O_RDWR | os.O_DIRECT)
>> m = mmap.mmap(f, 0)
>> for i in xrange(1024):
... m[i] = chr(i % 256)
...
>> m.flush() # Make sure data goes from linux to FPGA
>> hex(ord(m[0])) # Should be 0x6
'0x0'
I can confirm with dd that the data is changed (though I frequently run into
buffering issues with that too) and using the tools for the FPGA
(SignalTap/ChipScope) that I am indeed getting the correct answer (i.e., the first 32-bit word in this case is 0x03020106). However, something, whether it's Python or Linux or both, is buffering the file and not reading from the "SD card" (FPGA) again, instead serving the file data from memory. I need to shut this off completely so all reads result in reads from the FPGA, but I'm not sure where the buffering is taking place or how to do that.
Any insight would be appreciated! (Note, I can use mmap.flush() to take any
data I write from python to dump it to the FPGA, but I need like a reverse
flush or something to have it reread the file data into the mmap!)
Update:
As suggested in the comments, the mmap approach might not be the best one to
implement what I need. However, I have now tried it in both Python and C using basic I/O functions (os.read/write in Python, read/write in C) with the O_DIRECT flag. For most of these operations, I end up getting errno 22. Still
looking into this....
Answer: After doing some digging, I found out what I was doing wrong with the O_DIRECT flag. In my C and Python versions, I wasn't using memalign to create the buffer and wasn't doing block-sized reads/writes. This post has a good explanation:
[How can I read a file with read() and O_DIRECT in C++ on
Linux?](http://stackoverflow.com/questions/6001272/how-can-i-read-a-file-with-
read-and-o-direct-in-c-on-linux)
So, in order to achieve what I am doing, this C program works as a basic
example:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* read, write, lseek */
#include <malloc.h>   /* memalign */
#include <fcntl.h>
#include <errno.h>
#define BLKSIZE 512
int main() {
int fd;
int x;
char* buf;
fd = open("/home/root/sd/fpga/cam.raw", O_RDWR | O_SYNC | O_DIRECT);
if (fd < 0) {  /* open() returns -1 on failure, not 0 */
printf("Oh noes, no file!\n");
return -1;
}
printf("%d %d\n", fd, errno);
buf = (char*) memalign(BLKSIZE, BLKSIZE*2);
if (!buf) {
printf("Oh noes, no buf!\n");
return -1;
}
x = read(fd, buf, BLKSIZE);
printf("%d %d %x %x %x %x\n", x, errno, buf[0], buf[1], buf[2], buf[3]);
lseek(fd, 0, 0);
buf[0] = '1';
buf[1] = '2';
buf[2] = '3';
buf[3] = '4';
x = write(fd, buf, BLKSIZE);
printf("%d %d\n", fd, errno);
lseek(fd, 0, 0);
x = read(fd, buf, BLKSIZE);
printf("%d %d %x %x %x %x\n", x,errno, buf[0], buf[1], buf[2], buf[3]);
return 0;
}
This works for my purposes; I didn't look into how to do proper memory alignment for Python's os.read/os.write functions in a similar way.
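For reference, here is a hedged sketch of one way to get an aligned buffer from Python without ctypes: an anonymous mmap is page-aligned (and the page size is a multiple of 512), and io.FileIO's readinto()/write() hand that buffer straight to the underlying read/write calls. Untested on the Yocto board; the path and block size are taken from the C example above:
import io
import mmap
import os

BLKSIZE = 512

fd = os.open("/home/root/sd/fpga/cam.raw", os.O_RDWR | os.O_SYNC | os.O_DIRECT)
f = io.FileIO(fd, "r+", closefd=True)
buf = mmap.mmap(-1, BLKSIZE)  # anonymous mapping -> page-aligned memory

f.readinto(buf)               # read one aligned block from the device
print repr(buf[:4])
buf[0:4] = "1234"             # modify the first word
f.seek(0)
f.write(buf)                  # write the whole aligned block back
f.seek(0)
f.readinto(buf)               # re-read to see what the FPGA did
print repr(buf[:4])
f.close()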
|
Calculating the averages for each KEY in a Pairwise (K,V) RDD in Spark with Python
Question: I want to share this particular Apache Spark with Python solution because
documentation for it is quite poor.
I wanted to calculate the average value of K/V pairs (stored in a Pairwise
RDD), by KEY. Here is what the sample data looks like:
>>> rdd1.take(10) # Show a small sample.
[(u'2013-10-09', 7.60117302052786),
(u'2013-10-10', 9.322709163346612),
(u'2013-10-10', 28.264462809917358),
(u'2013-10-07', 9.664429530201343),
(u'2013-10-07', 12.461538461538463),
(u'2013-10-09', 20.76923076923077),
(u'2013-10-08', 11.842105263157894),
(u'2013-10-13', 32.32514177693762),
(u'2013-10-13', 26.249999999999996),
(u'2013-10-13', 10.693069306930692)]
Now the following code sequence is a **less than optimal** way to do it, but
it does work. It is what I was doing before I figured out a better solution.
It's not terrible but -- as you'll see in the answer section -- there is a
more concise, efficient way.
>>> import operator
>>> countsByKey = sc.broadcast(rdd1.countByKey()) # SAMPLE OUTPUT of countsByKey.value: {u'2013-09-09': 215, u'2013-09-08': 69, ... snip ...}
>>> rdd1 = rdd1.reduceByKey(operator.add) # Calculate the numerators (i.e. the SUMs).
>>> rdd1 = rdd1.map(lambda x: (x[0], x[1]/countsByKey.value[x[0]])) # Divide each SUM by it's denominator (i.e. COUNT)
>>> print(rdd1.collect())
[(u'2013-10-09', 11.235365503035176),
(u'2013-10-07', 23.39500642456595),
... snip ...
]
Answer: Now a much better way to do this is to use the rdd.aggregateByKey() method.
Because that method is so poorly documented in the Apache Spark with Python
documentation (which is why I'm writing this), until recently I had been using
the above code sequence. But again, it's less efficient, **so don't do it that
way unless you need to**.
Here's how to do the same using the rdd.aggregateByKey() method
(**recommended**) ...
By KEY, simultaneously calculate the SUM (numerator for the average we want to
compute), and COUNT (denominator for the average we want to compute).
>>> rdd1 = rdd1.aggregateByKey((0,0), lambda a,b: (a[0] + b, a[1] + 1),
lambda a,b: (a[0] + b[0], a[1] + b[1]))
Where the following is true about the meaning of each 'a' and 'b' pair above
(just so you can visualize what's happening):
First lambda expression for Within-Partition Reduction Step::
a: is a TUPLE that holds: (runningSum, runningCount).
b: is a SCALAR that holds the next Value
Second lambda expression for Cross-Partition Reduction Step::
a: is a TUPLE that holds: (runningSum, runningCount).
b: is a TUPLE that holds: (nextPartitionsSum, nextPartitionsCount).
Finally, calculate the average for each KEY, and collect results.
>>> finalResult = rdd1.mapValues(lambda v: v[0]/v[1]).collect()
>>> print(finalResult)
[(u'2013-09-09', 11.235365503035176),
(u'2013-09-01', 23.39500642456595),
(u'2013-09-03', 13.53240060820617),
(u'2013-09-05', 13.141148418977687),
... snip ...
]
I hope this aggregateByKey() illustration will help others.
|
re.sub in python : verbose mode does not work with replacement pattern?
Question: Is there any way to get around this limitation of re.sub? It is not fully
functional for verbose mode (with back reference here) in the replace pattern;
it does not eliminate whitespace or comments (yet it does interpret
backreferences properly).
import re
ft1=r"""(?P<test>[0-9]+)"""
ft2=r"""\g<test>and then: \g<test> #this remains"""
print re.sub(ft1,ft2,"front 1234 back",flags=re.VERBOSE) #Does not work
#result: front 1234and then: 1234 #this remains back
re.VERBOSE does not apply to the replacement pattern... Is there a workaround? (Simpler than working with groups after an re.match.)
Answer: Here is the only way I have found to "compile" an re replace expression for
sub. There are a few extra constraints: both spaces and newlines have to be
written like spaces are written for the re match expression (in square
brackets: [ ] and [\n\n\n]) and the whole replace expression should have a
verbose newline at the beginning.
An example: this searches a string and detects a word repeated after /ins/ and /del/, then replaces those occurrences with a single occurrence of the word in front of the `<ins>` element.
Both the match and the replace expressions are complex, which is why I want a
verbose version of the replace expression.
===========================
import re
test = "<p>Le petit <ins>homme Γ </ins> <del>homme en</del> ressorts</p>"
find=r"""
<ins>
(?P<front>[^<]+) #there is something added that matches
(?P<delim1>[ .!,;:]+) #get delimiter
(?P<back1>[^<]*?)
</ins>
[ ]
<del>
(?P=front)
(?P<delim2>[ .!,;:]+)
(?P<back2>[^<]*?)
</del>
"""
replace = r"""
<<<<<\g<front>>>>> #Pop out in front matching thing
<ins>
\g<delim1>
\g<back1>
</ins>
[ ]
<del>
\g<delim2> #put delimiters and backend back
\g<back2>
</del>
"""
flatReplace = r"""<<<<<\g<front>>>>><ins>\g<delim1>\g<back1></ins> <del>\g<delim2>\g<back2></del>"""
def compileRepl(inString):
outString=inString
#get space at front of line
outString=re.sub(r"\n\s+","\n",outString)
#get space at end of line
outString=re.sub(r"\s+\n","",outString)
#get rid of comments
outString=re.sub(r"\s*#[^\n]*\n","\n",outString)
#preserve space in brackets, and eliminate brackets
outString=re.sub(r"(?<!\[)\[(\s+)\](?!\[)",r"\1",outString)
# get rid of newlines not in brackets
outString=re.sub(r"(?<!\[)(\n)+(?!\])","",outString)
#get rid of brackets around newlines
outString=re.sub(r"\[((\\n)+)\]",r"\1",outString)
#trim brackets
outString=re.sub(r"\[\[(.*?)\]\]","[\\1]",outString)
return outString
assert(flatReplace == compileRepl(replace))
print test
print compileRepl(replace)
print re.sub(find,compileRepl(replace),test, flags=re.VERBOSE)
#<p>Le petit <ins>homme Γ </ins> <del>homme en</del> ressorts</p>
#<<<<<\g<front>>>>><ins>\g<delim1>\g<back1></ins> <del>\g<delim2>\g<back2></del>
#<p>Le petit <<<<<homme>>>><ins> Γ </ins> <del> en</del> ressorts</p>
|
mollview command not returning plot image
Question: I'm new to healpy and I'm trying to plot the following in iPython27:
import numpy as np
import healpy as hp
NSIDE = 32
m=np.arange(hp.nside2npix(NSIDE))
hp.mollview(m, title="Mollview image RING")
This does not return an image as I was expecting. What could be the problem?
Answer: You need to set up IPython to display matplotlib plots.
If you are using IPython console
%matplotlib
If IPython notebook
%matplotlib inline
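If you are running this as a plain Python script rather than in IPython, a minimal
sketch (assuming the same NSIDE and map as in the question) is to show the figure
explicitly, since healpy draws with matplotlib:
    import numpy as np
    import healpy as hp
    import matplotlib.pyplot as plt

    NSIDE = 32
    m = np.arange(hp.nside2npix(NSIDE))
    hp.mollview(m, title="Mollview image RING")
    plt.show()  # opens the matplotlib window with the Mollweide plot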
|
Python crawler is not crawling all sites
Question: I'm trying to get links from websites through this code
import requests
from bs4 import BeautifulSoup
def get_links(max_pages):
page = 1
while page <= max_pages:
address = 'http://hamariweb.com/mobiles/nokia_mobile-phones1.aspx?Page=' + str(page)
source_code = requests.get(address)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
for link in soup.findAll('a', {'class': 'TextClass8pt'}):
href = link.get("href")
print(href)
page += 1
get_links(3)
and it's giving expected output. But when I tried this
address = 'http://propakistani.pk/category/cellular/page/' + str(page)
for link in soup.findAll('a', {'class': 'aa_art_hdng'}):
It's showing this error
> TypeError: getresponse() got an unexpected keyword argument 'buffering'
I also tried another site, but that time it showed neither any output nor any
error. Why does it show proper output only with certain sites? Is there any
problem with my code? Please help me. Thanks.
Answer: There is **no tag match** for the condition `soup.findAll('a', {'class':
'TextClass8pt'})` on that site.
Try the following
**Demo**:
import requests
from bs4 import BeautifulSoup
def get_links(max_pages):
page = 1
while page <= max_pages:
address = 'http://propakistani.pk/category/cellular/page/' + str(page)
source_code = requests.get(address)
plain_text = source_code.text
soup = BeautifulSoup(plain_text)
for link in soup.findAll('a'):
href = link.get("href")
print(href)
page += 1
get_links(3)
* * *
**Or**
There are `a` tag with class value `aa_loop_h2a` e.g. `<a class="aa_loop_h2a"
href="http://propakistani.pk/2015/04/20/mobile-data-usage-in-pakistan-
grows-600-during-2014/" title="Mobile Data Usage in Pakistan Grows 600% During
2014">Mobile Data Usage in Pakistan Grows 600% During 2014</a>`
So try with `soup.findAll('a', {'class': 'aa_loop_h2a'})` condition.
|
extracting entities of tweets using opencalais
Question: I have to extract entities from tweets using tool opencalais. My code is:
# this code is based on: http://www.flagonwiththedragon.com/2011/06/08/dead-simple-python-calls-to-open-calais-api/
import urllib, urllib2
##### set API key and REST URL values.
x-ag-access-token1 = 'O7tTcXv6TFHA4Z5EKjjxPcrcdWndxl' # your Calais API key.
calaisREST_URL = 'https://api.thomsonreuters.com/permid/Calais' # this is the older REST interface.
# info on the newer one: http://www.opencalais.com/documentation/calais-web-service-api/api-invocation/rest
# alert user and shut down if the API key variable is still null.
if x-ag-access-token1 == '':
print "You need to set your Calais API key in the 'x-ag-access-token' variable."
import sys
sys.exit()
##### set the text to ask Calais to analyze.
# text from: http://www.usatoday.com/sports/football/nfl/story/2012-03-22/Tim-Tebow-Jets-hoping-to-avoid-controversy/53717542/1
sampleText = '''
Like millions of football fans, Tim Tebow caught a few training camp glimpses of the New York Jets during the summer of 2010 on HBO's Hard Knocks.
'''
##### set XML parameters for Calais.
# see "Input Parameters" at: http://www.opencalais.com/documentation/calais-web-service-api/forming-api-calls/input-parameters
calaisParams = '''
<c:params xmlns:c="http://s.opencalais.com/1/pred/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
<c:processingDirectives c:contentType="text/txt"
c:enableMetadataType="GenericRelations,SocialTags"
c:outputFormat="Text/Simple"/>
<c:userDirectives/>
<c:externalMetadata/>
</c:params>
'''
#########################
##### send data to Calais API.
# see: http://www.opencalais.com/APICalls
dataToSend = urllib.urlencode({
'x-ag-access-token': x-ag-access-token1,
'content': sampleText,
'paramsXML': calaisParams
})
##### get API results and print them.
results = urllib2.urlopen(calaisREST_URL, dataToSend).read()
print results
I am getting the following error:
> x-ag-access-token1 = 'O7tTcXv6TFHA4Z5EKjjxPcrcdWndxl' # your Calais API key.
> SyntaxError: can't assign to operator
Note that OpenCalais has also changed to a new API.
Answer: Don't use '-' in a variable assignment; use '_' instead. Otherwise Python
interprets the '-' as a minus sign and throws the 'SyntaxError: can't assign to
operator'. Also make sure you are using the latest API from OpenCalais. Try
this simple test:
>>> my-var = 'hello world'
File "<stdin>", line 1
SyntaxError: can't assign to operator
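As a hedged sketch of the fix (reusing the question's variables and endpoint; only the
identifier changes), keep the hyphenated name as a dictionary key, which is just a
string, and use underscores for the Python variable:
    # underscores are valid in Python identifiers; hyphens are not
    access_token = 'O7tTcXv6TFHA4Z5EKjjxPcrcdWndxl'

    dataToSend = urllib.urlencode({
        'x-ag-access-token': access_token,  # hyphens are fine here: this is a string key
        'content': sampleText,
        'paramsXML': calaisParams
    })
    results = urllib2.urlopen(calaisREST_URL, dataToSend).read()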
|
List comprehension assignment/comparison fails after 256
Question: I tried to find the performance difference between slice assignment and
regular assignment for lists. Here is the code:
import time
N = 1000
a = list(range(N))
b = list(range(N))
time1 = time.time()
for i in range(N):
a = [x for x in a if x is not i]
time2 = time.time()
for i in range(N):
b[:] = [x for x in b if x is not i]
time3 = time.time()
print a
print b
print time2 - time1
print time3 - time2
My expectation is that, for each list `a` and `b`, this will remove one
element at a time, so that `print a` and `print b` both print empty lists.
Instead, they seem to always print the starting lists, but with the first
`256` elements missing.
They both print:
[257, 258, 259 ... N-1]
What is happening?
I'm using Python 2.7.6.
Answer: The problem is that you're using `is` instead of `==`.
The former checks for object _identity_ , not equality. There's no reason to
believe that evaluating, say, `300+1` twice will give you the same `int`
object, just that they'll both give you `int` objects whose value is `301`.
This happens to "work" for numbers up to 256 because your particular Python
implementation* happens to intern integers up to 256. At startup, it creates a
singleton object for the number `1`, a singleton object for `2`, and so on.
Any time an expression evaluates to the number `1`, it gives you that object,
instead of a new one.**
Needless to say, you should not rely on that optimization.
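A quick interactive check of this behavior (CPython-specific; other implementations may
differ):
    >>> x = 256
    >>> y = 255 + 1
    >>> x is y          # 256 is in the small-int cache, so both names point at one object
    True
    >>> x = 257
    >>> y = 256 + 1
    >>> x is y          # 257 is not cached; these are two distinct int objects
    False
(Entered as separate statements at the prompt; inside a single compiled script the
compiler may happen to reuse one constant object, which is yet another reason not to
rely on `is` for numbers.)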
* * *
* IIRC, every version of CPython from the 1.x days to 3.5 defaults to this behavior for all integers from -5 to 256, but you can change those limits, or turn off the feature, at build time, and a different implementation might do something different.
** If you're wondering how this works in CPython, at the C API level,
[`PyLong_FromLong`](https://docs.python.org/3/c-api/long.html#c.PyLong_FromLong)
does this by looking up numbers from -5 to 256 in an array of singleton
values. You can see the 3.4 version of the code, for example,
[here](https://hg.python.org/cpython/file/3.4/Objects/longobject.c#l230); the
macro `CHECK_SMALL_INT` and the actual function `get_small_int` that it calls,
and the static array that function uses, are all in the same file, up near
the top.
|
How to convert timezone to country code in Python?
Question: I used this
from pytz import country_timezones
But It doesn't include below timezones
> Africa/Asmera, Africa/Timbuktu, America/Argentina/ComodRivadavia,
> America/Atka, America/Buenos_Aires, America/Catamarca,
> America/Coral_Harbour, America/Cordoba, America/Ensenada,
> America/Fort_Wayne, America/Indianapolis, America/Jujuy, America/Knox_IN,
> America/Louisville, America/Mendoza, America/Montreal, America/Porto_Acre,
> America/Rosario, America/Shiprock, America/Virgin, Antarctica/South_Pole,
> Asia/Ashkhabad, Asia/Calcutta, Asia/Chongqing, Asia/Chungking, Asia/Dacca,
> Asia/Harbin, Asia/Istanbul, Asia/Kashgar, Asia/Katmandu, Asia/Macao,
> Asia/Saigon, Asia/Tel_Aviv, Asia/Thimbu, Asia/Ujung_Pandang,
> Asia/Ulan_Bator, Atlantic/Faeroe, Atlantic/Jan_Mayen, Australia/ACT,
> Australia/Canberra, Australia/LHI, Australia/NSW, Australia/North,
> Australia/Queensland, Australia/South, Australia/Tasmania,
> Australia/Victoria, Australia/West, Australia/Yancowinna, Brazil/Acre,
> Brazil/DeNoronha, Brazil/East, Brazil/West, CET, CST6CDT, Canada/Atlantic,
> Canada/Central, Canada/East-Saskatchewan, Canada/Eastern, Canada/Mountain,
> Canada/Newfoundland, Canada/Pacific, Canada/Saskatchewan, Canada/Yukon,
> Chile/Continental, Chile/EasterIsland, Cuba, EET, EST, EST5EDT, Egypt, Eire,
> Europe/Belfast, Europe/Nicosia, Europe/Tiraspol, GB, GB-Eire, Greenwich,
> HST, Hongkong, Iceland, Iran, Israel, Jamaica, Japan, Kwajalein, Libya, MET,
> MST, MST7MDT, Mexico/BajaNorte, Mexico/BajaSur, Mexico/General, NZ, NZ-CHAT,
> Navajo, PRC, PST8PDT, Pacific/Ponape, Pacific/Samoa, Pacific/Truk,
> Pacific/Yap, Poland, Portugal, ROC, ROK, Singapore, Turkey, UCT, US/Alaska,
> US/Aleutian, US/Arizona, US/Central, US/East-Indiana, US/Eastern, US/Hawaii,
> US/Indiana-Starke, US/Michigan, US/Mountain, US/Pacific, US/Samoa, UTC,
> Universal, W-SU, WET, Zulu
How can I convert these timezones to country code?
Answer: You can't do what you want. Or, you can, but you'll get the results you're
getting, not the results you want. Briefly, if you ask for "the country that
uses Zulu", and no country uses Zulu, you won't be able to find anything. In
more detail…
* * *
As the docs on [Country Information](http://pytz.sourceforge.net/#country-
information) say:
> A mechanism is provided to access the timezones commonly in use for a
> particular country, looked up using the ISO 3166 country code.
* * *
However, "deprecated" zones like `America/Buenos_Aires` and "historical" zones
like `US/Pacific` aren't in use in any particular country. Many of them _do_
happen to be aliases for timezones that _are_ in use in some country, like
`America/Argentina/Buenos_Aires` and `America/Los_Angeles`, respectively, but
that doesn't do you any good, because `pytz` doesn't expose that information.
You could file an enhancement request against `pytz` to add that in a future
version, if you think it's important.
* * *
At any rate, here is how you can identify the countries that use a given
timezone:
{country for country, timezones in country_timezones.items()
if timezone in timezones}
* * *
If you need to do lots of these lookups, you can of course build your own dict
to make it faster and simpler:
timezone_countries = {}
for country, timezones in country_timezones.items():
for timezone in timezones:
timezone_countries.setdefault(timezone, set()).add(country)
And now it's just:
timezone_countries[timezone]
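For example (an illustrative lookup, not from the original answer; 'Europe/Paris' is a
zone that does map to a country, unlike the deprecated aliases listed in the question):
    print(timezone_countries.get('Europe/Paris', set()))  # set containing the ISO code 'FR'
    print(timezone_countries.get('Zulu', set()))          # empty set: no country is listed for this alias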
* * *
But either way, you may get an empty set, or a set of 3 countries, instead of
1. If the database actually says that there are 0 or 3 countries that use that
timezone, that's what you're going to get.
|
Reduce list of lists if entries match
Question: I have a list in python which looks like
[['boy','121','is a male child'],['boy','121','is male'],['boy','121','is a child'],['girl','122','is a female child'],['girl','122','is a child']]
I want to reduce the list based on the first 2 entries in each list, to get
[['boy','121',is a male child, is male, is a child'],['girl','122','is a female child','is a child']]
is there a way to do this efficiently without creating a dummy list?
Answer: A more Pythonic way to do this is with a dictionary:
>>> li=[['boy','121','is a male child'],['boy','121','is male'],['boy','121','is a child'],['girl','122','is a female child'],['girl','122','is a child']]
>>>
>>> d={}
>>>
>>> for i,j,k in li:
... d.setdefault((i,j),[]).append(k)
...
>>> d
{('boy', '121'): ['is a male child', 'is male', 'is a child'], ('girl', '122'): ['is a female child', 'is a child']}
> [**setdefault(key[,
> default])**](https://docs.python.org/2/library/stdtypes.html#dict.setdefault)
>
> If key is in the dictionary, return its value. If not, insert key with a
> value of default and return default. default defaults to None.
And if you want all the elements within a single container, you can loop over the
items, convert each value to a `tuple`, and concatenate it with the key:
>>> [i+tuple(j) for i,j in d.items()]
[('boy', '121', 'is a male child', 'is male', 'is a child'), ('girl', '122', 'is a female child', 'is a child')]
As @jonrsharpe says, a more elegant way is to use `collections.defaultdict`:
>>> from collections import defaultdict
>>>
>>> d=defaultdict(list)
>>> for i,j,k in li:
... d[i,j].append(k)
...
>>> d
defaultdict(<type 'list'>, {('boy', '121'): ['is a male child', 'is male', 'is a child'], ('girl', '122'): ['is a female child', 'is a child']})
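If you want the flattened list-of-lists shape shown in the question, a small follow-up
step (it works with either the plain dict or the defaultdict) is:
    >>> [list(k) + v for k, v in d.items()]   # order of the two entries is arbitrary in Python 2
    [['boy', '121', 'is a male child', 'is male', 'is a child'], ['girl', '122', 'is a female child', 'is a child']]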
|
How to apply a dict in python to a string as opposed to a single letter
Question: I am trying to output the alphabetical values of a user entered string, I have
created a dict and this process works, but only with one letter.
* If I try entering more than one letter, it returns a `KeyError: (string I entered)`
* If I try creating a list of the string so it becomes `['e', 'x', 'a', 'm', 'p', 'l', 'e']`, I get a `TypeError: unhashable type: 'list'`
I cannot use the `chr` and `ord` functions (I know how to but they aren't
applicable in this situation) and I have tried using the `map` function once
I've turned it to a list but only got strange results.
I've also tried turning the `list` into a `tuple` but that produces the same
error.
Here is my code:
import string
step = 1
values = dict()
for index, letter in enumerate(string.ascii_lowercase):
values[letter] = index + 1
keyw=input("Enter your keyword for encryption")
keylist=list(keyw)
print(values[keylist])
Alt version without the list:
import string
step=1
values=dict()
for index, letter in enumerate(string.ascii_lowercase):
values[letter] = index + 1
keyw=input("Enter your keyword for encryption")
print(values[keyw])
Answer: You need to loop through all the letters and map each one individually:
mapped = [values[letter] for letter in keyw]
print(mapped)
This uses a list comprehension to build the list of integers:
>>> [values[letter] for letter in 'example']
[5, 24, 1, 13, 16, 12, 5]
The `map()` function would do the same thing, essentially, but returns an
_iterator_ ; you need to loop over that object to see the results:
>>> for result in map(values.get, 'example'):
... print(result)
5
24
1
13
16
12
5
Note that you can build your `values` dictionary in one line; `enumerate()`
takes a second argument, the start value (which defaults to 0); using a dict
comprehension to reverse the value-key tuple would give you:
values = {letter: index for index, letter in enumerate(string.ascii_lowercase, 1)}
|
How to use top-level arguments with subparsers in argparse
Question: In Python's argparse, how do you implement top-level arguments while still
using commands implemented as subparsers?
I'm trying to implement a `--version` argument to show the program's version
number, but argparse is giving me `error: too few arguments` because I'm not
specifying a sub-command for one of the subparsers.
My code:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
'-v', '--version',
help='Show version.',
action='store_true',
default=False
)
subparsers = parser.add_subparsers(
dest="command",
)
list_parser = subparsers.add_parser('list')
parser.parse_args(['--version'])
the output:
usage: myscript.py [-h] [-v] {list} ...
myscript.py: error: too few arguments
Answer: If you only need version to work, you can do this:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
'-v', '--version',
action='version',
version='%(prog)s 1.0',
)
The subparsers no longer get in the way; the special `version` action is processed
and exits the script before the parser looks for subcommands.
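For completeness, a minimal sketch (using the question's `list` subcommand) showing
that the special action short-circuits before any subcommand handling:
    import argparse

    parser = argparse.ArgumentParser(prog='myscript.py')
    parser.add_argument('-v', '--version', action='version', version='%(prog)s 1.0')
    subparsers = parser.add_subparsers(dest='command')
    subparsers.add_parser('list')

    # Prints "myscript.py 1.0" and exits; the missing subcommand never matters.
    parser.parse_args(['--version'])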
|
I cannot click the Admin button nested inside the iFrame
Question: I am automating our Web application using Python with Selenium Webdriver. I
log into the application and I want to click the Administration button. When i
run my code it cannot find the Administration button by my Xpath. I have tried
a few different ways.
If i enter //div[7]/div/div in selenium IDE and click Find it highlights the
Administration button. I do not know why it won't find it when i run the code.
I would prefer to use CSS as that is faster than Xpath. I need some help
please.
I get the following error:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: {"method":"xpath","selector":"html/body/div[2]/div[2]/div/div[2]/div/div[2]/div/div[7]/div/div"}
I inspect the HTML element. The full HTML is as follows:
<html style="overflow: hidden;">
<head>
<body style="margin: 0px;">
<html style="overflow: hidden;">
<head>
<body style="margin: 0px;">
<iframe id="__gwt_historyFrame" style="position: absolute; width: 0; height: 0; border: 0;" tabindex="-1" src="javascript:''">
<html>
</iframe>
<noscript> <div style="width: 22em; position: absolute; left: 50%; margin-left: -11em; color: red; background-color: white; border: 1px solid red; padding: 4px; font-family: sans-serif;"> Your web browser must have JavaScript enabled in order for this application to display correctly.</div> </noscript>
<script src="spinner.js" type="text/javascript">
<script type="text/javascript">
<script src="ClearCore/ClearCore.nocache.js" type="text/javascript">
<script defer="defer">
<iframe id="ClearCore" src="javascript:''" style="position: absolute; width: 0px; height: 0px; border: medium none;" tabindex="-1">
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<script>
<script type="text/javascript">
<script type="text/javascript">
</head>
<body>
</html>
</iframe>
<div style="position: absolute; z-index: -32767; top: -20cm; width: 10cm; height: 10cm; visibility: hidden;" aria-hidden="true"> </div>
<div style="position: absolute; left: 0px; top: 0px; right: 0px; bottom: 0px;">
<div style="position: absolute; z-index: -32767; top: -20ex; width: 10em; height: 10ex; visibility: hidden;" aria-hidden="true"> </div>
<div style="position: absolute; overflow: hidden; left: 0px; top: 0px; right: 0px; bottom: 0px;">
<div style="position: absolute; left: 0px; top: 0px; right: 0px; bottom: 0px;">
<div style="position: absolute; z-index: -32767; top: -20ex; width: 10em; height: 10ex; visibility: hidden;" aria-hidden="true"> </div>
<div style="position: absolute; overflow: hidden; left: 1px; top: 1px; right: 1px; bottom: 1px;">
<div class="gwt-TabLayoutPanel" style="position: absolute; left: 0px; top: 0px; right: 0px; bottom: 0px;">
<div style="position: absolute; z-index: -32767; top: -20ex; width: 10em; height: 10ex; visibility: hidden;" aria-hidden="true"> </div>
<div style="position: absolute; overflow: hidden; left: 0px; top: 0px; right: 0px; height: 30px;">
<div class="gwt-TabLayoutPanelTabs" style="position: absolute; left: 0px; right: 0px; bottom: 0px; width: 16384px;">
<div class="gwt-TabLayoutPanelTab GEGQEWXCK gwt-TabLayoutPanelTab-selected" style="background-color: rgb(254, 255, 238);">
<div class="gwt-TabLayoutPanelTab GEGQEWXCK" style="background-color: rgb(254, 255, 238);">
<div class="gwt-TabLayoutPanelTab GEGQEWXCK" style="background-color: rgb(254, 255, 238);">
<div class="gwt-TabLayoutPanelTab GEGQEWXCK" style="background-color: rgb(254, 255, 238);">
<div class="gwt-TabLayoutPanelTab GEGQEWXCK" style="background-color: rgb(254, 255, 238);">
<div class="gwt-TabLayoutPanelTab GEGQEWXCK" style="background-color: rgb(254, 255, 238);">
<div class="gwt-TabLayoutPanelTab GEGQEWXCK" style="background-color: rgb(254, 255, 238);">
<div class="gwt-TabLayoutPanelTabInner">
<div class="gwt-HTML">Administration</div>
</div>
</div>
</div>
</div>
<div style="position: absolute; overflow: hidden; left: 0px; top: 30px; right: 0px; bottom: 0px;">
</div>
</div>
<div style="position: absolute; overflow: hidden; top: 1px; right: 1px; width: 30px; height: 25px;">
<div style="position: absolute; overflow: hidden; left: 0px; top: -25px; right: 0px; height: 25px;">
</div>
</div>
</div>
<div style="display: none;" aria-hidden="true"></div>
</body>
</html>
My code is as follows:
element.py
from selenium.webdriver.support.ui import WebDriverWait
class BasePageElement(object):
def __set__(self, obj, value):
driver = obj.driver
WebDriverWait(driver, 100).until(
lambda driver: driver.find_element_by_name(self.locator))
driver.find_element_by_name(self.locator).send_keys(value)
def __get__(self, obj, owner):
driver = obj.driver
WebDriverWait(driver, 100).until(
lambda driver: driver.find_element_by_name(self.locator))
element = driver.find_element_by_name(self.locator)
return element.get_attribute("value")
locators.py
from selenium.webdriver.common.by import By
class MainPageLocators(object):
Submit_button = (By.ID, 'submit')
usernameTxtBox = (By.ID, 'unid')
passwordTxtBox = (By.ID, 'pwid')
submitButton = (By.ID, 'button')
AdministrationButton = (By.CSS_SELECTOR, 'div.gwt-HTML.firepath-matching-node')
AdministrationButtonXpath = (By.XPATH, '//html/body/div[2]/div[2]/div/div[2]/div/div[2]/div/div[7]/div/div')
AdministrationButtonCSS = (By.CSS_SELECTOR, '/body/div[2]/div[2]/div/div[2]/div/div[2]/div/div[7]/div/div')
AdministrationButtonXpath2 = (By.XPATH, 'html/body/div[2]/div[2]/div/div[2]/div/div[2]/div/div[7]/div/div/text()')
AdministrationButtonXpath3 = (By.XPATH, '//div[7]/div/div')
contentFrame = (By.ID, 'ClearCore')
Page.py
from element import BasePageElement
from locators import MainPageLocators
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
class SearchTextElement(BasePageElement):
class BasePage(object):
def __init__(self, driver):
self.driver = driver
class LoginPage(BasePage):
search_text_element = SearchTextElement()
def userLogin_valid(self):
userName_textbox = self.driver.find_element(*MainPageLocators.usernameTxtBox)
userName_textbox.clear()
userName_textbox.send_keys("riaz.ladhani")
password_textbox = self.driver.find_element(*MainPageLocators.passwordTxtBox)
password_textbox.clear()
password_textbox.send_keys("test123")
submitButton = self.driver.find_element(*MainPageLocators.submitButton)
submitButton.click()
#mydriver.find_element_by_xpath(xpaths['usernameTxtBox']).clear()
def clickAdministration_button(self):
#administrationButton = self.driver.find_element(*MainPageLocators.AdministrationButton)
content_frame = self.driver.find_element(*MainPageLocators.contentFrame)
self.driver.switch_to.frame(content_frame)
#self.driver.switch_to.frame(*MainPageLocators.contentFrame)
#self.driver.Switch_to().Frame(*MainPageLocators.contentFrame)
#administrationButtonCSS = self.driver.find_element(*MainPageLocators.AdministrationButtonCSS)
#administrationButtonXpath= self.driver.find_element(*MainPageLocators.AdministrationButtonXpath)
#administrationButtonXpath= self.driver.find_element(*MainPageLocators.AdministrationButton_CSS_regex)
#administrationButtonCSS2 = self.driver.find_element(*MainPageLocators.AdministrationButtonCSS2)
adminButton = self.driver.find_element(*MainPageLocators.AdministrationButtonXpath3)
adminButton.click()
LoginPage_TestCase.py
import unittest
from selenium import webdriver
import page
class LoginPage_TestCase(unittest.TestCase):
def setUp(self):
self.driver = webdriver.Firefox()
self.driver.get("http://my-pc.company.local:8080/clearcore")
def test_login_valid_user(self):
login_page = page.LoginPage(self.driver)
login_page.userLogin_valid()
login_page.clickAdministration_button()
def tearDown(self):
self.driver.close()
if __name__ == "__main__":
unittest.main()
Answer: The "Administration" button is located inside the frame whose id is
**ClearCore**, not in the main page. That is why the element cannot be located
when the code executes.
So before clicking that button you need to switch to that frame either by
using
1. driver.switch_to_window("windowName")
2. driver.switch_to_frame("frameName")
Once we are done with working on frames, we will have to come back to the
parent frame which can be done using:
driver.switch_to_default_content()
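A minimal sketch of that switch-then-interact-then-return pattern with the Selenium
Python bindings (the frame id is the one from the page; the XPath for the button is
only illustrative):
    # switch into the iframe before locating anything inside it
    driver.switch_to.frame(driver.find_element_by_id("ClearCore"))
    driver.find_element_by_xpath("//div[text()='Administration']").click()
    # then return to the top-level document
    driver.switch_to.default_content()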
|
Pygame: Colliding Rectangle on multiple other rectangles
Question: I am attempting to create a game in which a block moves back and forth until
the player presses space. Upon which, the block jumps to the next line up and
stops.
Currently i am having problems with the collision code.
The error being thrown up by the shell is:
if doRectsOverlap(j['rect'], floors['line']):
TypeError: list indices must be integers, not str
I am stuck with understanding where my code has gone wrong. My knowledge of
how python works is very limited.
There is also code i have commented out to do with the floor moving dowards
when the player jumps. it has been commented out until i can get the
collisions working, but still included
Code Below:
import pygame, sys, time
from pygame.locals import *
def doRectsOverlap(rect1, rect2):
for a, b in [(rect1, rect2), (rect2, rect1)]:
# Check if a's corners are inside b
if ((isPointInsideRect(a.left, a.top, b)) or
(isPointInsideRect(a.left, a.bottom, b)) or
(isPointInsideRect(a.right, a.top, b)) or
(isPointInsideRect(a.right, a.bottom, b))):
return True
return False
def isPointInsideRect(x, y, rect):
if (x > rect.left) and (x < rect.right) and (y > rect.top) and (y < rect.bottom):
return True
else:
return False
# set up pygame
pygame.init()
mainClock = pygame.time.Clock()
# set up the window
WINDOWWIDTH = 480
WINDOWHEIGHT = 800
windowSurface = pygame.display.set_mode((WINDOWWIDTH, WINDOWHEIGHT), 0, 32)
pygame.display.set_caption('Jumper')
#Directions
LEFT = 4
RIGHT = 6
UP = 8
DOWN = 2
STILL = 5
#blocks location for jumping
#BLOCKLOCY = 700
#Binary for stopping movement
#STOPPER = 0
MOVESPEED = 1
# set up the colors
BLACK = (0, 0, 0)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
j = {'rect':pygame.Rect(240, 700, 20, 20), 'color':GREEN, 'dir':LEFT, 'jump':STILL}
f1 = {'line':pygame.Rect(0,720,480,2), 'color':GREEN, 'dir':STILL}
f2 = {'line':pygame.Rect(0,650,480,2), 'color':GREEN, 'dir':STILL}
floors = [f1,f2]
# run the game loop
while True:
# check for the QUIT event
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
# draw the black background onto the surface
windowSurface.fill(BLACK)
# move the block data structure
if j['dir'] == LEFT:
j['rect'].left -= MOVESPEED
if j['dir'] == RIGHT:
j['rect'].left += MOVESPEED
if j['jump'] == UP:
j['rect'].bottom -= MOVESPEED
#BLOCKLOCY -= MOVESPEED
if j['rect'].left < 0:
j['dir'] = RIGHT
if j['rect'].left > WINDOWWIDTH-j['rect'].width:
j['dir'] = LEFT
if event.type == KEYDOWN:
if event.key == K_SPACE:
j['jump'] = UP
if doRectsOverlap(j['rect'], floors['line']):
j['jump'] = STILL
#Floor controll code for moving level - not working currently
# for f in floors:
#if f['dir'] == DOWN:
# f['line'].y += MOVESPEED
# if event.type == KEYDOWN:
# if event.key == K_SPACE:
# f['dir'] = DOWN
# if f['line'].top == BLOCKLOCY:
# f['dir'] = STILL
# STOPPER = 1
#if f['line'].bottom == BLOCKLOCY:
# f['dir'] = STILL
# STOPPER = 1
# draw the block onto the surface
pygame.draw.rect(windowSurface, j['color'], j['rect'])
pygame.draw.rect(windowSurface, f['color'], f['line'])
# draw the window onto the screen
pygame.display.update()
mainClock.tick(40)
Answer: You are creating `floors` as a `list`:
f1 = {'line':pygame.Rect(0,720,480,2), 'color':GREEN, 'dir':STILL}
f2 = {'line':pygame.Rect(0,650,480,2), 'color':GREEN, 'dir':STILL}
floors = [f1,f2]
So when you call:
if doRectsOverlap(j['rect'], floors['line']):
j['jump'] = STILL
Your error message is telling you that you need an index that is an `int`:
for n in range(len(floors)):
if doRectsOverlap(j['rect'], floors[n]['line']):
j['jump'] = STILL
|
Why is this lxml.etree.HTMLPullParser leaking memory?
Question: I'm trying to use lxml's HTMLPullParser on Linux Mint but I'm finding that the
memory usage keeps increasing and I'm not sure why. Here's my test code:
# -*- coding: utf-8 -*-
from __future__ import division, absolute_import, print_function, unicode_literals
import lxml.etree
import resource
from io import DEFAULT_BUFFER_SIZE
for _ in xrange(1000):
with open('StackOverflow.html', 'r') as f:
parser = lxml.etree.HTMLPullParser()
while True:
buf = f.read(DEFAULT_BUFFER_SIZE)
if not buf: break
parser.feed(buf)
parser.close()
# Print memory usage
print((resource.getrusage(resource.RUSAGE_SELF)[2] * resource.getpagesize())/1000000.0)
StackOverflow.html is the homepage of stackoverflow that I've saved in the
same folder as the python script. I've tried adding explicit deletes and
clears but so far nothing has worked. What am I doing wrong?
Answer: Elements constructed by the parsers are leaking, and I can't see an API
contract violation in your code that's causing it. Since the objects survive a
manual garbage collection run with `gc.collect()`, your best bet is probably
to try a different parsing strategy as a workaround.
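For instance, a sketch of one alternative strategy (the non-incremental HTML parser,
re-created on each pass; whether it avoids the growth seen above should be re-checked
with the same resource measurement):
    import lxml.etree

    for _ in xrange(1000):
        # parse the whole file in one go instead of feeding a pull parser
        tree = lxml.etree.parse('StackOverflow.html', lxml.etree.HTMLParser())
        root = tree.getroot()
        # ... work with root here ...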
To see the root cause, I used the memory exploration module
[objgraph](http://mg.pov.lt/objgraph/) and installed
[xdot](http://pypi.python.org/pypi/xdot) to view the graphs it created.
Before running the code, I ran:
In [3]: import objgraph
In [4]: objgraph.show_growth()
After running the code, I ran:
In [6]: objgraph.show_growth()
tuple 1616 +147
_Element 146 +146
list 1100 +24
wrapper_descriptor 1423 +15
weakref 1155 +6
getset_descriptor 677 +4
dict 2777 +4
member_descriptor 315 +3
method_descriptor 891 +2
_TempStore 2 +1
In [7]: import random
In [8]: objgraph.show_chain(
...: objgraph.find_backref_chain(
...: random.choice(objgraph.by_type('_Element')), objgraph.is_proper_module))
Graph written to /tmp/objgraph-bfuwa9.dot (8 nodes)
Spawning graph viewer (xdot)
Note: the numbers might be different than what you see depending on the
webpage viewed.
|
gammu receive sms message python fails
Question: I have found a script on this website
<http://wammu.eu/docs/manual/smsd/run.html>
#!/usr/bin/python
import os
import sys
numparts = int(os.environ['DECODED_PARTS'])
# Are there any decoded parts?
if numparts == 0:
print('No decoded parts!')
sys.exit(1)
# Get all text parts
text = ''
for i in range(1, numparts + 1):
varname = 'DECODED_%d_TEXT' % i
if varname in os.environ:
text = text + os.environ[varname]
# Do something with the text
f = open('/home/pi/output.txt','w')
f.write('Number %s have sent text: %s' % (os.environ['SMS_1_NUMBER'], text))
And I know that my gammu-smsd is working fine, because I can turn off my LED
lamp on the Raspberry by sending an SMS to it. But my question is: why is this
script failing? Nothing is happening, and when I try to run the script by
itself it also fails.
What I would like to do is just receive the SMS, then read the content and
save the content and the phone number which sent the SMS to a file.
I hope you understand my issue. Thank you in advance, all the best.
Answer: In the gammu-smsd config file, you can use the file backend which does this
for you automatically.
See this example from the gammu documentation
<http://wammu.eu/docs/manual/smsd/config.html#files-service>
[smsd]
Service = files
PIN = 1234
LogFile = syslog
InboxPath = /var/spool/sms/inbox/
OutboxPath = /var/spool/sms/outbox/
SentSMSPath = /var/spool/sms/sent/
ErrorSMSPath = /var/spool/sms/error/
Also see options for the file backend to tailor to your needs.
<http://wammu.eu/docs/manual/smsd/config.html#files-backend-options>
Hope this helps :)
|
Python tarfile and zipfile producing archives with different MD5 for 2 identical files
Question: I am trying to ensure that 2 archives with the same files inside produce the
same MD5 checksum.
For example, file1.txt and file2.txt have identical content, the only
difference between them is _creation time_. However, they produce the same
MD5:
>>> import md5
>>> md5.md5(open("file1.zip","rb").read()).hexdigest()
'c99e47de6046f141693b9aecdbdd2dc2'
>>> md5.md5(open("file2.zip","rb").read()).hexdigest()
'c99e47de6046f141693b9aecdbdd2dc2'
However, when I create tarfile (or zipfile) archives for both identical files,
I get completely different MD5s. Note I am using tarfile for file 1 and 2 in
the exact same fashion.
>>> import tarfile, md5
>>> #file 1
>>> a1 = tarfile.open('archive1.tar.gz','w:gz')
>>> a1.add("file1.txt")
>>> a1.close()
>>> md5.md5(open("archive1.zip","rb").read()).hexdigest()
'0865abb94f6fd92df990963c75519b2e'
>>> #file 2
>>> a2 = tarfile.open('archive2.tar.gz','w:gz')
>>> a2.add("file2.txt")
>>> a2.close()
>>> md5.md5(open("archive2.zip","rb").read()).hexdigest()
'cee53e271a1f457dfd5b5401d8311fcc'
Any ideas why this is occurring? I am guessing it has something to do with the
header data in the archive that is causing this. Perhaps the archives maintain
the different creation times of file1 and file2, thus the different checksums.
Answer: Whilst the payload of the two archives may be identical, the underlying
structure of the archives is different, and compression only adds to those
differences.
Zip and Tar are both archiving formats, and they can both be combined with
compression; more often than not, they are. The combinations of differing
compression algorithms and fundamentally different underlying format structure
will result in different MD5s.
* * *
In this case, the last modification time and names of the underlying files are
different, even though the contents of the files are the same; this results in
a different MD5.
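If the goal is archives that hash identically, one workaround (a sketch, not part of
the original answer) is to normalize the metadata tarfile records: zero the mtime and
ownership with a `filter`, use the same archive member name, and write a plain
(uncompressed) tar, since gzip embeds its own timestamp:
    import tarfile

    def normalize(tarinfo):
        # drop the metadata that differs between otherwise-identical files
        tarinfo.mtime = 0
        tarinfo.uid = tarinfo.gid = 0
        tarinfo.uname = tarinfo.gname = ''
        return tarinfo

    archive = tarfile.open('archive1.tar', 'w')
    archive.add('file1.txt', arcname='file.txt', filter=normalize)
    archive.close()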
|
BeautifulSoup (bs4) parsing wrong
Question: Parsing this sample document with bs4, from python 2.7.6:
<html>
<body>
<p>HTML allows omitting P end-tags.
<p>Like that and this.
<p>And this, too.
<p>What happened?</p>
<p>And can we <p>nest a paragraph, too?</p></p>
</body>
</html>
Using:
from bs4 import BeautifulSoup as BS
...
tree = BS(fh)
HTML has, for ages, allowed omitted end-tags for various element types,
including P (check the schema, or a parser). However, bs4's prettify() on this
document shows that it doesn't end any of those paragraphs until it sees
</body>:
<html>
<body>
<p>
HTML allows omitting P end-tags.
<p>
Like that and this.
<p>
And this, too.
<p>
What happened?
</p>
<p>
And can we
<p>
nest a paragraph, too?
</p>
</p>
</p>
</p>
</p>
</body>
It's not prettify()'s fault, because traversing the tree manually I get the
same structure:
<[document]>
<html>
β
<body>
β
<p>
HTML allows omitting P end-tags.ββ
<p>
Like that and this.ββ
<p>
And this, too.ββ
<p>
What happened?
</p>
β
<p>
And can we
<p>
nest a paragraph, too?
</p>
</p>
β
</p>
</p>
</p>
</body>
β
</html>
β
</[document]>
Now, this would be the right result for XML (at least up to </body>, at which
point it should report a WF error). But this ain't XML. What gives?
Answer: The doc at <http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-
a-parser> tells how to get BS4 to use different parsers. Apparently the
default is html.parser, which the BS4 doc says is broken before Python 2.7.3,
but it apparently still has the problem described above in 2.7.6.
Switching to "lxml" was unsuccessful for me, but switching to "html5lib"
produces the correct result:
tree = BS(htmSource, "html5lib")
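(html5lib is a separate package; assuming a typical setup it can be installed with
`pip install html5lib`, after which BeautifulSoup will find it by the name used above.)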
|
TypeError in dictionary
Question: I am trying to pull specific values from a dictionary but my program keeps
throwing a TypeError on some values from the dictionary. Can someone tell me
why?
**python program**
import csv,os,re,sys
input_dict = csv.DictReader(open("./MCPlayerData/AllPlayerData2.csv"))
def list_scale_values(regexmatch,stopOnNA,person):
data=[]
for key in person:
if re.match(regexmatch, key):
try:
data.append(int(person[key]))
except (ValueError):
data.append('NA')
if(stopOnNA):
data=['NA'] #if any NA return NA
break
return data
try:
for person in input_dict:
print(list_scale_values(r'npi[0-9]+',True,person))
except (TypeError):
type, value, traceback = sys.exc_info()
print(type)
print(value)
print(traceback)
print '\n---\n',person,'\n---'
sys.exit()
print('DONE')
**error output**
Traceback (most recent call last):
File "correlations.py", line 22, in <module>
print(list_scale_values(r'npi[0-9]+',True,person))
File "correlations.py", line 9, in list_scale_values
if re.match(regexmatch, key):
File "/usr/lib/python2.7/re.py", line 137, in match
return _compile(pattern, flags).match(string)
TypeError: expected string or buffer
Answer: To avoid this problem, check that the key is a valid string before the
regular expression match is attempted.
So instead of `if re.match(regexmatch, key):`
write the following code
if key is not None and re.match(regexmatch, key):
**To answer the question of why it happens for some `person` entries in the CSV
dictionary:**
If a row of the CSV file has more fields than the header has columns,
`csv.DictReader` collects the extra values under the key `None`, so the problem
can appear for only certain rows (people).
To give an example:
consider the following CSV file
name,type,address,value
zoo,1,111,2
foo,2,222,,
bar,3,333,5
This would get the following results for the persons in
`csv.DictReader(open("filename.csv"))`
{'type': '1', 'name': 'zoo', 'value': '2', 'address': '111'}
{None: [''], 'type': '2', 'name': 'foo', 'value': '', 'address': '222'}
{'type': '3', 'name': 'bar', 'value': '5', 'address': '333'}
So in this case it would work for `people[0]` and `people[2]` but fail at
`people[1]`
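A small sketch of another way to sidestep it (not from the original answer):
`csv.DictReader` accepts a `restkey` argument, so any extra trailing fields land
under a named key instead of `None`:
    input_dict = csv.DictReader(open("./MCPlayerData/AllPlayerData2.csv"), restkey='extra')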
|
how do you test private static method in module in python
Question: We have a module of static methods in our Python app. These methods use a lot
of private helpers (e.g. "__do_sub_task2(**args)"). I would like to write unit tests
for these private static methods within this module, but I am getting reference
errors.
Is there a way to do this?
**update: adding scenario**
I have a module file named 'my_module.py' contents of said file is as follows:
def public_method_foo(my_number):
return __sub_method_bar(my_number * 10)
def __sub_method_bar(other_number):
return other_number + 11
**update #2** The reason I am asking this question is because I have a similar
scenario as above, but when I add the following reference to my test.py module
file:
from my_module import __sub_method_bar
and try to use it in my test, I get the following exception in my test
global name '_MyTests__sub_method_bar' is not defined
Answer: What you have are not methods, not private, and not static; they're just plain
old public functions in the module. So you call them the same way as any other
function. For your example:
>>> my_module.__sub_method_bar(5)
That's it; nothing tricky.*
* Well, there is _one_ tricky thing, but it's probably not going to affect you here: If `my_module` doesn't have an `__all__`, and you do `from my_module import *`, you will not get any of the globals (including functions) whose names start with `_`. But normally your unit tests are going to `import my_module`, so this won't be relevant.
* * *
_Methods_ are callables that are members of a _class_. And methods _can_ be
private ("private" in this sense means "visible only to this class, not even
to super- or sub-classes", so it doesn't make sense for anything but methods).
The tutorial chapter on
[Classes](https://docs.python.org/3/tutorial/classes.html#private-variables)
explains how private methods are implemented, with name-mangling. Methods
(private or otherwise) can also be static ("static" in this context means
"does not take the normal `self`", so again, it doesn't make sense for
anything but methods). Either way, for a private method, you have to manually
demangle the name to call it from outside:
>>> thingy = Thingy()
>>> thingy._Thingy__private_method(5)
>>> Thingy._Thingy__private_static_method(5)
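One wrinkle behind the error in the question's second update (a hedged sketch using
the question's module and function): inside a class body, including a
`unittest.TestCase`, any bare `__name` reference gets mangled, which is where
`_MyTests__sub_method_bar` comes from. Binding the function outside the class, or
looking it up by string, avoids that:
    import unittest
    import my_module

    # bound at module level, so no class-based name mangling applies
    _sub_method_bar = getattr(my_module, '__sub_method_bar')

    class MyTests(unittest.TestCase):
        def test_sub_method_bar(self):
            self.assertEqual(_sub_method_bar(5), 16)
            # or look it up by string inside the class; strings are never mangled
            self.assertEqual(getattr(my_module, '__sub_method_bar')(5), 16)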
|
Little assistance with my tic-tac-toe program
Question: I need some help with my tic-tac-toe game that I created in Python 3. Have a
look at my fun program and try it out. After that, please help me create a
`while` statement in my program. **That is, while the user's chosen square is
filled, you should continue to ask them until they choose an empty square.
When they choose an empty square, the program continues as before.** I am not
used to `while` statements, please help me with this!
Here is my program:
from turtle import *
def setUp():
#Set up the screen and turtle
win = Screen()
tic = Turtle()
tic.speed(10)
#Change the coordinates to make it easier to tranlate moves to screen coordinates:
win.setworldcoordinates(-0.5,-0.5,3.5, 3.5)
#Draw the vertical bars of the game board:
for i in range(1,3):
tic.up()
tic.goto(0,i)
tic.down()
tic.forward(3)
#Draw the horizontal bars of the game board:
tic.left(90) #Point the turtle in the right direction before drawing
for i in range(1,3):
tic.up()
tic.goto(i,0)
tic.down()
tic.forward(3)
tic.up() #Don't need to draw any more lines, so, keep pen up
#Set up board:
board = [["","",""],["","",""],["","",""]]
return(win,tic,board)
def playGame(tic,board):
#Ask the user for the first 8 moves, alternating between the players X and O:
for i in range(4):
x,y = eval(input("Enter x, y coordinates for X's move: "))
tic.goto(x+.25,y+.25)
tic.write("X",font=('Arial', 90, 'normal'))
board[x][y] = "X"
x,y = eval(input("Enter x, y coordinates for O's move: "))
tic.goto(x+.25,y+.25)
tic.write("O",font=('Arial', 90, 'normal'))
board[x][y] = "O"
# The ninth move:
x,y = eval(input("Enter x, y coordinates for X's move: "))
tic.goto(x+.25,y+.25)
tic.write("X",font=('Arial', 90, 'normal'))
board[x][y] = "X"
def checkWinner(board):
for x in range(3):
if board[x][0] != "" and (board[x][0] == board[x][1] == board[x][2]):
return(board[x][0]) #we have a non-empty row that's identical
for y in range(3):
if board[0][y] != "" and (board[0][y] == board[1][y] == board[2][y]):
return(board[0][y]) #we have a non-empty column that's identical
if board[0][0] != "" and (board[0][0] == board[1][1] == board[2][2]):
return(board[0][0])
if board[2][0] != "" and (board[2][0] == board[1][1] == board[2][0]):
return(board[2][0])
return("No winner")
def cleanUp(tic,win):
#Display an ending message:
tic.goto(-0.25,-0.25)
tic.write("Thank you for playing!",font=('Arial', 20, 'normal'))
win.exitonclick()#Closes the graphics window when mouse is clicked
def main():
win,tic,board = setUp() #Set up the window and game board
playGame(tic,board) #Ask the user for the moves and display
print("\nThe winner is", checkWinner(board)) #Check for winner
cleanUp(tic,win) #Display end message and close window
main()
Answer: You're probably looking for something like this:
x,y = None,None
while x is None or y is None or board[x][y] != "":
x,y = eval(input("Enter x, y coordinates for X's move: "))
This will keep asking the user for input as long as `x` and `y` do not
indicate an empty tile on the board.
By the way, you might consider changing the way you process the input. Right
now you are using `eval`, which can be dangerous as any input can be executed.
It might be better to process the input manually, something like this:
x,y = map(int,input("Enter coordinates").split(','))
This splits the input at the comma, turning it into a list of strings. `map`
then applies the function `int` to each element in the list, turning them into
integers. These are then unpacked into `x` and `y`.
|
How to use Matlab's imresize in python
Question: I'm transferring Matlab's `imresize` code into python. I found the scipy's
`imresize`, but I get a different results from Matlab.
How to get the same results as Matlab by python.
Python/scipy `imresize`
from scipy.misc import imresize
import numpy as np
dtest = np.array(([1,2,3],[4,5,6],[7,8,9]))
scale = 1.4
dim = imresize(dtest,1/scale)
Matlab `imresize`
dtest = [1,2,3;
4,5,6;
7,8,9];
scale = 1.4;
dim = imresize(dtest,1/scale);
These two pieces of code return different results.
Answer: The `scipy.misc.imresize` function is a bit odd for me. For one thing, this is
what happens when I specify the sample 2D image you provided to a
`scipy.misc.imresize` call on this image with a scale of 1.0. Ideally, it
should give you the same image, but what we get is this (in IPython):
In [35]: from scipy.misc import imresize
In [36]: import numpy as np
In [37]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]))
In [38]: out = imresize(dtest, 1.0)
In [39]: out
Out[39]:
array([[ 0, 32, 64],
[ 96, 127, 159],
[191, 223, 255]], dtype=uint8)
Not only does it change the type of the output to `uint8`, but it **scales**
the values as well. For one thing, it looks like it makes the maximum value of
the image equal to 255 and the minimum value equal to 0. MATLAB's `imresize`
does not do this and it resizes an image in the way we expect:
>> dtest = [1,2,3;4,5,6;7,8,9];
>> out = imresize(dtest, 1)
out =
1 2 3
4 5 6
7 8 9
However, you need to be cognizant that MATLAB performs the resizing [with
anti-aliasing enabled by
default](http://www.mathworks.com/help/images/resizing-an-
image.html#f12-24819). I'm not sure what `scipy.misc.imresize` does here, but
I'll bet that there is no anti-aliasing enabled.
As such, I probably would not use `scipy.misc.imresize`. The closest thing to
what you want is either OpenCV's
[`resize`](http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html#resize)
function, or scikit-image's [`resize`](http://scikit-
image.org/docs/dev/api/skimage.transform.html#resize) function. Both of these
have no anti-aliasing. If you want to make both Python and MATLAB match each
other, use the bilinear interpolation method. `imresize` uses bicubic
interpolation by default and I know for a fact that MATLAB uses custom kernels
to do so, and so it will be much more difficult to match their outputs. See
this post for some more informative results:
[MATLAB vs C++ vs OpenCV -
imresize](http://stackoverflow.com/questions/26812289/matlab-vs-c-vs-opencv-
imresize)
For the best results, don't specify a scale - specify a target output size to
reproduce results. MATLAB, OpenCV and scikit-image, when specifying a floating
point scale, act differently with each other. I did some experiments and by
specifying a floating point size, I was unable to get the results to match.
Besides which, scikit-image does not support taking in a scale factor.
As such, `1/scale` in your case is close to a `2 x 2` size output, and so
here's what you would do in MATLAB:
>> dtest = [1,2,3;4,5,6;7,8,9];
>> out = imresize(dtest, [2,2], 'bilinear', 'AntiAliasing', false)
out =
2.0000 3.5000
6.5000 8.0000
With Python OpenCV:
In [93]: import numpy as np
In [94]: import cv2
In [95]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]), dtype='float')
In [96]: out = cv2.resize(dtest, (2,2))
In [97]: out
Out[97]:
array([[ 2. , 3.5],
[ 6.5, 8. ]])
With scikit-image:
In [100]: from skimage.transform import resize
In [101]: dtest = np.array(([1,2,3],[4,5,6],[7,8,9]), dtype='uint8')
In [102]: out = resize(dtest, (2,2), order=1, preserve_range=True)
In [103]: out
Out[103]:
array([[ 2. , 3.5],
[ 6.5, 8. ]])
|
Using an instance on form "doesn't raise an HttpResponse object" even though the object exists in the database
Question: I'm trying to fetch forms for floorplans for individual properties. I can
check that the object exists in the database, but when I try to create a form
with an instance of it I receive this error:
Traceback:
File "/Users/balrog911/Desktop/mvp/mvp_1/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
130. % (callback.__module__, view_name))
Exception Type: ValueError at /dashboard-property/253/
Exception Value: The view properties.views.dashboard_single_property didn't return an HttpResponse object. It returned None instead.
My models.py:
class Property(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, null=True, blank=True, related_name='user')
name = models.CharField(max_length=120, help_text="This is the name that will display on your profile")
image = models.ImageField(upload_to='properties/', null=True, blank=True)
options=(('House', 'House'),('Condo','Condo'),('Apartment','Apartment'))
rental_type = models.CharField(max_length=120, blank=True, null=True, choices=options, default='Apartment')
address = models.CharField(max_length=120)
phone_number = models.CharField(max_length=120, blank=True, null=True)
email = models.EmailField(max_length=120, blank=True, null=True)
website = models.CharField(max_length=250, blank=True, null=True)
description = models.CharField(max_length=500, blank=True, null=True)
lat = models.CharField(max_length=120, blank=True, null=True)
lng = models.CharField(max_length=120, blank=True, null=True)
coordinates =models.CharField(max_length=120, blank=True, null=True)
slug = models.SlugField(unique=True, max_length=501)
active = models.BooleanField(default= True)
date_added = models.DateTimeField(auto_now_add=True)
def save(self):
super(Property, self).save()
max_length = Property._meta.get_field('slug').max_length
slug_name = slugify(self.name)
self.slug = '%s-%d' % (slug_name, self.id)
self.coordinates = geo_lat_lng(self.address)
self.lat = self.coordinates[0]
self.lng = self.coordinates[1]
super(Property, self).save()
def __unicode__(self):
return '%s-%s-%s' % (self.id, self.name, self.address)
def get_absolute_url(self):
return reverse("single_property", kwargs={"slug": self.slug})
def get_dashboard_url(self):
return reverse("dashboard_single_property", kwargs={"id": self.id})
class FloorPlan(models.Model):
property_name = models.ForeignKey(Property, related_name='property_name')
floor_plan_name = models.CharField(max_length=120, blank=True, null=True)
numbers = (('0','0'),('1','1'),('2','2'),('3','3'),('4','4'),('5','5'),('6+','6+'),)
bedrooms = models.CharField(max_length=120, blank=True, null=True, choices=numbers)
bathrooms = models.CharField(max_length=120, blank=True, null=True, choices=numbers)
sqft = models.IntegerField(max_length=120, blank=True, null=True)
min_price = models.IntegerField(max_length=120, blank=True, null=True)
max_price = models.IntegerField(max_length=120, blank=True, null=True)
availability = models.DateField(null=True, blank=True, help_text='Use mm/dd/yyyy format')
image = models.ImageField(upload_to='floor_plans/', null=True, blank=True)
def __unicode__(self):
return '%s' % (self.property_name)
My views.py:
def dashboard_single_property(request, id):
if request.user.is_authenticated():
user = request.user
try:
single_property = Property.objects.get(id=id)
user_properties = Property.objects.filter(user=user)
if single_property in user_properties:
user_property = Property.objects.get(id=id)
#Beginning of Pet Policy Instances
user_floor_plan = FloorPlan.objects.select_related('Property').filter(property_name=user_property)
if user_floor_plan:
print user_floor_plan
plans = user_floor_plan.count()
plans = plans + 1
FloorPlanFormset = inlineformset_factory(Property, FloorPlan, extra=plans)
formset_floor_plan = FloorPlanFormset(instance=user_floor_plan)
print "formset_floor_plan is True"
else:
floor_plan_form = FloorPlanForm(request.POST or None)
formset_floor_plan = False
print 'formset is %s' % (formset_floor_plan)
#End
#Beginning of Pet Policy Instances
user_pet_policy = PetPolicy.objects.select_related('Property').filter(property_name=user_property)
print user_pet_policy
if user_pet_policy:
print user_pet_policy
#pet_policy_form = PetPolicyForm(request.POST or None, instance=user_pet_policy)
pet_policy_form = PetPolicyForm(request.POST or None)
else:
pet_policy_form = PetPolicyForm(request.POST or None)
#End
basic_form = PropertyForm(request.POST or None, instance=user_property)
context = {
'user_property': user_property,
'basic_form': basic_form,
'floor_plan_form': floor_plan_form,
'formset_floor_plan': formset_floor_plan,
'pet_policy_form': pet_policy_form,
}
template = 'dashboard/dashboard_single_property.html'
return render(request, template, context)
else:
return HttpResponseRedirect(reverse('dashboard'))
except Exception as e:
raise e
#raise Http404
print "whoops"
else:
return HttpResponseRedirect(reverse('dashboard'))
**EDIT:** Took Vishen's tip to make sure the error was raised, updated the
views and now I'm getting this error. Here's the full traceback:
Environment:
Request Method: GET
Request URL: http://localhost:8080/dashboard-property/253/
Django Version: 1.7.4
Python Version: 2.7.5
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.sites',
'django.contrib.sitemaps',
'django.contrib.staticfiles',
'base',
'properties',
'renters',
'allauth',
'allauth.account',
'crispy_forms',
'datetimewidget',
'djrill',
'import_export',
'multiselectfield')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django.middleware.locale.LocaleMiddleware')
Traceback:
File "/Users/balrog911/Desktop/mvp/mvp_1/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
111. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/balrog911/Desktop/mvp/mvp_1_live/src/properties/views.py" in dashboard_single_property
82. raise e
Exception Type: AttributeError at /dashboard-property/253/
Exception Value: 'QuerySet' object has no attribute 'pk'
**EDIT:** Per Vishen's suggestion, removed the try statement to see if the
error would be made more clear. It looks like the issue is with line 51:
formset_floor_plan = FloorPlanFormset(instance=user_floor_plan)
Here's the traceback:
Traceback:
File "/Users/balrog911/Desktop/mvp/mvp_1/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
111. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/balrog911/Desktop/mvp/mvp_1_live/src/properties/views.py" in dashboard_single_property
51. formset_floor_plan = FloorPlanFormset(instance=user_floor_plan)
File "/Users/balrog911/Desktop/mvp/mvp_1/lib/python2.7/site-packages/django/forms/models.py" in __init__
855. if self.instance.pk is not None:
Exception Type: AttributeError at /dashboard-property/253/
Exception Value: 'QuerySet' object has no attribute 'pk'
Answer: The problem is that if an `Exception` gets raised inside the `try` statement,
you catch it, print "whoops" and then the function ends without returning
anything; but it needs to return an `HttpResponse`. There are a few ways to
solve this, but I suggest making the `except` the following:
except Exception as e:
raise e # This will throw a 500
I am guessing that something in the `try` statement is raising an error but you are
silencing it. But you should throw the error so you can see and fix whatever
is breaking.
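As a follow-up to the traceback the re-raise exposed (this goes beyond the original
answer, so treat it as a hedged suggestion): a formset built with
`inlineformset_factory(Property, FloorPlan, ...)` expects `instance=` to be the parent
`Property` object, not a queryset of `FloorPlan` rows, so the line that breaks would
look something like:
    FloorPlanFormset = inlineformset_factory(Property, FloorPlan, extra=plans)
    formset_floor_plan = FloorPlanFormset(instance=user_property)  # parent object, not the queryset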
|
Facebook not consistently scraping information as per OG meta tags
Question: I automatically post adverts for people's jobs on members' Facebook pages
using Python's facebook module. I have applied for and received Facebook's
permission to do this. Now when I post to most people's pages, the image and
description are not being scraped; however, when I post with the access token
of the FB site owner of the business page associated with the application,
they are. The code I use to post to the pages:
import facebook
graph = facebook.GraphAPI(access_token)
# record the id of the wall post so we can like it
fbWallPost = graph.put_wall_post(message=fbmsg, attachment=fbpost, profile_id=pageID)
Running this in the python shell posts a URL that gets scraped, but running it
from within the application it very frequently does not. The page has the
following meta tags:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Fair Work</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="author" content="Fair Work">
<!-- Facebook crawler uses these JRT -->
<meta property="og:site_name" content="Fair Work"/>
<meta property="fb:app_id" content="322643981237986" />
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
<meta property="og:url" content="http://fairwork.com.au/job/final-job-on-thursday-april-30th/" />
<meta property="og:description" content="My last test today"/>
<meta property="og:type" content="article"/>
<meta property="og:image" content="http://fairwork.com.au/media/cache/cd/b0/cdb05f8dd8885351925bf43076870937.jpg"/>
<meta property="og:image:width" content="400" />
<meta property="og:image:height" content="400" />
The application ID is correct. I go to Facebook's debugger and enter the URL.
I click on "Debug", and I get an error that the "og:type" meta tag is missing,
which it is not as you can see:  I then click on "Fetch new scrape
information" ...  ...and the image and description are
loaded up fine. From this point if I post the link on my site, the image and
description display, however all previously posted links do not have the
information scraped. The two successive posts of the same link in the next
image were done before (bottom) and after (top) running through the debug
process mentioned previously.
The page linked to is publicly available and served by Django. I want to be
able to automatically post the ads (when this is requested by the FB page
owner) and have the image and description scraped by FB on posting. Any ideas?
Answer: It seems that NGINX is causing the HTTP calls to the Facebook Graph API not to
work properly. When
    fbWallPost = graph.put_wall_post(message=fbmsg, attachment=fbpost, profile_id=pageID)
is run through Celery (which runs separately from Django, and not through the
web server), Facebook scrapes the posted link and shows the image and
description referenced by the OG meta tags. When Django runs `put_wall_post`, the
post occurs, but Facebook does not scrape the link. Other calls to the API
also don't work when run from a Django view. I can only think that NGINX is
interfering with the outgoing HTTP request in some way.
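For reference, "running it through Celery" could look roughly like the sketch below; the task name and module layout are assumptions, only the `put_wall_post` call itself comes from the original post:
    # tasks.py -- hypothetical Celery task wrapping the Graph API call
    import facebook
    from celery import shared_task

    @shared_task
    def post_job_ad(access_token, fbmsg, fbpost, page_id):
        graph = facebook.GraphAPI(access_token)
        # Return the id of the wall post so it can be liked later
        return graph.put_wall_post(message=fbmsg, attachment=fbpost, profile_id=page_id)
The Django view would then call `post_job_ad.delay(...)` instead of hitting the Graph API directly.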
|
Node.JS onoff not picking up GPIO correctly
Question: I'm building a simple infrared breakbeam circuit to plug into my RPi 2. I have
some working code in python that successfully picks up when my infrared beam
is broken, but I want to use node.js instead of python.
Here is my python code, nice and simple:
import RPi.GPIO as GPIO
import time
GPIO.setmode(GPIO.BOARD)
GPIO.setup(7, GPIO.IN)
    try:
        while True:
            print(GPIO.input(7))
            time.sleep(0.01)
    except KeyboardInterrupt:
        GPIO.cleanup()
Now I did some reading up on different packages for node.js that let me use
GPIO on the Pi and decided that `onoff` looked like the best one because it
works asynchronously with callbacks which I would like to use.
This is the code I'm trying to use for node:
    var Gpio = require('onoff').Gpio,
        infrared = new Gpio(7, 'in');

    var interval = setInterval(function()
    {
        console.log(infrared.readSync() ^ 1);
    }, 100);

    function exit() {
        infrared.unexport();
        process.exit();
    }
process.on('SIGINT', exit);
The problem is that with node I always get the same signal of 0 no matter what I
do. I have tried eliminating my circuit as the problem by just using a simple
button instead, and even that doesn't work (I tested the same circuit using
python and that worked fine).
This isn't even using the asynchronous part of it (which also doesn't work
since no interrupt ever happens).
I have tried using GPIO Admin to export the pin I'm using:
pi@counter ~ $ sudo gpio-admin export 7
gpio-admin: failed to change group ownership of /sys/devices/virtual/gpio/gpio7/direction: No such file or directory
`/sys/devices/virtual/gpio/` does not exist on my system.
Do I have to do anything specific to get node to play nicely with my GPIO?
Note that I wrote this using Adafruit WebIDE, and yes I have tried executing
it out of the context of the IDE and it still doesn't work. I am using the
latest stable build of raspbian as of 2015/04/29 with a fully updated system,
using nodejs v0.12.2.
Answer: The main problem is that the node library and your Python code are not using
the same numbering scheme. `RPi.GPIO` with `GPIO.setmode(GPIO.BOARD)` counts
physical pins, so `7` in the Python code means physical pin 7, i.e. BCM GPIO 4.
`onoff` expects the GPIO (BCM/sysfs) number, so `new Gpio(7, 'in')` addresses
BCM GPIO 7, which sits on physical pin 26 of the Raspberry Pi. Pass the GPIO
number that matches the pin your sensor is actually wired to and it should work.
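A minimal sketch of the corrected setup, assuming the sensor stays wired to physical pin 7 (BCM GPIO 4):
    var Gpio = require('onoff').Gpio,
        infrared = new Gpio(4, 'in');  // BCM GPIO 4 == physical pin 7 used by the Python code

    setInterval(function () {
        console.log(infrared.readSync() ^ 1);
    }, 100);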
|
separate a string 2 by 2 using Python3 (MAC address format)
Question: I want to use a function like "{}".format(variable)
so I can transform a string like:
> A1B345FS
into something like:
> A1:B3:45:FS
I am not able to use a while or a for loop.
Can I use .join(variable)?
Answer: I think the following code solves the problem.
    import re

    s = "A1B345FS"
    s2 = re.findall('..', s)   # split the string into pairs of characters
    s3 = ":".join(s2)
    print(s3)                  # A1:B3:45:FS
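Since the question specifically asks about `.join()`, the same result can also be produced without `re`, for example with slicing (a sketch, assuming the input length is always even):
    s = "A1B345FS"
    pairs = [s[i:i+2] for i in range(0, len(s), 2)]
    print(":".join(pairs))   # A1:B3:45:FS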
|
Cannot import MySQLdb on Windows with Anaconda Python Stack
Question: I recently tried to install mysql-python with these methods, but none of them worked:
* conda install mysql-python
* pip install mysql-python
* easy_install mysql-python
* I followed [this](http://stackoverflow.com/questions/26705890/cannot-import-mysqldb-python-windows-8-1)
* I installed from [this link](http://www.lfd.uci.edu/~gohlke/pythonlibs/)
After all of these installations I am able to see the package in Anaconda\Lib\site-
packages, and in ipython tab completion works for MySQLdb. However, if I try
to import it, it raises:
ImportError Traceback (most recent call last)
<ipython-input-1-dd22983d5391> in <module>()
----> 1 import MySQLdb
ImportError: No module named MySQLdb
Do you have any suggestion to make it work ?
Answer: Check whether the MySQLdb package is on the module search path:
>>> import sys
>>> s = sorted(sys.path)
>>> for ss in s: print ss
The `Anaconda\Lib\site-packages` folder should appear in the list. If not, you
can:
* add the path to `sys.path` in your client program (just an `.append()`, as `sys.path` is a list; see the sketch below),
* or add it globally by modifying the `PYTHONPATH` environment variable.
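For example, a quick check/fix at the top of the client script could look like this (the exact Anaconda location is an assumption and depends on where it was installed):
    import sys

    site_packages = r"C:\Anaconda\Lib\site-packages"   # hypothetical install location
    if site_packages not in sys.path:
        sys.path.append(site_packages)

    import MySQLdb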
|
How to import module and use the methods in python
Question: I am trying to learn OOP in Python and I've created a class. I am
trying to import the module and use the methods that I've defined in it. I am
learning from the book, Practical Programming. I've tried various things but
with no success. Any help will be appreciated. Thanks in advance.
This is my code:
    class mynum:
        def __init__(self, num, limit):
            self.num = num
            self.limit = limit

        def numf(self):
            num = int(input("enter a number"))
            limit = int(input('enter the limit'))
            total = 0
            while (num < limit):
                num = num + 9
                if num >= limit:
                    break
                else:
                    total = total + num
                    print(num)
            print("total=", total)
And lastly, the error I got while trying it:
Python 3.4.0 (default, Apr 11 2014, 13:05:18)
[GCC 4.8.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>>
>>> import eight
>>>
>>> numb = eight.mynum()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() missing 2 required positional arguments: 'num' and 'limit'
>>> numb = eight.mynum(3,40)
>>> numb
<eight.mynum object at 0xb710412c>
>>>
Answer: Your module import works, but your `__init__()` expects two parameters, `num` and
`limit`, which you are not passing on the `>>> numb = eight.mynum()` line.
When you then pass them here, `>>> numb = eight.mynum(3,40)`, you get an object
of your `mynum` type, so all is good.
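To actually run the method you defined, call it on the instance; a quick sketch (note that `numf()` prompts for its own input rather than using the values stored in `__init__`):
    numb = eight.mynum(3, 40)
    numb.numf()   # prompts for a number and a limit, then prints the running total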
|
Run Python files as .exe files without Python installed
Question: I'm trying to run a python file on a system without python installed. I'm
using py2exe, which gives me a .pyc file that runs fine on my system, but
when I give it to a friend without python, it tells him Windows can't run the
file.
My `setup.py` file contains this;
from distutils.core import setup
import py2exe
setup(console=['PyVersionControl.py'])
When I run py2exe in the commandline, this is output
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\chdick>python setup.py py2exe
running py2exe
3 missing Modules
------------------
? readline imported from cmd, code, pdb
? win32api imported from platform
? win32con imported from platform
Building 'dist\PyVersionControl.exe'.
Building shared code archive 'dist\library.zip'.
Copy c:\windows\system32\python34.dll to dist
Copy C:\Python34\DLLs\pyexpat.pyd to dist\pyexpat.pyd
Copy C:\Python34\DLLs\_ctypes.pyd to dist\_ctypes.pyd
Copy C:\Python34\DLLs\unicodedata.pyd to dist\unicodedata.pyd
Copy C:\Python34\DLLs\_hashlib.pyd to dist\_hashlib.pyd
Copy C:\Python34\DLLs\_socket.pyd to dist\_socket.pyd
Copy C:\Python34\DLLs\_tkinter.pyd to dist\_tkinter.pyd
Copy C:\Python34\DLLs\_bz2.pyd to dist\_bz2.pyd
Copy C:\Python34\DLLs\select.pyd to dist\select.pyd
Copy C:\Python34\DLLs\_ssl.pyd to dist\_ssl.pyd
Copy C:\Python34\DLLs\_lzma.pyd to dist\_lzma.pyd
Copy DLL C:\Python34\DLLs\tk86t.dll to dist\
Copy DLL C:\Python34\DLLs\tcl86t.dll to dist\
C:\Users\chdick>
Answer: You need to use `py2exe` to create an executable file from your script (i.e.
create `script.exe` from `script.py`).
If you have the correct version of `py2exe` installed, you should be able to
type `python -m py2exe.build_exe script.py`.
See the [py2exe package page](https://pypi.python.org/pypi/py2exe/) for
details.
|
Python get value from input element with lxml xpath
Question: I want to make a PayPal cURL login, so I need the auth value. I try to get the
auth value from the html source with this python code
import StringIO
from xml.etree.ElementTree import ElementTree
r = requests.get("https://paypal.com/cgi-bin/webscr?cmd=_login-run")
    login_page = r.text.encode('utf-8')  # printing html source
    html = lxml.html.fromstring(login_page)  # printing <Element html at 0x7f19cb242e...>
    auth = html.xpath('//input[@name="auth"]')  # printing [<InputElement 7fb0971e9f1...>]
print auth
but the above code prints this: `[<InputElement 7fb0971e9f18 name='auth'
type='hidden'>]`. So how can I get the auth value, with the HTML entities
decoded? The input section looks like this:
<input name="auth" type="hidden" value="ADPifNsidn-P0G6WmiMMeJbjEhnhIvZCNg7Fk11NUxc0DyYWzrH-xk5ydV.85WCzy">
Thank you very much.
Answer: If you'd like to retrieve the `value` attribute of that element, use
auth = html.xpath('//input[@name="auth"]/@value')
there is no need to decode anything, entities are expanded automatically when
lxml parses HTML, and therefore the output will be
$ python sample.py
['AmmyYuqCDmZRcSs6MaQi2tKhzZiyAX0eSERKqTi3pLB5pdceB726lx7jhXU2MGDN6']
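Note that `xpath()` returns a list, so if you only need the single value you would take the first element (a sketch, assuming the element is present in the page):
    auth_value = html.xpath('//input[@name="auth"]/@value')[0]
    print auth_value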
|
Linear Regression in Python
Question: I am brand new to programming and am taking a course in Python. I was asked
to do linear regression on a data set that my professor gave out. Below is the
program I have written (it doesn't work).
    from math import *
    f = open("data_setshort.csv", "r")
    data = f.readlines()
    f.close()
    xvalues = []; yvalues = []
    for line in data:
        x, y = line.strip().split(",")
        x = float(x.strip())
        y = float(y.strip())
        xvalues.append(x)
        yvalues.append(y)
    def regression(x,y):
        n = len(x)
        X = sum(x)
        Y = sum(y)
        for i in x:
            A = sum(i**2)
            return A
        for i in x:
            for j in y:
                C = sum(x*y)
                return C
            return C
        D = (X**2)-nA
        m = (XY - nC)/D
        b = (CX - AY)/D
        return m,b
print "xvalues:", xvalues
print "yvalues:", yvalues
regression(xvalues,yvalues)
I am getting an error that says: line 23, in regression, A = sum(i**2).
TypeError: 'float' object is not iterable.
I need to eventually create a plot for this data set (which I know how to do)
and for the line defined by the regression. But for now I am trying to do
linear regression in python.
Answer: You can't sum over a single float, but you can sum over lists. E.g. you
probably mean `A = sum([xi**2 for xi in x])` to calculate `Sum of each element
in x to the power of 2`. You also have various `return` statements in your
code that don't really make any sense and can probably be removed completely,
e.g. `return C` after the loop. Additionally, multiplication of two variables
`a` and `b` can only be done by using `a*b` in python. Simply writing `ab` is
not possible and will instead be regarded as a single variable with name "ab".
The corrected code could look like this:
    def regression(x,y):
        n = len(x)
        X = sum(x)
        Y = sum(y)
        A = sum([xi**2 for xi in x])
        C = sum([xi*yi for xi, yi in zip(x,y)])
        D = X**2 - n*A
        m = (X*Y - n*C) / float(D)
        b = (C*X - A*Y) / float(D)
        return (m, b)
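As a quick sanity check (an illustrative data set, not the one from the assignment): points lying exactly on the line y = 2x + 1 should give back a slope of 2.0 and an intercept of 1.0:
    xvalues = [0.0, 1.0, 2.0, 3.0]
    yvalues = [1.0, 3.0, 5.0, 7.0]
    print regression(xvalues, yvalues)   # -> (2.0, 1.0)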
|
Python Flask SQL Register Login Page "hash must be unicode or bytes, not long"
Question: I'm trying to create a login/register page. My register page works, and I see
the information and hashed passwords added. When I try to log in, I get "hash
must be unicode or bytes, not long" flashed. Please help!
    @app.route('/login/', methods=['GET','POST'])
    def login():
        try:
            c,conn = connection()
            if request.method == 'POST':
                data = c.execute("SELECT * FROM users WHERE username = %s",
                                 thwart(request.form['username']))
                if sha256_crypt.verify(request.form['password'], data):
                    session['logged_in'] = True
                    session['username'] = request.form['username']
                    flash('You are now logged in.'+str(session['username']))
                    return redirect(url_for('dashboard'))
                else:
                    error = 'Invalid credentials. Try again'
            return render_template('login.html', error=error)
        except Exception, e:
            flash(e)
------------------------------------------------------------------
    import MySQLdb

    def connection():
        conn = MySQLdb.connect(host="localhost",
                               user = "root",
                               passwd = "julie774",
                               db = "PYTHONTUT")
        c = conn.cursor()
        return c, conn
Answer: > data = c.execute("SELECT * FROM users WHERE username = %s",
> thwart(request.form['username']))
`cursor.execute` just executes the query and returns the number of affected
rows (see the pydoc of `cursor.execute`). Thus in your `data` variable you have
the number of found rows.
Instead you have to fetch the data from the cursor. Also, since you are
requesting all the columns from users (`*`), you will have to extract only a
particular column (the index of it; see the end note).
    c.execute("SELECT password FROM users WHERE username = %s",
              thwart(request.form['username']))
    data = c.fetchone()
    # c.fetchone() returns None if no row has been found
    if sha256_crypt.verify(request.form['password'], data[0]):
        ...
In your example you are connecting to the database without specifying the type
of the cursor, thus `c.fetchone()` will return a tuple (for example `(1L, 'John
Doe', '392347')`). To select a particular column you then have to use a
numerical index: `data[1]` retrieves `'John Doe'`.
If you want a named dictionary instead, you will have to specify it when
connecting to the db.
    conn = MySQLdb.connect(host="localhost",
                           user = "root",
                           passwd = "***",
                           db = "PYTHONTUT",
                           cursorclass=MySQLdb.cursors.DictCursor
                           )
Then `c.fetchone()` will return a `dict` instead (e.g. `{'id': 1L, 'name':
'John Doe', 'password': '392347'}`) so you can use more readable
`data['name']` etc.
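Putting the pieces together, the login check from the question would then look roughly like this (a sketch based on the answer above, using the default tuple cursor and selecting only the password column):
    c, conn = connection()
    c.execute("SELECT password FROM users WHERE username = %s",
              thwart(request.form['username']))
    data = c.fetchone()
    if data is not None and sha256_crypt.verify(request.form['password'], data[0]):
        session['logged_in'] = True
        session['username'] = request.form['username']
        return redirect(url_for('dashboard'))
    error = 'Invalid credentials. Try again'
    return render_template('login.html', error=error)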
|
How to check if a date has Passed in Python (Simply)
Question: I have looked around to see if I can find a simple method in Python to find
out if a date has passed.
For example: if today's date is `01/05/2015` and the date `30/04/2015` is in-
putted into Python, it would return True, to say the date has passed.
This needs to be as simple and efficient as possible.
Thanks for any help.
Answer: You may use `datetime`: first parse the string into a date, then you can compare.
    import datetime

    d1 = datetime.datetime.strptime('01/05/2015', "%d/%m/%Y").date()   # the reference date ("today" in the example)
    d2 = datetime.datetime.strptime('30/04/2015', "%d/%m/%Y").date()   # the inputted date
    d2 < d1   # True -> d2 has passed
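If the reference date is literally "today" rather than a hard-coded string, `date.today()` keeps it simple (a sketch):
    import datetime

    entered = datetime.datetime.strptime('30/04/2015', "%d/%m/%Y").date()
    print(entered < datetime.date.today())   # True once 30/04/2015 is in the past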
|