Python Numba jit NotImplementedError list comprehension
Question: I want to speed up the calculation of a formula executing a list comprehension
with Numba.
from numba import jit
# General function to generate overlapping windows from a dataframe
@jit
def overlapping_windows(index, wl=256, noverlap=128):
l = len(index)
res = [[s,s+wl] for s in xrange(0, l, noverlap) if s+wl < l]
return res
overlapping_windows([1,2,3,4,5,6,7,8,9,10],4,2)
However I get a NotImplementedError. Not sure why.
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-45-ce0579185abe> in <module>()
6 return res
7
----> 8 overlapping_windows([1,2,3,4,5,6,7,8,9,10],4,2)
~/anaconda/lib/python2.7/site-packages/numba/dispatcher.pyc in _compile_and_call(self, *args, **kws)
123 assert not kws
124 sig = tuple([typeof_pyval(a) for a in args])
--> 125 self.jit(sig)
126 return self(*args, **kws)
127
~/anaconda/lib/python2.7/site-packages/numba/dispatcher.pyc in jit(self, sig, **kws)
118 """Alias of compile(sig, **kws)
119 """
--> 120 return self.compile(sig, **kws)
121
122 def _compile_and_call(self, *args, **kws):
~/anaconda/lib/python2.7/site-packages/numba/dispatcher.pyc in compile(self, sig, locals, **targetoptions)
106 cres = compiler.compile_extra(typingctx, targetctx, self.py_func,
107 args=args, return_type=return_type,
--> 108 flags=flags, locals=locs)
109
110 # Check typing error if object mode is used
~/anaconda/lib/python2.7/site-packages/numba/compiler.pyc in compile_extra(typingctx, targetctx, func, args, return_type, flags, locals)
85 Use ``None`` to indicate
86 """
---> 87 bc = bytecode.ByteCode(func=func)
88 if config.DEBUG:
89 print(bc.dump())
~/anaconda/lib/python2.7/site-packages/numba/bytecode.pyc in __init__(self, func)
275 raise ByteCodeSupportError("does not support cellvars")
276
--> 277 table = utils.SortedMap(ByteCodeIter(code))
278 labels = set(dis.findlabels(code.co_code))
279 labels.add(0)
~/anaconda/lib/python2.7/site-packages/numba/utils.pyc in __init__(self, seq)
44 self._values = []
45 self._index = {}
---> 46 for i, (k, v) in enumerate(sorted(seq)):
47 self._index[k] = i
48 self._values.append((k, v))
~/anaconda/lib/python2.7/site-packages/numba/bytecode.pyc in next(self)
195 ts = "offset=%d opcode=%x opname=%s"
196 tv = offset, opcode, dis.opname[opcode]
--> 197 raise NotImplementedError(ts % tv)
198 if info.argsize:
199 arg = self.read_arg(info.argsize)
NotImplementedError: offset=66 opcode=5e opname=LIST_APPEND
Answer: It most likely means that the version of `numba` you are using does not
support functions with list comprehensions.
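If upgrading Numba is not an option, one workaround is to drop the comprehension and
fill a preallocated NumPy array with an explicit loop; a rough sketch (it returns an
(n, 2) array of window bounds instead of a list of lists; recent Numba releases also
compile list comprehensions directly, so upgrading may be the simpler fix):
import numpy as np
from numba import jit

@jit
def overlapping_windows(index, wl=256, noverlap=128):
    l = len(index)
    # number of start positions s (multiples of noverlap) with s + wl < l
    n = (l - wl + noverlap - 1) // noverlap if l > wl else 0
    res = np.empty((n, 2), dtype=np.int64)
    i = 0
    for s in range(0, l, noverlap):
        if s + wl < l:
            res[i, 0] = s
            res[i, 1] = s + wl
            i += 1
    return res

print overlapping_windows(np.arange(1, 11), 4, 2)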
|
ReactorNotRestartable error
Question: I have a tool in which I am implementing UPnP discovery of devices connected to the
network.
For that I have written a script and used the datagram class in it.
Implementation: whenever the scan button is pressed on the tool, it runs that UPnP
script and lists the devices in the box created in the tool.
This was working fine.
But when I press the scan button again, it gives me the following error:
Traceback (most recent call last):
File "tool\ui\main.py", line 508, in updateDevices
upnp_script.main("server", localHostAddress)
File "tool\ui\upnp_script.py", line 90, in main
reactor.run()
File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1191, in run
self.startRunning(installSignalHandlers=installSignalHandlers)
File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1171, in startRunning
ReactorBase.startRunning(self)
File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 683, in startRunning
raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable
Main function of upnp script:
def main(mode, iface):
klass = Server if mode == 'server' else Client
obj = klass
obj(iface)
reactor.run()
There is server class which is sending M-search command(upnp) for discovering
devices.
MS = 'M-SEARCH * HTTP/1.1\r\nHOST: %s:%d\r\nMAN: "ssdp:discover"\r\nMX: 2\r\nST: ssdp:all\r\n\r\n' % (SSDP_ADDR, SSDP_PORT)
In the server class constructor, after sending the M-SEARCH I am stopping the reactor:
reactor.callLater(10, reactor.stop)
From Google I found that we cannot restart a reactor because it is a limitation of
Twisted:
http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#WhycanttheTwistedsreactorberestarted
Please guide me on how I can modify my code so that I am able to scan devices
more than once and don't get this "reactor not restartable" error.
Answer: In response to "Please guide me on how I can modify my code...", you haven't
provided enough code for me to know how to specifically guide you; I would
need to understand the (Twisted part of the) logic around your scan/search.
If I were to offer a generic design/pattern/mental-model for the "twisted
reactor" though, I would say **think of it as your program's main loop**.
(Thinking about the `reactor` that way is what makes the problem obvious to me,
anyway...)
I.E. most long running programs have a form something like
def main():
while(True):
check_and_update_some_stuff()
sleep 10
That same code in twisted is more like:
def main():
# the LoopingCall adds the given function to the reactor loop
l = task.LoopingCall(check_and_update_some_stuff)
l.start(10.0)
reactor.run() # <--- this is the endless while loop
If you think of the reactor as "the endless loop that makes up the `main()` of
my program" then you'll understand why no-one is bothering to add support for
"restarting" the reactor. Why would you want to restart an endless loop?
Instead of stopping the core of your program, you should instead only
surgically stop the task inside that is complete, leaving the main loop
untouched.
You seem to be implying that the current code will keep sending M-SEARCHes
endlessly while the reactor is running. So change your sending code so it stops
repeating the "send" (I can't tell you exactly how to do this because you didn't
provide that code, but, for instance, a `LoopingCall` can be turned off by calling
its `.stop` method).
Runnable example as follows:
#!/usr/bin/python
from twisted.internet import task
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ServerFactory
class PollingIOThingy(object):
def __init__(self):
self.sendingcallback = None # Note I'm pushing sendToAll into here in main()
self.l = None # Also being pushed in from main()
self.iotries = 0
def pollingtry(self):
self.iotries += 1
if self.iotries > 5:
print "stoping this task"
self.l.stop()
return()
print "Polling runs: " + str(self.iotries)
if self.sendingcallback:
self.sendingcallback("Polling runs: " + str(self.iotries) + "\n")
class MyClientConnections(Protocol):
def connectionMade(self):
print "Got new client!"
self.factory.clients.append(self)
def connectionLost(self, reason):
print "Lost a client!"
self.factory.clients.remove(self)
class MyServerFactory(ServerFactory):
protocol = MyClientConnections
def __init__(self):
self.clients = []
def sendToAll(self, message):
for c in self.clients:
c.transport.write(message)
# Normally I would define a class of ServerFactory here but I'm going to
# hack it into main() as they do in the twisted chat, to make things shorter
def main():
client_connection_factory = MyServerFactory()
polling_stuff = PollingIOThingy()
# the following line is what this example is all about:
polling_stuff.sendingcallback = client_connection_factory.sendToAll
# push the client connections send def into my polling class
# if you want to run something ever second (instead of 1 second after
# the end of your last code run, which could vary) do:
l = task.LoopingCall(polling_stuff.pollingtry)
polling_stuff.l = l
l.start(1.0)
# from: https://twistedmatrix.com/documents/12.3.0/core/howto/time.html
reactor.listenTCP(5000, client_connection_factory)
reactor.run()
if __name__ == '__main__':
main()
This script has extra cruft in it that you might not care about, so just focus
on the `self.l.stop()` in `PollingIOThingy`'s `pollingtry` method and the
`l`-related stuff in `main()`, which illustrate the point.
(this code comes from SO: [Persistent connection in
twisted](http://stackoverflow.com/q/23265609/3334178) check that question if
you want to know what the extra bits are about)
|
Python: Replacing a newline with a space between multiline comments
Question: I am relatively new to python and I need to print the multiline comments used
in the C program. I have a test.c file which is as below:
/* print multiline
comments */
I tried the following python code to parse the C code and print the multiline
comments
import re
fileopen = open('test.c', 'rw')
for var in fileopen:
if var.startswith("/*"):
var1 = re.sub(r'\n', " ", var)
var1 = var.rstrip()
print var1
The output I get is:
/* print multiline
The second line of the comment does not get printed, even though I substitute the
newline with a space. Please help me in this regard.
Answer: If your only requirement is finding comments that span multiple lines, that's
actually quite easy. Like so:
for match in re.finditer(r"\/\*(.*\n.*)\*\/", code, re.MULTILINE):
print match.group(1)
The important part is:
\/\*(.*\n.*)\*\/
Which finds a literal `/*`, any number of characters, a newline, any number of
characters and a literal `*/`, and captures the parts between the comment
delimiters.
The explicit `\n` in the pattern is what enforces that it _must_ be a multiline
comment: by default `.` does not match a newline, so each `.*` stays on its own
line. (The
[`re.MULTILINE`](https://docs.python.org/2/library/re.html#re.MULTILINE) flag
only changes how `^` and `$` behave, so it is not strictly needed here.)
The [full code can be run on codepad.org](http://codepad.org/NscedCDT):
code= """/* print multiline
comments */
// One line comment
/* Another one line comment */
/* Multiline
comment */
"""
import re
for match in re.finditer(r"\/\*(.*\n.*)\*\/", code, re.MULTILINE):
print match.group(1)
Which gives:
print multiline
comments
Multiline
comment
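If a comment can span more than two lines, a variant using re.DOTALL (so that `.`
also matches newlines) with a non-greedy group covers that case as well; this small
sketch reuses the `code` string from the example above:
# re.DOTALL lets "." match newlines; the lazy ".*?" stops at the first "*/"
for match in re.finditer(r"/\*(.*?)\*/", code, re.DOTALL):
    text = match.group(1)
    if "\n" in text:                  # keep only the comments that span lines
        print " ".join(text.split())  # collapse newlines and runs of spaces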
|
A local or global name can not be found error
Question: I'm trying to make a simple rock paper scissors game, and I get an error
in the line `guess = input(...)`. It says I need to define the function or variable
before I use it in this way, and I am unsure of how I can do that. This is
using Python/JES programming.
#import random module
import random
#main function
def main():
#intro message
print("Let's play 'Rock, Paper, Scissors'!")
#call the user's guess function
number = user_guess()
#call the computer's number function
num = computer_number()
#call the results function
results(num, number)
#computer_number function
def computer_number():
#get a random number in the range of 1 through 3
num = random.randrange(1,4)
#if/elif statement
if num == 1:
print("Computer chooses rock")
elif num == 2:
print("Computer chooses paper")
elif num == 3:
print("Computer chooses scissors")
#return the number
return num
#user_guess function
def user_guess():
guess = input ("Choose 'rock', 'paper', or 'scissors' by typing that word. ")
#while guess == 'paper' or guess == 'rock' or guess == 'scissors':
if is_valid_guess(guess):
#if/elif statement
#assign 1 to rock
if guess == 'rock':
number = 1
#assign 2 to paper
elif guess == 'paper':
number = 2
#assign 3 to scissors
elif guess == 'scissors':
number = 3
return number
else:
print('That response is invalid.')
return user_guess()
def is_valid_guess(guess):
if guess == 'rock' or guess == 'paper' or guess == 'scissors':
status = True
else:
status = False
return status
def restart():
answer = input("Would you like to play again? Enter 'y' for yes or \
'n' for no: ")
#if/elif statement
if answer == 'y':
main()
elif answer == 'n':
print("Goodbye!")
else:
print("Please enter only 'y' or 'n'!")
#call restart
restart()
#results function
def results(num, number):
#find the difference in the two numbers
difference = num - number
#if/elif statement
if difference == 0:
print("TIE!")
#call restart
restart()
elif difference % 3 == 1:
print("I'm sorry! You lost :(")
#call restart
restart()
elif difference % 3 == 2:
print("Congratulations! You won :)")
#call restart
restart()
main()
Answer: Using raw_input instead of input seems to solve the problem.
guess = raw_input ("Choose 'rock', 'paper', or 'scissors' by typing that word. ")
and also in
answer = raw_input("Would you like to play again? Enter 'y' for yes or 'n' for no: ")
I'm using Python 2.7.x
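The reason, assuming Python 2 (which JES is based on): input() evaluates the typed
text as a Python expression, so entering rock is looked up as an undefined name and
fails, while raw_input() simply returns the typed text as a string. A small shim that
works under both Python 2 and 3:
try:
    read_input = raw_input   # Python 2: returns the typed text as a plain string
except NameError:
    read_input = input       # Python 3: input already returns a string
guess = read_input("Choose 'rock', 'paper', or 'scissors' by typing that word. ")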
|
Specify the development version of a python module
Question: I want to add a new class to [PICOS, a python
module](http://picos.zib.de/v100/intro.html). I installed it the normal way a
long time ago. But now I have downloaded the source and I am trying to make
some changes.
The problem is that I cannot manage to ask python to load the module from the
development folder and not the normal folder.
reload(picos.constraint)
Out[22]: <module 'picos.constraint' from '/home/optimi/bzffourn/python/lib/python2.7/site-packages/picos/constraint.pyc'>
while the source code is here:
/home/optimi/bzffourn/ZIB/python_scripts/pyMathProg/picos
So the changes I make are not considered.
Answer: This should help you do it: [Override Python import
order](http://www.hasenkopf2000.net/wiki/page/how-override-pythons-module-import-order/).
Just change your import to this:
import sys
sys.path.insert(0,"/home/optimi/bzffourn/ZIB/python_scripts/pyMathProg/picos")
import picos.constraint
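Note that if picos was already imported in the session (which the reload in the
question suggests), inserting the path alone is not enough, because the cached
modules in sys.modules keep winning; a sketch that also drops the cached entries:
import sys
sys.path.insert(0, "/home/optimi/bzffourn/ZIB/python_scripts/pyMathProg/picos")
# forget any previously imported picos modules so the next import re-resolves them
for name in list(sys.modules):
    if name == 'picos' or name.startswith('picos.'):
        del sys.modules[name]
import picos.constraint
print picos.constraint.__file__   # should now point into the development folder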
|
python graph-tool library with graph database
Question: I would like to use some of the [graph-tool](http://graph-tool.skewed.de/)
functionality with data in a graph database (say neo4j, but any Blueprints
enabled graph DB would be good, see [Tinkerpop](http://www.tinkerpop.com/)
project).
I'm aware of (and have dabbled with some of) py2neo and would like to
investigate [Bulbs](http://bulbflow.com/overview/) as a way to access the
database and project like
[pyBlueprints](https://github.com/escalant3/pyblueprints).
My question is: How do I use graph-tool functions on data in a graph database
(such as neo4j) without exporting the whole graph to graphML (or one of the
existing graph-tool import formats), etc.?
I would like it to be more dynamic than `run query, find a subset of a graph,
export, process with graph-tool, put data back into graph`
I'm aware that Blueprints offers a "to GraphML reader/writer", is this the
solution?
Answer: I think that the workflow you present is probably the best and only one
available to you. In TinkerPop terms, I would say that the workflow would
be, more specifically:
1. run query - Use the [Gremlin Console](https://github.com/tinkerpop/gremlin/wiki/Getting-Started#using-the-gremlin-console)
2. find a subset of a graph - Write your traversal in the console and dump the results of it into a subgraph. Use an in-memory TinkerGraph to store that subgraph - read more [here](http://gremlindocs.com/#recipes/subgraphing).
3. export - call [saveGraphML](http://gremlindocs.com/#methods/graph-save) on your subgraph instance
4. process with graph-tool - import the GraphML into graph-tool and do what you need to do with it (a short sketch of this step follows the list)
5. put data back into graph - I don't know graph-tool and its capabilities, but the Gremlin Console let's you work with data in a variety of ways that makes it pretty easy to shovel data around - read more about that [here](http://thinkaurelius.com/2013/02/04/polyglot-persistence-and-query-with-gremlin/).
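On the graph-tool side of step 4, loading the exported file is a one-liner; a minimal
sketch (the file names are hypothetical):
import graph_tool.all as gt
g = gt.load_graph("subgraph.graphml")   # the format is inferred from the extension
print g.num_vertices(), g.num_edges()
# ... run whatever graph-tool analysis you need on g ...
g.save("subgraph_processed.graphml")    # write results back out for re-import into the DB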
|
list index out of range when trying to access a specific list item
Question: I'm very new to Python and can't figure out why this simple code doesn't work:
I have a string (a message like `handposition/506.83047/388.1101/703.2166`
from my websocket client) with `/` as separator and want to split it into a
list:
coordinates = msg.split('/')
I can print the list with:
print(coordinates)
and get this:
['handposition', '495.0279', '443.24762', '976.6502']
Everything works until I try to access the second element in the list with:
print(coordinates[1])
I get this error message:
File "mouse_server.py", line 19, in onMessage
print(coordinates[2])
exceptions.IndexError: list index out of range
What's wrong with `print(coordinates[1])`?
# Update 1
Here is the full code:
from autobahn.twisted.websocket import WebSocketClientProtocol, \
WebSocketClientFactory
from pymouse import PyMouse
mo = PyMouse()
class MyClientProtocol(WebSocketClientProtocol):
def onConnect(self, response):
print("Server connected: {0}".format(response.peer))
def onOpen(self):
print("WebSocket connection open.")
def onMessage(self, msg, isBinary):
coordinates = msg.split('/')
print(coordinates)
print(coordinates[1])
def onClose(self, wasClean, code, reason):
print("WebSocket connection closed: {0}".format(reason))
if __name__ == '__main__':
import sys
from twisted.python import log
from twisted.internet import reactor
log.startLogging(sys.stdout)
factory = WebSocketClientFactory("ws://localhost:4343", debug = False)
factory.protocol = MyClientProtocol
reactor.connectTCP("127.0.0.1", 4343, factory)
reactor.run()
Answer: Most likely, if you are running this on a bunch of lines, there might be one
input line that does not have all the expected fields. You can print the whole
list before accessing that element and see the last one that appears before
the exception.
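A defensive version of onMessage that skips malformed messages instead of crashing
might look like this (the expected field count of 4 is an assumption based on the
example message):
def onMessage(self, msg, isBinary):
    coordinates = msg.split('/')
    if len(coordinates) < 4:
        # malformed or partial message: report it and skip instead of crashing
        print("skipping unexpected message: {0!r}".format(msg))
        return
    print(coordinates[1])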
|
how to find classpath in project written in netbeans to use in jpype
Question: I have a public class Stm:
package stm;
import zemberek.morphology.apps.TurkishMorphParser;
import zemberek.morphology.parser.MorphParse;
import java.io.IOException;
import java.util.List;
public class Stm {
TurkishMorphParser parser;
public Stm(TurkishMorphParser parser) {
this.parser = parser;
}
public void do_stm(String word) {
System.out.println("Word = " + word);
List<MorphParse> parses = parser.parse(word);
for (MorphParse parse : parses) {
System.out.println(parse.getStems());
}
}
public static void main(String[] args) throws IOException {
TurkishMorphParser parser = TurkishMorphParser.createWithDefaults();
new Stm(parser).do_stm("ankaraya");
}
}
and aa.py as below :
import jpype
from jpype import *
import os
classpath = "/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/jars/antlr-4.2.2-complete.jar:/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/jars/guava-15.0.jar:/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/jars/zemberek-core-0.9.0.jar:/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/jars/zemberek-lm-0.9.0.jar:/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/jars/zemberek-morphology-0.9.0.jar:/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/jars/zemberek-tokenization-0.9.0-2.jar:/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/build/classes/stm/"
startJVM(getDefaultJVMPath(), "-ea", "-Djava.class.path=%s" % classpath)
A = JClass('Stm')
a = A()
jpype.shutdownJVM()
running aa.py , I get this error:
Traceback (most recent call last):
File "aa.py", line 15, in <module>
A = JClass('Stm')
File "/usr/lib/python2.7/dist-packages/jpype/_jclass.py", line 54, in JClass
raise _RUNTIMEEXCEPTION.PYEXC("Class %s not found" % name)
jpype._jexception.ExceptionPyRaisable: java.lang.Exception: Class Stm not found
I can call a normal class from Python, but I have a problem with this project, which I
wrote in NetBeans and which imports some jar files. The jar files are located in
/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/jars/
I know the problem is with the classpath! Considering the jar files I used, how
should I fill in the classpath part?
Answer: The last entry on your classpath should be
`/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/build/classes` and
you need to create the class like `A = JClass('stm.Stm')`. Perhaps take a look at how
classes are arranged into packages. For example:
<http://docs.oracle.com/javase/tutorial/java/package/packages.html>
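Putting both fixes together, a hedged sketch of the relevant part of aa.py (the jars
string below stands for the colon-separated jar paths from the question):
from jpype import startJVM, getDefaultJVMPath, JClass, shutdownJVM
jars = "...the colon-separated jar paths from the question..."
# the last classpath entry is the classes root, not the stm/ package directory itself
classpath = jars + ":/home/jeren/Desktop/Project/TweetParse/Parse_Tweets/stm/build/classes"
startJVM(getDefaultJVMPath(), "-ea", "-Djava.class.path=%s" % classpath)
Stm = JClass('stm.Stm')   # fully qualified name: package.ClassName
# note: Stm's constructor takes a TurkishMorphParser, so Stm() with no arguments would fail
shutdownJVM()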
|
Closing 2nd window in wxpython
Question: I am using the full script below to process QuickTime files. While a file is
processing I open a 2nd window, which stays on top of all other windows and
informs the user "Processing files. Please wait" (the Settings class at the
bottom of the code), until the processing is finished; it should then close the
second window and return to the main window. At the moment it runs, but when
the file finishes processing, instead of closing the always-on-top window it
gives me the error:
File "/Users/user1/Desktop/Python/File_Prrep.py", line 501, in __init__
self.Close()
File "/usr/local/lib/wxPython-3.0.0.0/lib/python2.7/site-packages/wx-3.0-osx_cocoa/wx/_core.py", line 9169, in Close
return _core_.Window_Close(*args, **kwargs)
TypeError: in method 'Window_Close', expected argument 1 of type 'wxWindow *'
I am not sure how to fix it. Here's my full code:
import wx
import os
import os.path
import inspect
import csv
import subprocess
import sys
import shutil
import re
import urllib2
import threading
import wx.lib.agw.pybusyinfo as PBI
from subprocess import Popen, PIPE
class ScrolledWindow(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, parent, id, title, size=(510, 370), style=wx.DEFAULT_FRAME_STYLE & ~ (wx.RESIZE_BORDER |
wx.RESIZE_BOX |
wx.MAXIMIZE_BOX))
self.tabbed = wx.Notebook(self, -1, style=(wx.NB_TOP))
run_params = {}
run_params["dropList1"] = ['HD 1920x1080', 'PAL 4x3', 'PAL 16x9', 'NTSC 4x3', 'NTSC 16x9']
run_params["dropList2"] = ['Progressive', 'Interlaced']
run_params["running"] = False
run_params["1stRun"] = True
self.CreateStatusBar()
menuBar = wx.MenuBar()
menu = wx.Menu()
self.SetMenuBar(menuBar)
panel = wx.Panel(self, -1)
self.Centre()
self.Show()
self.filePrep = PrepFile(self.tabbed, run_params)
self.settings = Settings('settings', run_params)
self.tabbed.AddPage(self.filePrep, "File Prep")
class PrepFile(wx.Panel):
def __init__(self, parent, run_params):
wx.Panel.__init__(self, parent)
self.run_params = run_params
self.fieldChoice = 'Progressive'
self.formatOption = 'HD 1920x1080'
outputOption = '''Format'''
wx.StaticText(self, -1, outputOption, (33, 22), style=wx.ALIGN_CENTRE)
self.choice1 = wx.Choice(self, pos=(35, 40), choices=self.run_params["dropList1"])
self.choice1.SetSelection(0)
self.choice1.SetFocus()
self.choice1.Bind(wx.EVT_CHOICE, self.selectOption)
fieldSetText = '''Fields'''
wx.StaticText(self, -1, fieldSetText, (33, 82), style=wx.ALIGN_CENTRE)
self.choice2 = wx.Choice(self, pos=(35, 100), choices=self.run_params["dropList2"])
self.choice2.SetSelection(0)
self.choice2.SetFocus()
self.choice2.Bind(wx.EVT_CHOICE, self.fieldSet)
self.buttonClose = wx.Button(self, -1, "Quit", pos=(195, 250))
self.buttonClose.Bind(wx.EVT_BUTTON, self.OnClose)
greyBox = wx.StaticBox(self, -1, '', pos=(20, 15), size=(235, 130))
outputtxt3 = '''Drag and Drop Quicktimes'''
wx.StaticText(self, -1, outputtxt3, pos=(35, 170), style=wx.ALIGN_CENTRE)
self.drop_target = MyFileDropTarget(self)
self.SetDropTarget(self.drop_target)
self.tc_files = wx.TextCtrl(self, wx.ID_ANY, pos=(38, 190), size=(200, 25))
self.buttonSubmit = wx.Button(self, -1, "Submit", pos=(250,190))
self.buttonSubmit.Bind(wx.EVT_BUTTON, self.submit)
def EvtRadioBox(self, event):
self.mode = (event.GetString())
def selectOption(self, e):
self.formatOption = self.choice1.GetStringSelection()
def fieldSet(self, e):
self.fieldChoice = self.choice2.GetStringSelection()
def setSubmissionDrop(self, dropFiles):
"""Called by the FileDropTarget when files are dropped"""
self.tc_files.SetValue(','.join(dropFiles))
self.selectedFiles = dropFiles
print self.selectedFiles
def submit(self, edit):
self.run_params["running"] = True
self.run_params["1stRun"] = False
Settings(None, self.run_params)
for item in self.selectedFiles:
if os.path.isdir(item):
for root, dirs, files in os.walk(item):
for file1 in files:
if file1.endswith(".mov"):
currentFile = os.path.join(root, file1)
self.jesFile(currentFile)
else:
if item.endswith(".mov"):
self.jesFile(item)
self.run_params["running"] = False
Settings(None, self.run_params)
def OnClose(self, e):
CloseApp()
def jesFile(self, currentFile):
if self.fieldChoice == "Interlaced":
if self.formatOption == 'HD 1920x1080':
self.preset = 'HD 1080i'
elif self.formatOption == 'PAL 4x3':
self.preset = 'PAL 4x3i'
elif self.formatOption == 'PAL 16x9':
self.preset = 'PAL 16x9i'
elif self.formatOption == 'NTSC 4x3':
self.preset = 'NTSC 4x3i'
elif self.formatOption == 'NTSC 16x9':
self.preset = 'NTSC 16x9i'
else:
if self.formatOption == 'HD 1920x1080':
self.preset = 'HD 1080p'
elif self.formatOption == 'PAL 4x3':
self.preset = 'PAL 4x3p'
elif self.formatOption == 'PAL 16x9':
self.preset = 'PAL 16x9p'
elif self.formatOption == 'NTSC 4x3':
self.preset = 'NTSC 4x3p'
elif self.formatOption == 'NTSC 16x9':
self.preset = 'NTSC 16x9p'
print self.preset
jesCommand = './JES/JES\ Extensifier.app/Contents/MacOS/JES\ Extensifier -p ' + '"' + self.preset + '"' + ' ' + '"' + currentFile + '"'
print jesCommand
self.process1 = Popen(jesCommand, shell=True, stdin=PIPE)
self.assignAudio(currentFile)
def assignAudio(self, currentFile):
changeScript = '''
on run argv
repeat with a in argv
set a's contents to a as POSIX file as alias
end repeat
open argv
end run
on open aa
set channel_layouts_map1 to {¬
{"Sound Track 1", "Sound Track 1", {"Left"}}, ¬
{"Sound Track 2", "Sound Track 2", {"Right"}}, ¬
{"Sound Track 3", "Sound Track 3", {"Center"}}, ¬
{"Sound Track 4", "Sound Track 4", {"LFE Screen"}}, ¬
{"Sound Track 5", "Sound Track 5", {"Left Surround"}}, ¬
{"Sound Track 6", "Sound Track 6", {"Right Surround"}}, ¬
{"Sound Track 7", "Sound Track 7", {"Left Total"}}, ¬
{"Sound Track 8", "Sound Track 8", {"Right Total"}} ¬
}
set channel_layouts_map2 to {¬
{"Sound Track 1", "Sound Track 1", {"Left"}}, ¬
{"Sound Track 2", "Sound Track 2", {"Right"}}, ¬
{"Sound Track 3", "Sound Track 3", {"Center"}}, ¬
{"Sound Track 4", "Sound Track 4", {"LFE Screen"}}, ¬
{"Sound Track 5", "Sound Track 5", {"Left Surround"}}, ¬
{"Sound Track 6", "Sound Track 6", {"Right Surround"}}, ¬
{"Sound Track 7", "Sound Track 7", {"Left Total", "Right Total"}} ¬
}
set channel_layouts_map3 to {¬
{"Sound Track", "Sound Track", {"Left", "Right"}} ¬
}
set channel_layouts_map4 to {¬
{"Sound Track 1", "Sound Track 1", {"Left"}}, ¬
{"Sound Track 2", "Sound Track 2", {"Right"}} ¬
}
repeat with a in aa
set f to a's POSIX path
set k to count_sound_tracks(f, {_close:false})
if k = 8 then
remap_audio_channels(f, channel_layouts_map1)
else if k = 7 then
remap_audio_channels(f, channel_layouts_map2)
else if k = 1 then
remap_audio_channels(f, channel_layouts_map3)
else if k = 2 then
remap_audio_channels(f, channel_layouts_map4)
else
-- ignore it (just close it)
close_document(f, {_save:false})
end if
end repeat
end open
on count_sound_tracks(f, {_close:_close})
tell application id "com.apple.quicktimeplayer" -- QuickTime Player 7 Pro
open (f as POSIX file)
tell (document 1 whose path = f)
repeat until exists
delay 0.2
end repeat
set k to count (tracks whose audio channel count > 0)
if _close then close
end tell
end tell
return k
end count_sound_tracks
on close_document(f, {_save:_save})
tell application id "com.apple.quicktimeplayer" -- QuickTime Player 7 Pro
tell (document 1 whose path = f)
if exists then
if _save and modified then save
close
end if
end tell
end tell
end close_document
on remap_audio_channels(f, channel_layouts_map)
script o
property map : channel_layouts_map
property pp : {}
property qq : {}
-- get name and id of sound tracks
tell application id "com.apple.quicktimeplayer" -- QuickTime Player 7 Pro
activate
open (f as POSIX file)
tell (document 1 whose path = f)
repeat until exists
delay 0.2
end repeat
tell (tracks whose audio channel count > 0)
set {pp, qq} to {name, id} -- name and id of sound tracks
end tell
end tell
end tell
-- remap audio channel layouts as specified
tell application "System Events"
tell (process 1 whose bundle identifier = "com.apple.quicktimeplayer")
-- open movie properties window
keystroke "j" using {command down}
tell (window 1 whose subrole = "AXDialog") -- properties for movie
repeat until exists
delay 0.2
end repeat
repeat with m in my map
set {trk, undef, layouts} to m
-- [TRK:
repeat 1 times
if trk's class = integer then
if trk < 1 or trk > (count my pp) then exit repeat -- TRK:
set trk to my pp's item trk
end if
tell scroll area 1
tell table 1
tell (row 1 whose text field 1's value = trk) -- target sound track whose name = trk
if not (exists) then exit repeat -- TRK:
select
end tell
end tell
end tell
tell tab group 1
click radio button 3 -- audio settings
tell scroll area 1
tell table 1 -- channel assignment table
set ix to count layouts
repeat with i from 1 to count rows
if i > ix then exit repeat
tell row i -- channel i
tell pop up button 1
click
tell menu 1 -- channel assignment menu
tell (menu item 1 whose title = layouts's item i)
if exists then click
end tell
end tell
end tell
end tell
end repeat
end tell
end tell
end tell
end repeat
-- /TRK:]
end repeat
-- close movie properties window
click (button 1 whose subrole = "AXCloseButton")
end tell
end tell
end tell
-- rename sound tracks as specified
tell application id "com.apple.quicktimeplayer"
set scale of document 1 to normal
tell document 1
repeat with m in my map
end repeat
if modified then save
close
end tell
end tell
end script
# tell o to run
run script o
end remap_audio_channels
on _index_of(xx, x) -- renamed _bsearch() v0.1
script o
property aa : xx
local i, j, k
if {x} is not in my aa then return 0
set i to 1
set j to count my aa
repeat while j > i
set k to (i + j) div 2
if {x} is in my aa's items i thru k then
set j to k
else
set i to k + 1
end if
end repeat
return i
end script
tell o to run
end _index_of'''
p = Popen(['osascript', '-'] + [currentFile], stdin=PIPE, stdout=PIPE, stderr=PIPE)
stdout, stderr = p.communicate(changeScript)
print (p.returncode, stdout, stderr)
print "Done"
class MyFileDropTarget(wx.FileDropTarget):
""""""
def __init__(self, window):
wx.FileDropTarget.__init__(self)
self.window = window
def OnDropFiles(self, x, y, filenames):
self.window.setSubmissionDrop(filenames)
class CloseApp(wx.Frame):
def __init__(e):
sys.exit(0)
class Settings(wx.Frame):
def __init__(self, parent, run_params):
self.run_params = run_params
if self.run_params["running"] == True:
wx.Frame.__init__(self, parent, -1, 'Please Wait', size=(350,150), pos=(35, 100), style=wx.STAY_ON_TOP | wx.DEFAULT_FRAME_STYLE)
wx.StaticText(self, -1, "Processing files. Please wait", style=wx.ALIGN_CENTRE)
self.Centre()
self.Show()
else:
if self.run_params["1stRun"] != True:
self.Close()
app = wx.App()
ScrolledWindow(None, -1, 'iTunes Quicktime File Prep')
app.MainLoop()
Answer: You are trying to close the settings window in the constructor. You can't do
that as the window does not even exist at that point.
Modify your settings class to _not_ do this:
class Settings(wx.Frame):
def __init__(self, parent, run_params):
self.run_params = run_params
if self.run_params["running"] == True:
wx.Frame.__init__(self, parent, -1, 'Please Wait', size=(350,150), pos=(35, 100), style=wx.STAY_ON_TOP | wx.DEFAULT_FRAME_STYLE)
wx.StaticText(self, -1, "Processing files. Please wait", style=wx.ALIGN_CENTRE)
self.Centre()
self.Show()
def OnClose(self):
if self.run_params["running"] == False:
self.Close()
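With that change, submit() in the question's PrepFile class would keep a reference to
the frame it opens and close that same instance once processing is done; a rough
sketch (note that because the processing runs on the GUI thread, the window may not
repaint until it finishes):
def submit(self, event):
    self.run_params["running"] = True
    wait_window = Settings(None, self.run_params)   # the frame is created and shown here
    # ... process the dropped files as before ...
    self.run_params["running"] = False
    wait_window.OnClose()                           # close the same frame instance afterwards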
|
Return list python
Question: Well, I have this code that should return me a list:
from pysnmp.entity import engine, config
from pysnmp import debug
from pysnmp.entity.rfc3413 import cmdrsp, context, ntforg
from pysnmp.carrier.asynsock.dgram import udp
from pysnmp.smi import builder
import threading
import collections
import time
MibObject = collections.namedtuple('MibObject', ['mibName',
'objectType', 'valueFunc'])
class Mib(object):
"""Stores the data we want to serve.
"""
def __init__(self):
self._lock = threading.RLock()
self._test_count = 0
self._test_get = 10
self._test_set = 0
def getTestDescription(self):
return "My Description"
def getTestCount(self):
with self._lock:
return self._test_count
def setTestCount(self, value):
with self._lock:
self._test_count = value
def getTestGet(self):
return self._test_get
def getTestSet(self):
return self._test_set
def setTestSet(self):
with self._lock:
self._test_set = value
class ListObject:
def __init__(self):
mib = Mib()
self.objects = [
MibObject('MY-MIB', 'testDescription', mib.getTestDescription),
MibObject('MY-MIB', 'testCount', mib.getTestCount),
MibObject('MY-MIB', 'testGet', mib.getTestGet),
MibObject('MY-MIB', 'testSet', mib.getTestSet)
]
def returnTest(self):
return ListObject()
class main ():
print ListObject()
but sometimes it just prints the object reference, like this:
<__main__.ListObject instance at 0x16917e8>
What am I doing wrong?
Answer: When you pass an object to Python's `print`, the printed string is given by its
class's `__str__` method.
`<__main__.ListObject instance at 0x16917e8>` is the string returned by the
default implementation of the `__str__` method. If you haven't overridden this
method for your class, the result is exactly what you would expect.
**EDIT:**
If what you want is to print `objects` when you write `print ListObject()` you
have to override `ListObject` `__str__(self)` method:
class ListObject:
def __str__(self):
return self.objects.__str__()
|
String Matching: Only first iteration match is caught
Question: I am attempting to create a small iterative string match function in Python. I
am getting stumped as the first set of matching strings is caught, but the
second set of matching strings is not. I added a string conversion to ensure
the objects were strings. It is also my understanding that the == operator
matches values, not objects. (Correct me if my terminology is wrong). Any help
would be appreciated.
#!/usr/bin/python
import re
NameLG_file = open("Name2.txt", "r")
NameSM_file = open("Inc2.txt", "r")
SMList = []
LGList = []
# Assign LG to List and Format
for a in NameLG_file:
a = a.strip('\n')
a = a.replace('\"', '')
a = str(a)
LGList.extend([a])
# Assign SM to list and format
for c in NameSM_file:
c = c.strip('\n')
c = str(c)
SMList.extend([c])
# Identify and list orphans.
for e in LGList:
for f in SMList:
if e == f:
print True
print e
print f
print ""
# break
else:
print False
print e
print f
print ""
NameLG_file.close()
NameSM_file.close()
Name2.txt contains
"teardown"
"Elite"
Binary
Inc2.txt contains
teardown
Elite
The output is:
True
teardown
teardown
False
teardown
Elite
False
Elite
teardown
False
Elite
Elite
False
Binary
teardown
False
Binary
Elite
As such I want the matching Elite strings to show True. Thanks in advance!
Answer: From my point of view, the problem is that your files use Windows line endings (\r\n).
So when you do a.strip('\n') the returned value is "Elite\r" and not "Elite".
The solution is to use strip() instead of strip('\n').
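Applied to the question's read loops, the fix might look like this (append is
equivalent to the extend([...]) used above):
for a in NameLG_file:
    a = a.strip().replace('"', '')   # strip() also removes the trailing '\r'
    LGList.append(a)
for c in NameSM_file:
    SMList.append(c.strip())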
|
Using exceptions to return exceptional values: is this good practice?
Question: Is it good Python practice to have a function return None most of the time,
while _exceptionally_ returning useful _values_ by _raising an exception where
the values are stored_?
I am a little uneasy with this because exceptions are most often used to
signal some kind of problem, with some attributes of the exception giving some
details about the problem. Here, I would like the exception to actually mean
"here is an exceptional _result_ of this function".
Using an exception for this is tempting because (1) this is only done in
exceptional circumstances and (2) this is efficient, in Python (more than an
`if … is not None:…`). On the other hand, the exception itself is not the sign
of an error of any sort, just a vehicle for the exceptionally returned values.
Is there any official recommendation against using exceptions for
exceptionally returning values from a function?
**PS** : Here is a use case:
* An object method updates the internal state of the object based on new data (it's a finite state automaton).
* At some point (usually after getting many data points), the method considers that some action must be taken (in my case: some data from the object should be stored in a database, and the object's state is reset to the initial state, where it is ready to get more data).
Thus, the sequence of events, for the method, is: get data, update state, get
data, update state,… ah! we reached a special state where information about
the object should be stored! reset the object's state and send the relevant
information out; get data, update state, get data, update state,… Thus, most
of the time, the function updates the internal state of the object and does
not return anything. Exceptionally, it must send important information about
the object.
Answer: Exceptions can model _any_ exception to the normal flow of the code. Errors
are the most common use-case, but anything where the _normal_ return type
doesn't really make sense can be seen as an opportunity to raise an exception
instead.
Raising an exception for non-error use-cases is _perfectly fine_ , if it
simplifies your flow.
Remember, the `Exception` object raised and caught, is itself a value.
Attaching information to that is a perfectly acceptable method to communicate
with a caller to signal a deviation from 'normal'.
I'd prefer an exception over a sentinel value like `None`; it makes the
handling explicit, and can actually lead to cleaner handling and better
feedback to a developer when they forget to handle the exceptional return
value. If your code normally returns a list, but in specific, exceptional
circumstances, you return `None`, _and the caller doesn't handle it_ , then
you get weird bugs down the line somewhere. `TypeError: 'NoneType' object is
not iterable` is a lot more cryptic than an explicit unhandled exception.
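As an illustration of this pattern for the FSM use case in the question, here is a
minimal sketch with hypothetical names (data_stream and store_in_database stand in
for whatever feeds and persists your data):
class FlushNeeded(Exception):
    """Raised when the automaton reaches the state where data must be stored."""
    def __init__(self, payload):
        super(FlushNeeded, self).__init__("automaton produced a result")
        self.payload = payload

class Automaton(object):
    def __init__(self):
        self.points = []
    def update(self, point):
        self.points.append(point)
        if len(self.points) >= 100:   # stand-in for the real "special state" test
            payload, self.points = self.points, []
            raise FlushNeeded(payload)

automaton = Automaton()
for point in data_stream:             # data_stream is assumed to exist
    try:
        automaton.update(point)
    except FlushNeeded as exc:
        store_in_database(exc.payload)   # hypothetical persistence function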
|
Error importing/installing PyQt
Question: I am having a ton of trouble with PyQt. I downloaded the binary installer,
made sure it was the right version, (4.15.5, 64-bit) and thought I was done.
Now, I have two problems, which totally stop me from using it with Python.
First of all, when I enter 'from pyqt4 import qtcore' (or something along
those lines, I'm not looking at the command right now) it returns
'ImportError: No module named "pyqt4"'. So I thought I might try making a form
in designer and using pyuic to convert it. Nope. Apparently, 'pyuic is not a
recognized command or file'. It seems like I did something wrong, but I've
reinstalled both Python and PyQt several times and spent hours searching the
web. What is happening?
PS, there is no 'bin' folder, I looked at that question too...
PPS, I'm running Python 3.3.3 on a Windows 7 machine.
Answer: On Windows, Python packages are installed to Python's installation directory
in Lib\site-packages (e.g. C:\Python33\Lib\site-packages\PyQt4). Scripts
install to the "Scripts" directory (e.g. C:\Python33\Scripts). The installer
should have also created a shortcut to Qt Designer in the Start menu.
Imports are case sensitive even on Windows:
from PyQt4 import QtCore
For the `pyuic` command, there's a batch file named pyuic4.bat in the package
directory. Personally I'd skip on this[1]. The actual script is
PyQt4\uic\pyuic.py. I'd use a Windows shortcut[2], or use an NTFS symlink[3]
created with cmd's [`mklink`](http://ss64.com/nt/mklink.html) or Python's
[`os.symlink`](https://docs.python.org/3/library/os.html#os.symlink). You
could also just copy the script, but that's a problem if the original gets
updated.
* * *
[1] A batch file's `Ctrl-C` handler is an annoying prompt that's only relevant
to actual batch processing.
[2] To use the shortcut like a normal command, just add the `.LNK` file
extension to the `PATHEXT` environment variable. Here's the way to create a
simple shortcut in Python (requires PyWin32):
import win32com.client
ws = win32com.client.Dispatch('wscript.shell')
shortcut = ws.CreateShortcut('SHORTCUT_NAME.lnk')
shortcut.TargetPath = r'"PATH\TO\TARGET"'
shortcut.Save()
[3] Creating symbolic links requires NT 6.0+ and
`SeCreateSymbolicLinkPrivilege`. Check `whoami /priv` to see whether your
account has the privilege. Accounts in the Administrators group have the
privilege, but require access token elevation if UAC is enabled. Regular users
can be granted the privilege.
|
Python, A MySQL statement works when I put in actual value of variable, but not when using variable?
Question: Have the following code, it is part of a script used to read stdin, and
process logs.
jobId = loglist[19]
deliveryCount += 1
dbcur.execute('UPDATE campaign_stat_delivered SET pmta_delivered = pmta_delivered + %s WHERE id = %s') % (deliveryCount,jobId)
dbcon.commit()
dbcon.close()
I can run the following:
dbcur.execute('UPDATE campaign_stat_delivered SET pmta_delivered = pmta_delivered + 1 WHERE id=1')
dbcon.commit()
dbcon.close()
and it will work. Not really sure what's going on, and it's hard for me to test
quickly because I can't actually see the script running since my program feeds
directly into it. I have to make changes, restart the program that feeds it, send an
email, then check the database. I have other scripts and am able to use variables
in SQL statements with no problem.
Any suggestions as to what may be going on? And, any suggestions on how I can
test quicker?
full code:
import os
import sys
import time
import MySQLdb
import csv
if __name__=="__main__":
dbcon = MySQLdb.connect(host="tattoine.mktrn.net", port=3306, user="adki", passwd="pKhL9vrMN8BsFrJ5", db="adki")
dbcur = dbcon.cursor()
#type, timeLogged,timeQueued,orig,rcpt,orcpt,dsnAction,dsnStatus,dsnDiag,dsnMta,bounceCat,srcType,srcMta,dlvType,dlvSourceIp,dlvDestinationIp,dlvEsmtpAvailable,dlvSize,vmta,jobId,envId,queue,vmtaPool
while True:
line = sys.stdin.readline()
fwrite = open("debug.log","w")
# fwrite.write(str(deliveryCount))
fwrite.write("test2")
dbcur.execute("INSERT INTO test(event_type) VALUES ('list')")
dbcon.commit()
loglist = line.split(',')
deliveryCount = 0
bounceType = loglist[0]
bounceCategory = loglist[10]
email = loglist[4]
jobId = loglist[19]
if bounceType == 'd':
deliveryCount += 1
fwrite = open("debug2.log","w")
# fwrite.write(str(deliveryCount))
fwrite.write("test3")
dbcur.execute("INSERT INTO test(event_type) VALUES (%d)", deliveryCount)
dbcon.commit()
dbcur.execute('UPDATE campaign_stat_delivered SET pmta_delivered = pmta_delivered + %s WHERE id = %s',(deliveryCount,jobId))
dbcon.commit()
dbcon.close()
Answer: Never use string interpolation to run a sql query.
You should do:
dbcur.execute(
'UPDATE campaign_stat_delivered SET pmta_delivered = pmta_delivered + %s WHERE id = %s',
(deliveryCount,jobId)
)
There are two arguments to the execute function: the query with placeholders,
and a tuple of parameters. This way, MySQL will escape your parameters for you
and prevent SQL injection attacks
(<http://en.wikipedia.org/wiki/SQL_injection>).
Your error must have come from the fact that you use the % operator on the
result of the query `'UPDATE campaign_stat_delivered SET pmta_delivered =
pmta_delivered + %s WHERE id = %s'`. This query in itself (without parameters)
is syntactically incorrect for mysql. You have to pass the tuple of parameters
to the execute function as a second argument.
|
Using cx_freeze with PyQT5 and Python 3 on MacOSX
Question: I'm trying to use cx_freeze 4.3.3 on a MacOS running 10.9.2 on a very simple
PyQt5 script with Python 3.3.
No errors are returned and the .app is output. However when running the .app
from terminal I obtain the error:
LSOpenURLsWithRole() failed with error -10810
which according to Apple's documentation is "Unknown Error".
The very simple code I try to run (PyQt5app.py) is:
import sys
from PyQt5.QtWidgets import QApplication, QDialog
app = QApplication(sys.argv)
form = QDialog()
form.show()
app.exec_()
The file setup.py is:
import sys
from cx_Freeze import setup, Executable
base = None
if sys.platform == 'win32':
base = 'Win32GUI'
options = {
'build_exe': {
'excludes': ['Tkinter'] # Sometimes a little finetuning is needed
}
}
executables = [
Executable('PyQt5app.py', base=base)
]
setup(name='PyQt5app',
version='0.1',
description='Sample PyQt5 GUI',
executables=executables,
options=options
)
and when running I call:
sudo python cx_freeze bdist_mac
obtaining this log : <http://pastebin.com/VBxyyBRn>
with app returning error above.
So, reading around I see it might be a problem related to including qt files
in the app (or at least this was the issue on PyQt4) so I try specifying the
qt-menu-nib directory:
sudo python setup.py bdist_mac --qt-menu-nib=/Users/franco/Qt5.2.1/5.2.1/clang_64/plugins/platforms/
obtaining this log: <http://pastebin.com/TpRdrSmT>
and the same not working error.
If I run the app from PyQt5app.app/Contents/MacOS/PyQt5app I get a lot of
bootstrap errors:
Traceback (most recent call last):
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/site-packages/cx_Freeze-4.3.3-py3.3-macosx-10.9-x86_64.egg/cx_Freeze/initscripts/Console.py", line 27, in <module>
exec(code, m.__dict__)
File "PyQt5app.py", line 5, in <module>
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 1565, in _find_and_load
return _find_and_load_unlocked(name, import_)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 1532, in _find_and_load_unlocked
loader.load_module(name)
File "ExtensionLoader_PyQt5_QtWidgets.py", line 22, in <module>
File "ExtensionLoader_PyQt5_QtWidgets.py", line 14, in __bootstrap__
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 1565, in _find_and_load
return _find_and_load_unlocked(name, import_)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 1532, in _find_and_load_unlocked
loader.load_module(name)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 584, in _check_name_wrapper
return method(self, name, *args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 495, in set_package_wrapper
module = fxn(*args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 508, in set_loader_wrapper
module = fxn(self, *args, **kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 1132, in load_module
fullname, self.path)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/importlib/_bootstrap.py", line 313, in _call_with_frames_removed
return f(*args, **kwds)
SystemError: initialization of sip raised unreported exception
Needless to say the script works fine when launching from terminal:
python PyQt5app.py
I most certainly am doing something wrong, so please, can anybody help me?
Answer: So, after a long fight, here is the issue: libzmq. I installed libzmq and
specified the --qt-menu-nib option, and the simple example above runs with both:
sudo python setup.py build
and
sudo python setup.py bdist_mac
Step by step instructions:
I used MacPorts for most of my python33 packages, so I stuck with it for the
rest. Libzmq is not available on MacPorts but its dependencies are. So:
1) install libtool, autoconf, automake:
sudo port install libtool
sudo port install autoconf
sudo port install automake
2) grab the latest version of libzmq from <https://github.com/zeromq/libzmq> (
I downloaded the ZIP for sake of order ) and unzip/navigate to the folder
/libzmq-master
Now the instructions provided in the INSTALL document in the folder are pretty
clear; if you installed all the dependencies then you will be fine. Run:
sudo ./autogen.sh
sudo ./configure
sudo make
sudo make install
3) download the latest cx_freeze from
<https://bitbucket.org/anthony_tuininga/cx_freeze/downloads> then unzip/untar
navigate to folder and run:
sudo python setup.py build
sudo python setup.py install
Now, when compiling code for Mac OS X that uses Python 3.3 and PyQt5, you can run:
sudo python setup.py build
then navigate into the build folder and run the program as:
./nameoftheprogram
once you have made sure this works, build the app or dmg (as you prefer) with:
sudo python setup.py bdist_mac --qt-menu-nib=/Users/username/Qt5.2.1/5.2.1/clang_64/plugins/platforms/
where the path is the path to your Qt5 installation. If I don't use the
--qt-menu-nib option the app crashes on startup, whereas the build works fine.
Hope this will help someone in the future.
|
Downloading Requests for python error on ubuntu
Question: I'm running ubuntu on my computer and I'm trying to download
[requests](http://docs.python-requests.org/en/latest/).
However, when I do `pip install requests` it gives me an error:
writing manifest file 'requests.egg-info/SOURCES.txt'
running install_lib
creating /usr/local/lib/python2.7/dist-packages/requests
error: could not create '/usr/local/lib/python2.7/dist-packages/requests': Permission denied
----------------------------------------
Command /usr/bin/python -c "import setuptools;__file__='/home/alejandro/build/requests/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-tT3Boe-record/install-record.txt failed with error code 1
Does anyone have any tips on how to get past this or fix it?
Answer: Your error
could not create '/usr/local/lib/python2.7/dist-packages/requests': Permission denied
suggests that you tried to install the package system-wide as a regular user -
which you don't have permission to do.
You can either install the package just for yourself with the `--user` option:
pip install --user requests
... or install it system-wide as root using `sudo`:
sudo pip install requests
Alternatively, you could look into using a [virtual
environment](http://virtualenvwrapper.readthedocs.org/en/latest/).
|
Cannot set an array element with a sequence
Question: I'm using the `NumPy` python library to run large-scale edits on a `.csv`
file. I'm using this python code:
import numpy as np
def main():
try:
e,a,ad,c,s,z,ca,fn,ln,p,p2,g,ssn,cn,com,dob,doh,em = np.loadtxt('c:\wamp\www\_quac\carryover_data\SI\Employees.csv',delimiter=',',unpack=True,dtype='str')
x=0
dob = dob.split('/')
for digit in dob:
if len(digit) == 1:
digit = str('0'+digit)
dob = str(dob[2]+'-'+dob[0]+'-'+dob[1])
doh = doh.split('/')
for digit in doh:
if len(digit) == 1:
digit = str('0'+digit)
doh = str(doh[2]+'-'+doh[0]+'-'+doh[1])
for eID in e:
saveLine=eID+','+a[x]+','+ad[x]+','+c[x]+','+s[x]+','+z[x]+','+ca[x]+','+fn[x]+','+ln[x]+','+p[x]+','+p2[x]+','+g[x]+','+ssn[x]+','+cn[x]+','+com[x]+','+dob[x]+','+doh[x]+','+em[x]+'\n'
saveFile = open('fixedEmployees.csv','a')
saveFile.write(saveLine)
saveFile.close()
x+=1
except Exception, e:
print str(e)
main()
`dob` and `doh` contain a string, e.g. `4/26/2012` and I'm trying to convert
these to `mysql` friendly `DATE` forms, e.g. `2012-04-26`. The error that is
printed when I run this script is
`cannot set an array element with a sequence`
It does not specify a line and so I don't know what this really means. I'm
pretty new to python; I've checked other questions with this same error but I
can't make sense of their code. Any help is very appreciated.
Answer: Try using `zfill` to reformat the date string so you can have a '0' before
your '4'. (`zfill` pads a string on the left with zeros to fill the width.)
doh = '4/26/2012'
doh = doh.split('/')
for i, s in enumerate(doh):
doh[i] = s.zfill(2)
doh = doh[2]+'-'+doh[0]+'-'+doh[1]
# result: '2012-04-26'
As for the `cannot set an array element with a sequence`, it would be helpful
to know where that is occurring. I'm guessing there is something wrong with
the structure of the array.
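To apply that per row (after loadtxt with unpack=True, dob and doh each hold one
string per line), a small helper along the same lines, sketched under that
assumption:
def to_mysql_date(us_date):
    """Convert 'M/D/YYYY' (or 'MM/DD/YYYY') to 'YYYY-MM-DD'."""
    month, day, year = us_date.split('/')
    return year + '-' + month.zfill(2) + '-' + day.zfill(2)

dob = [to_mysql_date(d) for d in dob]
doh = [to_mysql_date(d) for d in doh]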
|
Google App Engine + Flask + Stripe: Attribute Error: AttributeError: 'function' object has no attribute 'Customer'
Question: I'm starting a mini blog for my honeymoon with Google App Engine, Flask and
Stripe that we can send to family and friends. Everything's working great,
except for Stripe.
Error Received:
File "/Users/MDev/Desktop/Steph_Max/Wedding/main.py", line 131, in charge
customer = stripe.Customer.create(
AttributeError: 'function' object has no attribute 'Customer'
This is the code line it's referring to in the error.
@app.route('/stripe')
def stripe():
return render_template('stripe.html', key=stripe_keys['publishable_key'])
@app.route('/charge', methods=['POST'])
def charge():
# Amount in cents
amount = 500
customer = stripe.Customer.create(
email = '[email protected]',
card = request.form['stripeToken']
)
charge = stripe.Charge.create(
customer=customer.id,
amount=amount,
currency='usd',
description='Flask Charge'
)
return render_template('charge.html', amount=amount)
I have my `main.py` setup with just the default checkout as
[documented](https://stripe.com/docs/checkout/guides/flask) in Stripe docs:
from flask import Flask
from flask import render_template, request, redirect, url_for
from werkzeug.utils import secure_filename
from flask import send_from_directory
import stripe
import os
app = Flask(__name__)
#app.config['DEBUG'] = True
stripe_keys = {
'secret_key' : os.environ['SECRET_KEY'],
'publishable_key' : os.environ['PUBLISHABLE_KEY']
}
stripe.api_key = stripe_keys['secret_key']
My `index` page works (I've renamed it `stripe.html`) but the issue is just
when it goes to the `charge` page. The script for the stripe button is
working, I can submit a fake card, and everything works except for the charge
page.
I'm on the noob side of noob so please be patient with me, and if I'm missing
something I'll update ASAP :)
Versions:
* stripe: stripe-1.14.0-py2.7.egg-info
* Python: 2.7
* GAE: 1.9.3
Answer: The problem is you have imported the stripe library and then defined a
function with the same name.
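A sketch of the fix: give the Flask view a different name so it no longer shadows the
imported module (the route URL can stay '/stripe'; only url_for references, if any,
would need the new endpoint name):
@app.route('/stripe')
def stripe_checkout():   # renamed so it no longer shadows the imported stripe module
    return render_template('stripe.html', key=stripe_keys['publishable_key'])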
|
what is python runpy module?
Question: I was wondering the purpose of `runpy` module in Python and how does it help
in day to day development of the Python user community?
Can someone please explain the necessity, usage and advantages of `runpy`
module in python?
Answer: The docs say:
> The runpy module is used to locate and run Python modules without importing
> them first. Its main use is to implement the -m command line switch that
> allows scripts to be located using the Python module namespace rather than
> the filesystem.
You can run a python module like this:
python -m SimpleHTTPServer 8000
The `runpy` module enables this functionality. It may not be useful outside of
this context.
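For the rare cases where you do want this programmatically, here is a small
illustration; runpy.run_module locates the module, executes it under the given
run_name, and returns the resulting module globals as a dict:
import runpy
# roughly what "python -m this" does: find the "this" module and run it as __main__
namespace = runpy.run_module('this', run_name='__main__')
print sorted(namespace.keys())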
|
Suggestions on ways to store file information using python
Question: I've been looking around online for ways to store information in Python and I
came across a variety of ways to do this. I wanted to come to Stack Overflow
and see what you guys, who are more experienced with Python, would suggest in
this scenario.
I'm doing a for-loop collecting all the files in a specific directory. What I
want to do is collect each file along with its file size, filename, and
filetype. What would you guys recommend for this? I know the method used below
is not good practice.
import os
cacheDir = os.path.normpath('Q:\Qoros\Project_Assets\car\PublishedMXS')
mxsFiles = []
mxsNames = []
maxFilesize = []
for file in os.listdir(cacheDir):
if file.endswith(".mxs"):
filePath = cacheDir + '/' + file
prefix = file.split('_')
mxsFiles.append(filePath)
mxsNames.append(prefix[0])
# maxFilesize.append(filesize)
Answer:
from collections import namedtuple
import glob
import os
cache_dir = os.path.normpath('Q:\Qoros\Project_Assets\car\PublishedMXS')
filespec = "*.mxs"
FileData = namedtuple("FileData", ["name", "prefix", "size"])
files = []
for fname in glob.glob(os.path.join(cache_dir, filespec)):
name = os.path.abspath(fname)
prefix = fname.split("_")[0]
size = os.stat(fname).st_size
files.append(FileData(name, prefix, size))
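Each collected entry can then be read by field name rather than by position, for
example:
for fd in files:
    print "%s  prefix=%s  %d bytes" % (fd.name, fd.prefix, fd.size)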
|
MySQLdb is caching SELECT results?
Question: I'm running a SELECT query in a loop.
Once in a while, the database table is updated (by another program).
The first SELECT retrieves the correct data, but **further calls in the loop
return the first values**.
How can I retrieve up-to-date data?
The only workaround I found so far is to reconnect to the DB on each iteration!
In my example, that means uncommenting BOTH comments #1# and #2#. Uncommenting only #2#
(i.e., recreating the cursor) is not enough; the results are still cached.
Here's a working sample that gives the error.
import MySQLdb
from time import sleep
class DB:
def __init__(self):
self.connection = MySQLdb.connect(mysql_host, mysql_user, mysql_pass, mysql_db)
self.cursor = self.connection.cursor()
def get(self):
sql = ''' SELECT id, message FROM mps_messages
WHERE topic=%s ORDER BY id LIMIT 1 '''
#1# self.connect()
#2# self.cursor = self.connection.cursor()
self.cursor.execute(sql, ("topic",) )
rec = self.cursor.fetchone()
print rec
def loop(self):
while True:
self.get()
sleep(4)
db=DB()
db.loop()
* OS: ubuntu,
* python: 2.7.4
* mysqldb: 1.2.3
* mysql: 5.5.34
Answer: I had to add
connection.autocommit(True)
Adding SQL_NO_CACHE had no effect on the presented case, apparently because
there was no caching involved.
I still don't understand why a SELECT needs COMMIT.
I'll open a new question about it.
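For context, this is most likely because MySQLdb turns autocommit off, so the first
SELECT opens a transaction, and under InnoDB's default REPEATABLE READ isolation
every later SELECT in that transaction keeps reading the same consistent snapshot;
rows committed by the other program stay invisible until you commit or enable
autocommit. A sketch of where the call fits in the class above:
def __init__(self):
    self.connection = MySQLdb.connect(mysql_host, mysql_user, mysql_pass, mysql_db)
    self.connection.autocommit(True)   # end each statement's transaction so later SELECTs see fresh rows
    self.cursor = self.connection.cursor()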
|
How do I access Windows Azure account token using Python 3.3
Question: I am new to programming and I am looking to access Windows translate token
using Python 3.3. MS have guidelines [here](http://msdn.microsoft.com/en-
us/library/ff512433.aspx) for C# and PHP but I don't seem to be able to
replicate their success in Python. The code is falling over at the
HTTPConnection stage.
**Edit [14 May]:** I found some improvements that seem to be helping
My simple code to connect is:
from suds.client import Client
import http.client
# Initialize variables:
wsdlUrl = "http://api.microsofttranslator.com/V2/Soap.svc"
clientID = "ID";
clientSecret = "SECRET"
authUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13/"
scopeUrl = "http://api.microsofttranslator.com"
grantType = "client_credentials"
def getTokens(grantType, scopeUrl, clientID, clientSecret, authUrl):
headers = {"grant_type": grantType, "client_id": clientID, "client_secret": clientSecret, "scope": scopeUrl}
conn = http.client.HTTPSConnection('datamarket.accesscontrol.windows.net')
conn.request("POST", "/v2/OAuth2-13", "", headers)
response = conn.getresponse()
print(response.status, response.reason)
getTokens(grantType, scopeUrl, clientID, clientSecret, authUrl)
The error I now receive is: 400 Bad Request
I have researched this error and where answers were provided it tended to be
reasonably straightforward to fix. I have checked the code with these fixes
but to no avail.
I guess this is a reasonably common problem for anyone wanting to work with
Azure and needing to access the token?
If you have any suggestions on how to diagnose this error or better methods to
get the access key, kindly share please.
Answer: I found a solution. MS have a useful tool for debugging here:
<http://oauthdevconsole.cloudapp.net/PartialOAuth>
In the end, the solution was that the parameters needed to be sent as a single string in the request body rather than as headers, and an unusual character in the client secret needed a workaround, which the tool above helped to diagnose.
The final code is here:
def getTokens(grantType, scopeUrl, clientID, clientSecret, authUrl):
conn = http.client.HTTPSConnection('datamarket.accesscontrol.windows.net')
conn.request("POST", "/v2/OAuth2-13/", "client_id="+clientID+"&client_secret="+clientSecret+"&grant_type=client_credentials&scope="+scopeUrl)
response = conn.getresponse()
print(response.status, response.reason)
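If the client secret contains characters that are special in a URL-encoded body, `urllib.parse.urlencode` can build the body safely instead of string concatenation (a sketch reusing the variables from the question):
import http.client
import urllib.parse
def getTokens(grantType, scopeUrl, clientID, clientSecret):
    body = urllib.parse.urlencode({
        'client_id': clientID,
        'client_secret': clientSecret,
        'grant_type': grantType,
        'scope': scopeUrl,
    })
    headers = {'Content-Type': 'application/x-www-form-urlencoded'}
    conn = http.client.HTTPSConnection('datamarket.accesscontrol.windows.net')
    conn.request('POST', '/v2/OAuth2-13/', body, headers)
    response = conn.getresponse()
    print(response.status, response.reason)
    return response.read()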
Best regards,
* Rob
|
python - variable not clearing inbetween function calls
Question: This program basically flattens an xml file and writes it to csv.
My issue is that the ‘row’ variable in ‘values_loop’ isn’t resetting between
calls. Every time I call it the new values are appended to the old ones.
Basically I’m getting this:
A B C
A B C D E F
A B C D E F G H I
When I should get this
A B C
D E F
G H I
My code:
import csv
import xml.etree.ElementTree as ET
file_name = '2.xml'
root = ET.ElementTree(file=file_name).getroot()
csv_file_name = '.'.join(file_name.split('.')[:-1]) + ".txt"
print csv_file_name
with open(csv_file_name, 'w') as file_:
writer = csv.writer(file_, delimiter="\t")
def header_loop(root, i=0, row=[]):
for child in root:
#print "\t"*i, child.tag.replace("{http://www.tes.com/aps/response}",""), child.attrib, i
row.extend([child.tag.replace("{http://www.tes.com/aps/response}","")])
header_loop(child,i+1)
if i==0: return row
def values_loop(root, i=0, row=[]):
for child in root:
#print "\t"*i, child.tag.replace("{http://www.tes.com/aps/response}",""), child.attrib, i
row.extend([child.text])
#print child.text
values_loop(child,i+1)
if i==0: return row
#write the header
writer.writerow(header_loop(root[3]))
#write the values
writer.writerow(values_loop(root[3]))
writer.writerow(values_loop(root[4]))
Answer: Set None as the default value for row. Python evaluates default argument values only once, when the function is defined, so the same list keeps being reused (and extended) across calls. Create a fresh list inside the function instead:
def values_loop(root, i=0, row=None):
    if row is None:
        row = []
The same change is needed in `header_loop`, which has the same mutable default.
|
Python if any() does not work
Question: I want to check whether any string element in a list `phrases` contains certain keywords from a set `phd_words`. I want to use `any`, but it doesn't work.
In[19]:
import pandas as pd
import psycopg2 as pg
def test():
phd_words = set(['doctor', 'phd'])
phrases = ['master of science','mechanical engineering']
for word in phrases:
if any(keyword in word for keyword in phd_words):
return 'bingo!'
test()
Out[20]:
bingo!
How should I fix this?
Answer: That may happen if you use IPython's `%pylab` magic:
In [1]: %pylab
Using matplotlib backend: Qt4Agg
Populating the interactive namespace from numpy and matplotlib
In [2]: if any('b' in w for w in ['a', 'c']):
...: print('What?')
...:
What?
Here's why:
In [3]: any('b' in w for w in ['a', 'c'])
Out[3]: <generator object <genexpr> at 0x7f6756d1a948>
In [4]: any
Out[4]: <function numpy.core.fromnumeric.any>
`any` and `all` get shadowed with `numpy` functions, and those behave
differently than the builtins. This is the reason I stopped using `%pylab` and
started using `%pylab --no-import-all` so that it doesn't clobber the
namespace like that.
To reach the builtin function when it is already shadowed, you can try
`__builtin__.any`. The name `__builtin__` seems to be available in IPython on
both Python 2 and Python 3, which is probably on itself enabled by IPython. In
a script, you would first have to [`import
__builtin__`](https://docs.python.org/2/library/__builtin__.html) on Python 2
and [`import builtins`](https://docs.python.org/3/library/builtins.html) on
Python 3.
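A minimal sketch of restoring the shadowed builtins in a Python 2 script (on Python 3 the module is named `builtins` instead):
from numpy import *                 # shadows any/all the same way %pylab does
from __builtin__ import any, all    # put the real builtins back
phd_words = set(['doctor', 'phd'])
phrases = ['master of science', 'mechanical engineering']
print any(keyword in word for word in phrases for keyword in phd_words)  # False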
|
Project a 2D Point to 3D using a depth value. Maya Python API
Question: I'm trying to figure out how to project a 3D point from a 2D Point. I'd like
to be able to give it a depth value to project to. Anyone have any examples
for maya?
Thanks!
Here's the best I've been able to do:
def screenToWorld(point2D=None,
depth=None,
viewMatrix=None,
projectionMatrix=None,
width=None,
height=None):
'''
@param point2D - 2D Point.
@param viewMatrix - MMatrix of modelViewMatrix (World inverse of camera.)
@param projectionMatrix - MMatrix of camera's projectionMatrix.
@param width - Resolution width of camera.
@param height - Resolution height of camera.
Returns worldspace MPoint.
'''
point3D = OpenMaya.MPoint()
point3D.x = (2.0 * (point2D[0] / width)) - 1.0
point3D.y = (2.0 * (point2D[1] / height)) - 1.0
viewProjectionMatrix = (viewMatrix * projectionMatrix)
point3D.z = viewProjectionMatrix(3, 2)
point3D.w = viewProjectionMatrix(3, 3)
point3D.x = point3D.x * point3D.w
point3D.y = point3D.y * point3D.w
point3D = point3D * viewProjectionMatrix.inverse()
return point3D
As you can tell it does not use the depth value. I'm not sure how to
incorporate it using the projection matrix and viewMatrix.
Any help is greatly appreciated! -Chris
Answer: So I think I got a solution for this:
import maya.OpenMaya as OpenMaya
def projectPoint(worldPnt, camPnt, depth):
'''
@param worldPnt - MPoint of point to project. (WorldSpace)
@param camPnt - MPoint of camera position. (WorldSpace)
@param depth - Float value of distance.
Returns list of 3 floats.
'''
#Get vector from camera to point and normalize it.
mVec_pointVec = worldPnt - camPnt
mVec_pointVec.normalize()
#Multiply it by the depth and the camera offset to it.
mVec_pointVec *= depth
mVec_pointVec += OpenMaya.MVector(camPnt.x, camPnt.y, camPnt.z)
return [mVec_pointVec.x, mVec_pointVec.y, mVec_pointVec.z]
I didn't really need to convert it to 2D and then back to 3D. I just needed to extend the vector from the camera.
|
Pickling weakref in Python
Question: I am still pretty new to Python and even newer to pickling. I have a class
`Vertex(ScatterLayout)` with a
[`__getnewargs__()`](https://docs.python.org/2/library/pickle.html#object.__getnewargs__):
def __getnewargs__(self):
return (self.pos, self.size, self.idea.text)
My understanding is that this will cause the pickle to pickle the object from
`__getnewargs__()` rather than the object's dictionary.
The pickle is called in the following method (in a different class
`MindMapApp(App)`):
def save(self):
vertices = self.mindmap.get_vertices()
edges = self.mindmap.get_edges()
output = open('mindmap.pkl', 'wb')
#pickle.dump(edges, output, pickle.HIGHEST_PROTOCOL)
pickle.dump(vertices, output, pickle.HIGHEST_PROTOCOL)
output.close()
When I call the `save()` method I get the following error:
pickle.PicklingError: Can't pickle <type 'weakref'>: it's not found as __builtin__.weakref
What am I missing or not understanding? I have also tried implementing the
`__getstate__()` / `__setstate__(state)` combination, with the same result.
Answer: You definitely can pickle a `weakref`, and you can pickle a `dict` and a `list`. However, it actually matters what they contain: if the `dict` or `list` contains unpicklable items, then the pickling will fail. If you want to pickle a `weakref`, you have to use `dill` and not `pickle`. The unpickled `weakref`s, however, deserialize as dead references.
>>> import dill
>>> import weakref
>>> dill.loads(dill.dumps(weakref.WeakKeyDictionary()))
<WeakKeyDictionary at 4528979192>
>>> dill.loads(dill.dumps(weakref.WeakValueDictionary()))
<WeakValueDictionary at 4528976888>
>>> class _class:
... def _method(self):
... pass
...
>>> _instance = _class()
>>> dill.loads(dill.dumps(weakref.ref(_instance)))
<weakref at 0x10d748940; dead>
>>> dill.loads(dill.dumps(weakref.ref(_class())))
<weakref at 0x10e246a48; dead>
>>> dill.loads(dill.dumps(weakref.proxy(_instance)))
<weakproxy at 0x10e246b50 to NoneType at 0x10d481598>
>>> dill.loads(dill.dumps(weakref.proxy(_class())))
<weakproxy at 0x10e246ba8 to NoneType at 0x10d481598>
|
pySDL2 Display without End Loop
Question: In every pySLD2 example I've found, I've seen a loop at the end of the code to
keep the window open until closure. For example:
#!/usr/bin/env python
"""
The code is placed into public domain
by anatoly techtonik <[email protected]>
"""
import sdl2
import sdl2.ext as lib
lib.init()
window = lib.Window('', size=(300, 100))
window.show()
renderer = lib.Renderer(window)
renderer.draw_point([10,10], lib.Color(255,255,255))
renderer.present()
####Specifically this loop####
running = True
while running:
for e in lib.get_events():
if e.type == sdl2.SDL_QUIT:
running = False
break
if e.type == sdl2.SDL_KEYDOWN:
if e.key.keysym.sym == sdl2.SDLK_ESCAPE:
running = False
break
All event handlers I've seen have been blocking. Is there a way, like I have seen done in standard SDL, to simply call **initialization and updating functions on a window that stays open**? I am trying to write this as an external library that I can call independently from any project.
Any ideas? Thanks!
EDIT: As per request for a way to do this in standard SDL,
[this](https://github.com/aacoppa/Graphics_Engine_America/blob/master/src/screen.c)
works. Just call the init function to set up the screen and it will stay until
you close it.
Answer: You are misinterpreting the example - this is an additional hook (screen.c) to
be set up in the main loop.
SDL uses the main loop approach to deal with events, update the window
display, etc. Yes, there is a way to work around that. Create your very own
window and use whatever approach to keep it open, get the window handle via
the SDL_SysWM* functions and update the window's display buffer.
This however involves some glue code and also requires you to do the window
handling on your own.
|
How to get a line from a web site with Python
Question: I want to print only the line containing --> REQUIRES="berusky-data"
I tried this but it does not work:
import urllib2
import re
f = urllib2.urlopen('http://slackbuilds.org/slackbuilds/14.1/games/berusk/berusky.info')
r = f.read()
for line in r:
if "REQUIRES" in r:
print line,
Answer: Use
[`splitlines()`](https://docs.python.org/2/library/stdtypes.html#str.splitlines)
for splitting the data by new-lines and check if `REQUIRES` string is in
`line`:
for line in r.splitlines():
if "REQUIRES" in line:
print line,
Demo:
>>> import urllib2
>>> f = urllib2.urlopen('http://slackbuilds.org/slackbuilds/14.1/games/berusky/berusky.info')
>>> r = f.read()
>>> for line in r.splitlines():
... if "REQUIRES" in line:
... print line,
...
REQUIRES="berusky-data"
|
How can I run a python script on Windows 7, with the ability to enable/disable the appearance of a window and a taskbar entry?
Question: I found only [How to start a python script in the background once it's run?](http://stackoverflow.com/questions/6345298/how-to-start-a-python-script-in-the-background-once-its-run) and [how to run a python script in the background?](http://stackoverflow.com/questions/783531/how-to-run-a-python-script-in-the-background), but those approaches are permanent and do not allow switching back to a normal mode.
I want to periodically check for some situation (a change on a website) and be notified once it happens, without the script appearing as a window and taking space in the taskbar all the time.
Answer: You need to build a Windows SysTray App/Service using the [Python for Windows
Extensions](http://sourceforge.net/projects/pywin32/)
Please see similar question: [How to build a SystemTray app for
Windows?](http://stackoverflow.com/questions/9494739/how-to-build-a-
systemtray-app-for-windows)
This is by no means trivial, so please feel free to post follow-up questions
of a more specific nature!
**Update:**
You may also find this 3rd-party module quite useful that seems to implement
the hardest part of this for you as a reuseable library:
<https://github.com/Infinidat/infi.systray>
Example using `infi.systray`:
from infi.systray import SysTrayIcon
def say_hello(systray):
print "Hello, World!"
menu_options = (("Say Hello", None, say_hello),)
systray = SysTrayIcon("icon.ico", "Example tray icon", menu_options)
systray.start()
Looking at the source code for `infi.systray` the nice thing about it is that
it doesn't depend on `pywin32`. It uses `ctypes` which is available as part of
the standard Python library.
|
Python surface real position coordinates of pygame.mouse.get_pos and Rect.collidepoint
Question: In my python prog i have 2 surfaces :
* `ScreenSurface` : the screen
* `FootSurface` : another surface blited on `ScreenSurface`.
I put some rect blitted on the `FootSurface`, the problem is that
`Rect.collidepoint()` gives me relative coordinates linked to the
`FootSurface` and `pygame.mouse.get_pos()` gives absolute coordinates.
for example :
`pygame.mouse.get_pos()` \--> (177, 500) related to the main surface named
`ScreenSurface`
`Rect.collidepoint()` \--> related to the second surface named `FootSurface`
where the rect is blitted
Then that can't work. Is there an elegant python way to do this things: have
the relative position of mouse on the `FootSurface` or the absolute position
of my `Rect`; or must I change my code to split `Rect` in the `ScreenSurface`.
Answer: You can calculate the relative mouse position to any surface with a simple
subtraction.
Consider the following example:
import pygame
pygame.init()
screen = pygame.display.set_mode((400, 400))
rect = pygame.Rect(180, 180, 20, 20)
clock = pygame.time.Clock()
d=1
while True:
for e in pygame.event.get():
if e.type == pygame.QUIT:
raise
screen.fill((0, 0, 0))
pygame.draw.rect(screen, (255, 255, 255), rect)
rect.move_ip(d, 0)
if not screen.get_rect().contains(rect):
d *= -1
pos = pygame.mouse.get_pos()
# print the 'absolute' mouse position (relative to the screen)
print 'absoulte:', pos
# print the mouse position relative to rect
print 'to rect:', pos[0] - rect.x, pos[1] - rect.y
clock.tick(100)
pygame.display.flip()
|
Split string with regex not working
Question: I'm trying to split a big file with a regex. The problem is that I want to keep the delimiter in the text after the split, so I tried adding ?= at the beginning of the regex, but then it doesn't split at all. I tried the modified regex in Sublime, and it works there.
Text is like this:
Aug 07, 2014 01:01:01 PM
some text
Aug 07, 2014 02:02:02 PM
So, date, then some text and date. I want to get split text with regex which
recognize that date.
First version of the regex, which works perfectly for my purpose:
\w{3}\s\d{2}\,\s\d{4}\s\d{1,2}\:\d{2}\:\d{2}\s[AM|PM].)
Code in Python is this:
allparts = re.compile(r'\w{3}\s\d{2}\,\s\d{4}\s\d{1,2}\:\d{2}\:\d{2}\s[AM|PM].').split(alltext)
After adding ?=, it looks like this:
allparts2 =re.compile(r'(?=\w{3}\s\d{2}\,\s\d{4}\s\d{1,2}\:\d{2}\:\d{2}\s[AM|PM].)').split(alltext)
What am I doing wrong in the second version?
Answer: Sorry, my first answer was wrong :) Don't add ?=; instead just wrap the pattern in plain parentheses (a capturing group), like this:
allparts2 =re.compile(r'(\w{3}\s\d{2},\s\d{4}\s\d{1,2}\:\d{2}\:\d{2}\s[AM|PM].)').split(alltext)
Then try it without compile...
allparts2 = re.split('(\w{3}\s\d{2},\s\d{4}\s\d{1,2}\:\d{2}\:\d{2}\s[AM|PM].)', alltext)
When using:
#!/usr/local/bin/python2.7
import re
alltext = "Aug 07, 2014 01:01:01 PM some text Aug 07, 2014 02:02:02 PM another text Aug 07, 2014 03:03:03 AM "
allparts2 = re.split('(?=\w{3}\s\d{2},\s\d{4}\s\d{1,2}\:\d{2}\:\d{2}\s[AM|PM].)', alltext)
print(allparts2)
Result was:
Executing the program....
$python2.7 main.py
['Aug 07, 2014 01:01:01 PM some text Aug 07, 2014 02:02:02 PM another text Aug 07, 2014 03:03:03 AM ']
When using:
#!/usr/local/bin/python2.7
import re
alltext = "Aug 07, 2014 01:01:01 PM some text Aug 07, 2014 02:02:02 PM another text Aug 07, 2014 03:03:03 AM "
allparts2 = re.split('(?:\w{3}\s\d{2},\s\d{4}\s\d{1,2}\:\d{2}\:\d{2}\s[AM|PM].)', alltext)
print(allparts2)
Result was:
Executing the program....
$python2.7 main.py
['', ' some text ', ' another text ', ' ']
When using:
#!/usr/local/bin/python2.7
import re
alltext = "Aug 07, 2014 01:01:01 PM some text Aug 07, 2014 02:02:02 PM another text Aug 07, 2014 03:03:03 AM "
allparts2 = re.split('(\w{3}\s\d{2},\s\d{4}\s\d{1,2}\:\d{2}\:\d{2}\s[AM|PM].)', alltext)
print(allparts2)
Result was:
Executing the program....
$python2.7 main.py
['', 'Aug 07, 2014 01:01:01 PM', ' some text ', 'Aug 07, 2014 02:02:02 PM', ' another text ', 'Aug 07, 2014 03:03:03 AM', ' ']
Just to compare different forms.
|
Using Python CGI for a big HTML file
Question: I have a big HTML file named exercise.html into which I need to generate some stuff via Python CGI. I want to ask what is the best way to print this HTML.
I know that it is possible with the print statement using format specifiers such as %s, %i, etc.:
print '''<html>
<head><title>My first Python CGI app</title></head>
<body>
<p>Hello, 'world'!</p>
.
.
<div>%s</div>
.
.
</body>
</html>''' % generated_text
But this HTML is really big, so is this the only solution?
Answer: You should consider using a templating language like
[Jinja2](http://jinja.pocoo.org/docs/intro/#basic-api-usage).
Here is a simple example straight from the link above:
>>> from jinja2 import Template
>>> template = Template('Hello {{ name }}!')
>>> template.render(name='John Doe')
Generally, though, you save templates in a file and then load / process them:
from jinja2 import Environment, PackageLoader
# The env object below finds templates that are located in the `templates`
# directory of your `yourapplication` package.
env = Environment(loader=PackageLoader('yourapplication', 'templates'))
template = env.get_template('mytemplate.html')
print template.render(the='variables', go='here')
As demonstrated above, templates let you put `variables` into the template.
Placing text inside {{ }} makes it a template variable. When you render the
template, pass in the variable value with a keyword argument. For instance,
the template below has a name variable that we pass via template.render
> This is my {{name}}.
template.render(name='Jaime')
|
Output missing when exe built with py2exe is run from Win CLI
Question: I have a script, `my_script.py` that includes these functions.
def output_results(scrub_count, insert_count):
print "Scrubbing Complete"
print "Valid requests: "+ str(scrub_count["Success"])
if scrub_count["Error"] > 0:
print "Requests with errors can be found in " + error_log
print "\n"
print "Inserting Complete"
print str(insert_count["Success"]) + " rows inserted into " + table + "."
print str(insert_count["Error"]) + " rows were NOT inserted. Please see " + error_log + " for more details."
def main():
scrub_count={"Success":0,"Error":0}
insert_count={"Success":0,"Error":0}
someOtherFunctions()
output_results(scrub_count, insert_count)
When I run the `python my_script.py` from the Windows CLI, `output_results`
prints out the following as expected.
Scrubbing Complete
Valid requests: 7
Invalid requests: 3
Requests with errors can be found in error.log
Inserting Complete
5 rows inserted into Table.
2 rows were NOT inserted. Please see error.log for more details.
I then build `my_script.py` into `my_script.exe` using py2exe and this setup
file:
from distutils.core import setup
import py2exe, sys
sys.argv.append('py2exe')
setup(
options = {'py2exe':{'bundle_files':1, 'compressed':True,'includes':["socket","decimal","uuid"]}},
windows = [{'script':"C:\Path\To\my_script.py"}],
zipfile = None,
)
When I run `my_script.exe` from the Windows CLI, it functions properly
_except_ for the fact that `output_results` does not output anything to the
command line like when I run `python my_script.py`.
* How can I fix this and still only have a single executable created?
* I haven't tried pyinstaller or cx_freeze. Do these handle options handle this better?
Answer: As previously explained by @Avaris in [Print not working when compiled with
py2exe](http://stackoverflow.com/questions/12505383/print-not-working-when-
compiled-with-py2exe), the `windows` option in the setup section is intended to build GUI executables, which do not handle printing to the console. The proper option is to use the `console` section.
So, instead of
setup(
options = {'py2exe':{'bundle_files':1, 'compressed':True,'includes':["socket","decimal","uuid"]}},
windows = [{'script':"C:\Path\To\my_script.py"}],
zipfile = None,
)
use
setup(
options = {'py2exe':{'bundle_files':1, 'compressed':True,'includes':["socket","decimal","uuid"]}},
console = [{'script': "C:\Path\To\my_script.py"}],
zipfile = None,
)
I can't say for cx_freeze, but pyinstaller also has separate routines for
compiling into gui and console executables.
|
Variance inflation factor in ridge regression in python
Question: I'm running a ridge regression on somewhat collinear data. One of the methods
used to identify a stable fit is a ridge trace and thanks to the great example
on [scikit-learn](http://scikit-
learn.org/stable/auto_examples/linear_model/plot_ridge_path.html#example-
linear-model-plot-ridge-path-py), I'm able to do that. Another method is to
calculate variance inflation factors (VIFs) for each variable as k increases.
When the VIFs decrease to <5 it is an indication the fit is satisfactory.
[Statsmodels](http://statsmodels.sourceforge.net/devel/_modules/statsmodels/stats/outliers_influence.html#variance_inflation_factor)
has code for VIFs, but it is for an OLS regression. I've attempted to alter it
to handle a ridge regression.
I'm checking my results against Regression Analysis by Example, 5th edition,
chapter 10. My code generates the correct results for k = 0.000, but not after
that. Working SAS code is available, but I'm not a SAS user and I don't know
the differences between that implementation and scikit-learn's (and/or
statsmodels's).
I've been stuck on this for a few days so any help would be much appreciated.
#http://www.ats.ucla.edu/stat/sas/examples/chp/chp_ch10.htm
from __future__ import division
import numpy as np
import pandas as pd
example = pd.read_csv('by_example_import.csv')
example.dropna(inplace=True)
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(example)
scaler.transform(example)
X = example.drop(['year', 'import'], axis=1)
#c_matrix = X.corr()
y = example['import']
#w, v = np.linalg.eig(c_matrix)
import pylab as pl
from sklearn import linear_model
###############################################################################
# Compute paths
alphas = [0.000, 0.001, 0.003, 0.005, 0.007, 0.009, 0.010, 0.012, 0.014, 0.016, 0.018,
0.020, 0.022, 0.024, 0.026, 0.028, 0.030, 0.040, 0.050, 0.060, 0.070, 0.080,
0.090, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.5, 2.0]
clf = linear_model.Ridge(fit_intercept=False)
clf2 = linear_model.Ridge(fit_intercept=False)
coefs = []
vif_list = [[] for x in range(X.shape[1])]
for a in alphas:
clf.set_params(alpha=a)
clf.fit(X, y)
coefs.append(clf.coef_)
for j, data in enumerate(X.columns):
cols = [col for col in X.columns if col not in [data]]
Z = X[cols]
yy = X.iloc[:,j]
clf2.set_params(alpha=a)
clf2.fit(Z, yy)
r_squared_j = clf2.score(Z, yy)
vif = 1. / (1. - r_squared_j)
print r_squared_j
vif_list[j].append(vif)
pd.DataFrame(vif_list, columns = alphas).T
pd.DataFrame(coefs, index=alphas)
###############################################################################
# Display results
ax = pl.gca()
ax.set_color_cycle(['b', 'r', 'g', 'c', 'k', 'y', 'm'])
ax.plot(alphas, coefs)
pl.vlines(ridge_cv.alpha_, np.min(coefs), np.max(coefs), linestyle='dashdot')
pl.xlabel('alpha')
pl.ylabel('weights')
pl.title('Ridge coefficients as a function of the regularization')
pl.axis('tight')
pl.show()
Answer: Variance inflation factor for Ridge regression is just three lines. I checked
it with the example on the UCLA statistics page.
A variation of this will make it into the next statsmodels release. Here is my
current function:
def vif_ridge(corr_x, pen_factors, is_corr=True):
"""variance inflation factor for Ridge regression
assumes penalization is on standardized variables
data should not include a constant
Parameters
----------
corr_x : array_like
correlation matrix if is_corr=True or original data if is_corr is False.
pen_factors : iterable
iterable of Ridge penalization factors
is_corr : bool
Boolean to indicate how corr_x is interpreted, see corr_x
Returns
-------
vif : ndarray
variance inflation factors for parameters in columns and ridge
penalization factors in rows
could be optimized for repeated calculations
"""
corr_x = np.asarray(corr_x)
if not is_corr:
corr = np.corrcoef(corr_x, rowvar=0, bias=True)
else:
corr = corr_x
eye = np.eye(corr.shape[1])
res = []
for k in pen_factors:
minv = np.linalg.inv(corr + k * eye)
vif = minv.dot(corr).dot(minv)
res.append(np.diag(vif))
return np.asarray(res)
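A small usage sketch with made-up data, just to show the calling convention (one row of VIFs per penalization factor, one column per variable):
import numpy as np
rs = np.random.RandomState(0)
x = rs.randn(100, 3)
x[:, 2] = x[:, 0] + 0.01 * rs.randn(100)   # make two columns nearly collinear
pen_factors = [0.0, 0.001, 0.01, 0.1, 1.0]
print vif_ridge(x, pen_factors, is_corr=False)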
|
Pass keyboard input to a windows executable
Question: I am creating a Python batch script for a piece of software that must run as a Windows executable in C:\. The pipeline is almost set up, but the executable requires some keyboard entry before it starts. Before trying to pass keyboard entry it worked with `subprocess.call()`, but I couldn't get the syntax to work for `communicate()`. This is commented out below.
I have tried now with win32com add on SendKeys (see below), but now the main
script carries on before the executable finishes. Any advice on my code? Can I
log the exit status of the executable to use a while loop just to sleep the
main process until the executable finishes?
#subprocess.call(r"C:\LTR_STRUC\LTR_STRUC_1_1.exe")
shell = win32com.client.Dispatch("WScript.shell")
Return = shell.Run(r"C:\LTR_STRUC\LTR_STRUC_1_1.exe")
time.sleep(2)
shell.AppActivate(r"C:\LTR_STRUC\LTR_STRUC_1_1.exe")
shell.SendKeys("y", 0)
shell.SendKeys("{Enter}", 0)
time.sleep(1)
shell.SendKeys("y", 0)
shell.SendKeys("{Enter}", 0)
time.sleep(1)
shell.SendKeys("{Enter}", 0)
###...and on goes the code...
Any other clever suggestions will be much appreciated!!
Answer: Give this a try:
import os
from subprocess import Popen, PIPE
cmd = r"C:\LTR_STRUC\LTR_STRUC_1_1.exe"
app = Popen(cmd, stdin=PIPE, shell=True)
app.stdin.write("y" + os.linesep)
...
Also, if you wanted to respond to a prompt, you could PIPE stdout as well and access it via:
app.stdout.read()
reference: <https://docs.python.org/2/library/subprocess.html#subprocess.PIPE>
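If you would rather feed all of the answers at once and simply wait for the program to exit, `communicate()` also works; a sketch, with the three responses mirroring the SendKeys sequence in the question:
import os
from subprocess import Popen, PIPE
cmd = r"C:\LTR_STRUC\LTR_STRUC_1_1.exe"
app = Popen(cmd, stdin=PIPE, shell=True)
# Send every keyboard response up front, then block until the exe finishes.
app.communicate("y" + os.linesep + "y" + os.linesep + os.linesep)
print app.returncode   # exit status of the executable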
|
Python: os.system(ping) argument not working?
Question: I am trying to make a def statement that uses os.system(ping) with a variable,
and it does not want to take the variable:
import os
def myping(myip):
print(myip)
ip1 = os.system("ping -c 1 myip")
print(ip1)
myping("127.0.0.1")
This returns a code of 512 (fail, rather than a 0, which is success) and then
"ping: unknown host myip". Depending on how I format the myip in the os.system
statement I can also get a 256 with another error. I have tried various ways
of formatting it, including quotation marks, parentheses, percent signs and
such, and I'm just not sure what I'm doing wrong.
I can tell I am making it think I want to scan for myip (literal string). What
syntax do I have wrong here? Thanks.
By the way, print(myip) within the def statement does give me 127.0.0.1.
Answer: You probably want to get the value of `myip` into the argument to `os.system`:
ip1 = os.system('ping -c 1 {}'.format(myip))
or (old-style `%` formatting, which still works but is generally discouraged in favour of `str.format`):
ip1 = os.system("ping -c 1 %s" % myip)
|
How can I print the contents of a ConfigParser to my log?
Question: How can I print the contents of a Python 2.7 `ConfigParser` to `logging`?
The only solution I can find is to write to a temporary file and read that
file back in. Another idea I had was to get a fake "file handle" from the
logging utility and pass that to the ConfigParser write method, but I don't
know how to get such a handle form logging.
Answer: You should be able to create a writable object that writes to the log.
Something like this (if you want to keep the string around you could modify
the ConfigLogger to save it as well):
import ConfigParser
import logging
class ConfigLogger(object):
def __init__(self, log):
self.__log = log
def __call__(self, config):
self.__log.info("Config:")
config.write(self)
def write(self, data):
# stripping the data makes the output nicer and avoids empty lines
line = data.strip()
self.__log.info(line)
config = ConfigParser.ConfigParser()
config.add_section("test")
config.set("test", "a", 1)
# create the logger and pass it to write
logging.basicConfig(filename="test.log", level=logging.INFO)
config_logger = ConfigLogger(logging)
config_logger(config)
This yields the following output:
INFO:root:Config:
INFO:root:[test]
INFO:root:a = 1
INFO:root:
|
Is there a module that enables property editing for general Windows files?
Question: So recently, a technologically clever fellow hid all the files in my school's most frequently accessed and most important public networked mounted drives. Now, as the problem hasn't yet been fixed, I see it as an opportunity to expand my knowledge of Python. I would like to know if there is a module that allows me to edit the properties of files and folders in general. !!EDIT: Windows folders.
Not the /contents/, the properties: the date/time accessed, images of mp3 music, and more importantly, the hidden/not hidden status of files.
Answer: You can use the [Python for Windows](https://www.python.org/download/windows)
package to access a wide range of Windows API's. In particular, the
[SetFileAttributes](http://msdn.microsoft.com/en-
us/library/windows/desktop/aa365535%28v=vs.85%29.aspx) function will be
useful. You can see [an example
here](http://code.activestate.com/recipes/303343-changing-file-attributes-on-
windows/). There are other API's, such as GetFileAttributes and GetFileAttributesEx, that you can use to get more information; they'll be defined in the same place as SetFileAttributes.
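As a rough sketch of toggling the hidden flag with those API's (the path below is just a placeholder):
import win32api
import win32con
path = r'N:\public\some_file.txt'   # placeholder path on the affected drive
attrs = win32api.GetFileAttributes(path)
if attrs & win32con.FILE_ATTRIBUTE_HIDDEN:
    # clear the hidden bit while keeping every other attribute untouched
    win32api.SetFileAttributes(path, attrs & ~win32con.FILE_ATTRIBUTE_HIDDEN)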
|
setting up Django Virtual Env error "The executable /var/bin/python (from --python=/var/bin/python) does not exist"
Question: I was given a project to work on and am now trying to run that project in a
virtual environment. I am new to python, but in the past, I was comfortable
with the "manage.py runserver" concept. I'm having trouble learning virtual
environments.
I know that I have virtualenv installed.
My first direction given to run the virtual environment for this project was
to run `virtualenv --python=/var/bin/python --clear --no-site-packages
--unzip-setuptools --setuptools ~/virtualenvs/project_name`
That results in this error:
`The executable /var/bin/python (from --python=/var/bin/python) does not
exist`
I already have python installed, what does this even mean? I am also confused
about this syntax, `--python=/var/bin/python`, was that a relative path that I
should have switched out "python=/" for something else? what does the "=/"
actually represent?
Am I running the command in the wrong folder? I have tried running it in both
the outer project_name folder, containing a subfolder of the same name, and
also, inside that subfolder (which contains the manage.py). However, I can't
find the var/bin/... paths anywhere in either folder. Where should the bin
paths be located?
Any help or insights would be much appreciated, thanks!
Answer: If you are new to virtual environments, these are the steps I would take to
install a virtual environment. I hope this helps.
**Setuptools**
First to check if you already have it installed type the following:
python
>>>import setuptools
If you get another >>> then you have it installed, otherwise you'll get an
error. If you happen to blow up setuptools, here's how you reinstall it:
http://pypi.python.org/pypi/setuptools
1. Download Python 2.7 egg
2. Change directory into new unzipped folder
3. Run the following command:
sudo sh ~/folder/you/downloaded/to/setuptools-0.6c11-py2.7.egg
**Virtialenvwrapper**
sudo pip install virtualenvwrapper
**Setup**
1. Create your directories
sudo mkdir /project_name
sudo chown -R yourusername:admin /project_name
2. Find virtualenvwrapper.sh to use in step 3 below, check the following paths:
/Library/Frameworks/Python.framework/Versions/2.6/bin/virtualenvwrapper.sh
/usr/local/bin/virtualenvwrapper.sh
3. Update your profile script (~/.bash_profile or ~/.profile) in a text editor,
adding the lines below at the bottom of the file. If you don't have either of
these files in your home directory, create a file named .bash_profile in your
home directory.
export WORKON_HOME=$HOME/.virtualenvs
source /insert/your/path/to/virtualenvwrapper.sh
4. Quit your Terminal app and restart it. You should see a bunch of folders get
created when you restart it. This will only happen once.
5. Make your environment
mkvirtualenv django
(django)$ <- now you are in your new virtualenv
6. To leave your environment:
(django)$ deactivate
7. To enter your environment, quit Terminal again to reset paths so we can test
our setup and move into your working directory to checkout a project:
workon django
(django)$ <- you are back in your environment
|
Intercept C function call from Python
Question: Is there a way to intercept a C function call on a binary (e.g: write to a
file) in Python?
I need to execute the binary on Linux from python and capture the output that
it writes to the log file. The log file name is unpredictable.
Answer: The way to go is to create a named pipe. If `logfile` is the name of your logfile:
import os
import subprocess
LOGFILE = 'somefile.log'
if not os.path.exists(LOGFILE):
    os.mkfifo(LOGFILE)   # create the FIFO the program will write its "log file" into
proc = subprocess.Popen([YOUR_PROGRAM])
# open() blocks until the program opens the FIFO for writing, and read()
# returns the data it writes, so nothing ever has to hit the disk.
with open(LOGFILE) as log:
    data = log.read()  # process your data
proc.wait()
|
Possible Import loop?
Question: I made some changes to my code, adding a model, and added to some imports and
now all of a sudden when I try to run a few management style commands I've
scripted they fail with the following traceback:
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_line
utility.execute()
File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 272, in fetch_command
klass = load_command_class(app_name, subcommand)
File "/usr/lib/python2.6/site-packages/django/core/management/__init__.py", line 75, in load_command_class
module = import_module('%s.management.commands.%s' % (app_name, name))
File "/usr/lib/python2.6/site-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
File "/home/site/contracts/core/management/commands/job.py", line 13, in <module>
from contracts import jobs
File "/home/site/contracts/jobs.py", line 8, in <module>
from contracts.imports import crrs, crrs_bids
File "/home/site/contracts/imports/crrs.py", line 17, in <module>
from contracts.core.models import Company, Contract, Bid, Department, Category
File "/home/site/contracts/core/models.py", line 26, in <module>
class Company(models.Model):
File "/usr/lib/python2.6/site-packages/django/db/models/base.py", line 243, in __new__
new_class._prepare()
File "/usr/lib/python2.6/site-packages/django/db/models/base.py", line 307, in _prepare
signals.class_prepared.send(sender=cls)
File "/usr/lib/python2.6/site-packages/django/dispatch/dispatcher.py", line 185, in send
response = receiver(signal=self, sender=sender, **named)
File "/usr/lib/python2.6/site-packages/simple_history/models.py", line 50, in finalize
history_model = self.create_history_model(sender)
File "/usr/lib/python2.6/site-packages/simple_history/models.py", line 77, in create_history_model
app = models.get_app(model._meta.app_label)
File "/usr/lib/python2.6/site-packages/django/db/models/loading.py", line 179, in get_app
self._populate()
File "/usr/lib/python2.6/site-packages/django/db/models/loading.py", line 78, in _populate
self.load_app(app_name)
File "/usr/lib/python2.6/site-packages/django/db/models/loading.py", line 99, in load_app
models = import_module('%s.models' % app_name)
File "/usr/lib/python2.6/site-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
File "/usr/lib/python2.6/site-packages/debug_toolbar/models.py", line 9, in <module>
dt_settings.patch_all()
File "/usr/lib/python2.6/site-packages/debug_toolbar/settings.py", line 215, in patch_all
patch_root_urlconf()
File "/usr/lib/python2.6/site-packages/debug_toolbar/settings.py", line 203, in patch_root_urlconf
reverse('djdt:render_panel')
File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py", line 503, in reverse
app_list = resolver.app_dict[ns]
File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py", line 329, in app_dict
self._populate()
File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py", line 267, in _populate
for pattern in reversed(self.url_patterns):
File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py", line 365, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py", line 360, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
File "/usr/lib/python2.6/site-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
File "/home/site/contracts/urls.py", line 11, in <module>
admin.autodiscover()
File "/usr/lib/python2.6/site-packages/django/contrib/admin/__init__.py", line 29, in autodiscover
import_module('%s.admin' % app)
File "/usr/lib/python2.6/site-packages/django/utils/importlib.py", line 40, in import_module
__import__(name)
File "/home/site/contracts/core/admin.py", line 3, in <module>
from contracts.core.models import Company, Contract, Bid, Department, Category
ImportError: cannot import name Company
I've tried adding try/except blocks around the imports in the jobs.py file
(that defines the command line commands), the views.py, the admins.py and the
crrs.py module that has the function I'm trying to run. When I try/except them
all out the script works but fails because the models aren't defined anywhere.
As well, when I use shell_plus and import the job file or the crrs file
directly I can execute the function I want to without any error. I've stepped
through all of the code that was changed between commits and there's no
obvious reason for this problem.
Any insight would be appreciated!
Answer: Your Company model, via a whole wack of imports, relies on your urls.py, which
relies on admin.py, which relies on Company. So you have a circular import.
I suggest attempting to resolve this by commenting out admin.autodiscover(),
but showing your django version would help determine if this is actually the
culprit.
Given that the version of Django is < 1.7, autodiscover() is still recommended. The next thing to check is whether the django-debug-toolbar auto patching is the problem; see the [installation instructions](http://django-debug-toolbar.readthedocs.org/en/1.0/installation.html), especially the part called 'WARNING...may cause circular imports'.
|
cx_freeze and pycrypto is missing modules?
Question: Here is my setup.py file for Python 3.3:
#/usr/bin/env python3
import sys
from cx_Freeze import setup, Executable
# Dependencies are automatically detected, but it might need fine tuning.
build_exe_options = {
"packages": [
"os","io","copy","struct","hashlib","random",
"urllib","pycurl","json","Crypto"
],
"includes": [ "urllib.parse", ],
"excludes": ["tkinter"],
"icon":"backup.ico"
}
setup( name = "BlindBackup",
version = "1.0",
description = "BlindBackup client",
options = {"build_exe": build_exe_options},
executables = [Executable("backup.py", base=None)])
I can execute "py -3 setup.py build_exe" but the exe won't work. By starting
the generated backup.exe I get this error message:
ImportError: No module named 'Crypto.Cipher'; Crypto is not a package
However, Crypto is a package! I have also tried to add these into the includes
section:
"includes": ["urllib.parse",
"Crypto","Crypto.Cipher","Crypto.Cipher.AES",],
But then I cannot even build the exe:
File "C:\Python33\lib\site-packages\cx_Freeze\dist.py", line 362, in setup
distutils.core.setup(**attrs)
File "C:\Python33\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "C:\Python33\lib\distutils\dist.py", line 929, in run_commands
self.run_command(cmd)
File "C:\Python33\lib\distutils\dist.py", line 948, in run_command
cmd_obj.run()
File "C:\Python33\lib\site-packages\cx_Freeze\dist.py", line 232, in run
freezer.Freeze()
File "C:\Python33\lib\site-packages\cx_Freeze\freezer.py", line 603, in Freeze
self.finder = self._GetModuleFinder()
File "C:\Python33\lib\site-packages\cx_Freeze\freezer.py", line 343, in _GetMouleFinder
finder.IncludeModule(name)
File "C:\Python33\lib\site-packages\cx_Freeze\finder.py", line 678, in IncludeModule
namespace = namespace)
File "C:\Python33\lib\site-packages\cx_Freeze\finder.py", line 386, in _ImportModule
raise ImportError("No module named %r" % name)
ImportError: No module named 'Crypto.Cipher'
Which makes no sense, because **there is** a module named Crypto.Cipher.
You can test the same setup.py script with python 3 - just create a backup.py
script and put this inside:
from Crypto.Cipher import AES
It has been suggested that I install precompiled voidspace modules ( see
[Error executing the result of cx_freeze using
pycrypto](http://stackoverflow.com/questions/19066437/error-executing-the-
result-of-cx-freeze-using-pycrypto) ) but it doesn't work either. I did not
want to write comment to a 7 month old question, maybe that is what I should
have done? Anyway, I have this problem now and I cannot fix this on my own.
Please help me!
Answer: Okay, I was silly. I had created a module in my project called "crypto.py". Under Linux this was treated as a different name, but under Windows (where filenames are case-insensitive) the package "Crypto" and the module "crypto" appeared to be the same. cx_Freeze confused them and tried to find the Cipher module under my crypto.py "package", which was a plain module instead.
I refactored my module to a different name and now it works!
|
How do I compare values in dictionaries
Question: Currently I'm working on a little project and I keep running into trouble with
this code.
import xmlrpclib
import glob
import os
from SimpleXMLRPCServer import SimpleXMLRPCServer
# keep track of all files in directory
fileList = {}
# Search for all files in directory
for file in glob.glob("*.txt"):
fileList[file] = "closed"
def openFile(filename):
if filename in fileList.keys(): # Check if file exists
value = fileList.get(filename)
if value == 'closed' # If closed then open
fileList[filename] = open(filename, 'r+b')
return "File open"
else # File must have been open
return "File is already open"
else # File doesn't exist, create one
# if file not found
fileList[filename] = open(filename, 'w') # Create file
return "File created"
def closeFile(filename):
if filename in fileList.keys(): # Check if file exists
fileList[filename].close() # close file
fileList[filename] = 'closed' # Change value
return "File closed"
else
return "file does not exist"
def readFile(filename):
if filename in fileList.keys(): # Check if file exists
return fileList[filename].read() # Return contents
else
return "File does not exist"
def writeFile(filename, content):
if filename in fileList.keys(): # Check if file exists
fileList[filename].write(content) # Write content to file
return "Wrtie successful"
else
return "File does not exist"
server = SimpleXMLRPCServer(("localhost", 8000)) # Giving server a port
print "Listening on port 8000..."
server.register_function(openFile,'openFile')
server.register_function(closeFile,'closeFile')
server.register_function(readFile, 'readFile')
server.register_function(writeFile, 'writeFile')
server.serve_forever()
For some reason I keep getting:
File "server.py", line 16
if value == 'closed' # If closed then open
^
SyntaxError: invalid syntax
As you can see the syntax is wrong, but I have no idea why, and any help would be greatly appreciated.
I have tried removing the comment, but I get the same error message with the arrow pointing at the last character of the if statement.
I'm new to Python but have a year of experience with Java, so if this also happens in Java I would probably understand it better in Java terminology.
Answer: You forgot the colon:
if value == 'closed':
                     ^-----colon
The same applies to every `if`, `else`, and `elif` in your script: Python requires a colon at the end of each of those lines, and several of your `else` statements are missing them as well.
|
copying or referencing objects in python
Question: I was doing some problems in Python, specifically this one:
> Implement an algorithm to delete a node in the middle of a singly linked
> list, given only access to that node
The solution is this:
def deleteNode(linkedlist, node):
if node.next != None:
node.value = node.next.value
node.next = node.next.next
else:
node.value = None
I've done some Java before, and there `A = B` with objects means A refers to B. According to this problem, does the Python expression `A = B` mean that **A deep-copies B?**
Answer: No. `A = B` works pretty much the same in Python as it does in Java. (There
are a few differences in certain advanced use cases, but those aren't
important here.)
This function doesn't actually delete the node. It cheats; the next node's
data gets copied into this node, and the next node is deleted. As long as no
one else was relying on the identity of specific nodes in the list, and as
long as there is a next node, it works okay. (The `else` that looks like it
handles the case where there is no next node is actually bugged; instead of
deleting the data from the list, it replaces the data with a spurious `None`
value.)
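A tiny demonstration that assignment only rebinds a name to the same object, exactly as in Java:
class Node(object):
    def __init__(self, value, next=None):
        self.value = value
        self.next = next
a = Node(1)
b = a            # b now refers to the very same Node object as a
b.value = 99
print a.value    # 99 -- nothing was copied
print a is b     # True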
|
integer division gives different result in CPython 2.7 and Spyder
Question: I have encountered a quite weird case in Python.
In Spyder:
>>> 274/365
0.7506849315068493
>>> sys.version
'2.7.6 (default, Dec 20 2013, 14:08:04) [MSC v.1700 64 bit (AMD64)]'
>>>
However in command line it returns 0.
>>> 274/365
0
>>> 274/365 * 1.0
0.0
>>> 274/365.0
0.7506849315068493
Same version of Python.
Could anyone tell me what is wrong here? Do I need to put some other options ahead of the program? This is really frustrating, since my code gave weird results when I called it through the command line.
Answer: Spyder executes `from __future__ import division` in its console.
This is discussed at
<https://code.google.com/p/spyderlib/issues/detail?id=1646> \- it looks like
this will be deactivated by default to avoid confusion.
|
Python reports different "java -version" from Windows shell
Question: When I run "java -version" from the windows command line it says:
> java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
> C:\Windows\System32\java.exe -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
...But in Python:
> python
Python 2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> subprocess.check_output(['java','-version'])
java version "1.6.0_18"
Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode, sharing)
Answer: It turns out that python (perhaps because I've installed a 32 bit version) is insisting on using a 32 bit JRE. I have Java 7 64 bit and Java 6 32 bit, so python opted for the latter. A likely reason: a 32-bit process is subject to WOW64 file system redirection, so when it resolves C:\Windows\System32\java.exe it actually gets the 32-bit launcher under C:\Windows\SysWOW64.
|
python + wsgi on a multi-threaded web-server: is this a race condition?
Question: Suppose that I've written a wsgi `application`. I run this application on
`Apache2` on `Linux` with multi-threaded `mod-wsgi` configuration, so that my
application is run in many threads per single process:
WSGIDaemonProcess mysite processes=3 threads=2 display-name=mod_wsgi
WSGIProcessGroup mysite
WSGIScriptAlias / /some/path/wsgi.py
The application code is:
def application(environ, start_response):
from foo import racer
status = '200 OK'
response_headers = [('Content-type', 'text/plain')]
start_response(status, response_headers)
return [racer()] #call to racer creates a race condition?
module foo.py:
a = 1
def racer():
global a
a = a + 1
return str(a)
Did I just create a race condition with variable `a`? I guess, `a` is a
module-level variable, that exists in `foo.py` and is the same (shared) among
threads?
More theoretical questions derived from this:
1. Concurrent threads within the same process access and modify the same `a` variable so my example is not thread-safe?
2. If my web-server is `Apache`, each thread of my application on Linux is created on C-level with `pthreads` API and the function, which the `pthread` must execute is some kind of python interpreter's main function? Or does Apache protect me somehow from this error?
3. What if I were running this on a python-written web-server like `Tornado`'s `HTTPServer`? Web server, written in python, implements threads as python-level `threading.Thread` objects, and runs `application` function in each thread. So, I suppose it's a race condition? (I also suppose, in this case I can abstract from underlying C-level `pthreads` below `threading.Thread` implementation and worry only about python functions, because the interpreter won't allow me to modify C-level shared data and screw its functioning. So the only way to break thread-safety for me is to deal with global variables? Is that right?)
Answer: Yes, you have a race condition there, but it's not related to the imports. The
global state in `foo.a` is subject to a data race between `a + 1` and `a =
...`; since two threads can see the same value for `a`, and thus compute the
same successor.
The import machinery itself does protect against duplicate imports by multiple
threads, by means of a process wide lock (see
[`imp.lock_held()`](https://docs.python.org/2/library/imp.html#imp.lock_held)).
Although this could, in theory, lead to a deadlock, this almost never happens,
because few python modules lock other resources at import time.
This also suggests that it's probably safe to modify `sys.path` at will; since
this usually happens only at import time (for the purpose of additional
imports), and so that thread is already holds the import lock, other threads
cannot cause imports that would also modify that state.
Fixing the race in `racer()` is quite easy, though:
import threading
a = 1
a_lock = threading.Lock()
def racer():
global a
with a_lock:
my_a = a = a + 1
return str(my_a)
which will be needed for any global, mutable state in your control.
|
unable to login into admin in Django unit tests but can login in dev server?
Question: I am facing a strange problem. I am able to log in to the admin by running `python manage.py runserver` and giving the correct credentials, but my test fails when I give the exact same credentials that I used in the development server.
tutorial](http://www.realpython.com/blog/python/django-1-6-test-driven-
development/#.U3EJLK1dWrh).
Here is the code:
class AdminTest(LiveServerTestCase):
fixtures = ['admin.json']
def setUp(self):
self.browser = webdriver.Firefox()
self.browser.maximize_window()
def tearDown(self):
self.browser.quit()
def test_admin_site(self):
# user opens web browser, navigates to admin page
self.browser.get(self.live_server_url + '/admin/')
body = self.browser.find_element_by_tag_name('body')
self.assertIn('Django administration', body.text)
# users types in username and passwords and presses enter
username_field = self.browser.find_element_by_name('username')
username_field.send_keys('admin')
password_field = self.browser.find_element_by_name('password')
password_field.send_keys('admin')
password_field.send_keys(Keys.RETURN)
time.sleep(3)
# login credentials are correct, and the user is redirected to the main admin page
body = self.browser.find_element_by_tag_name('body')
self.assertIn('Site administration', body.text)
The test fails with error message:
AssertionError: 'Site administration' not found in u'Django administration\nPlease enter the correct username and password for a staff account. Note that both fields may be case-sensitive.\nUsername:\nPassword:\n '
PS: I use `Postgres` for the backend database. All the tutorials I see use `sqlite3`.
Answer: You're starting from a fresh test database so your admin user does not exist
yet. You need to create it manually in the test case.
This should work:
from django.contrib.auth.models import User
User.objects.create_superuser('admin', '[email protected]', 'admin')
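For example, it can go at the top of `setUp()` so every test starts with a fresh superuser (reusing the imports from the question; the email address is only a placeholder):
from django.contrib.auth.models import User
class AdminTest(LiveServerTestCase):
    def setUp(self):
        # placeholder email address; any value works for a test superuser
        User.objects.create_superuser('admin', 'admin@example.com', 'admin')
        self.browser = webdriver.Firefox()
        self.browser.maximize_window()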
|
Why are my locals not being updated outside rof?
Question: After experimenting with trying to implement C-for loops in Python, the
following function was developed:
import sys
def rof(init, cond, post):
init, cond, post, context = compile(init, '<rof>', 'exec'), \
compile(cond, '<rof>', 'eval'), \
compile(post, '<rof>', 'exec'), \
sys._getframe(1)
context = context.f_globals, context.f_locals
exec(init, *context)
while eval(cond, *context):
yield None
exec(post, *context)
As any programmer knows, the new function needed to be tested to make sure it
works:
**Setup**
class Employee:
def __init__(self, employee_id, category, hired, salary, years):
vars(self).update(locals())
def __repr__(self):
return '{}({})'.format(self.__class__.__name__,
', '.join(map(repr, self)))
def __iter__(self):
yield self.employee_id
yield self.category
yield self.hired
yield self.salary
yield self.years
database = [Employee(123, 'P', 2014, 2000, 0),
Employee(234, 'F', 2000, 20000, 14),
Employee(123, 'F', 2010, 10000, 4)]
The code runs without errors in some cases (such as the one below):
**Trial 1**
for _ in rof('a = 0', 'a < len(database)', 'a += 1'):
employee_id = database[a].employee_id
for _ in rof('b = len(database) - 1', 'b > a', 'b -= 1'):
if database[b].employee_id == employee_id:
print(database[b], 'is being removed.')
del database[b]
However, it does not work when the loops are in a separate function.
**Trial 2**
def remove_duplicates(database):
a = b = int
for _ in rof('a = 0', 'a < len(database)', 'a += 1'):
employee_id = database[a].employee_id
for _ in rof('b = len(database) - 1', 'b > a', 'b -= 1'):
if database[b].employee_id == employee_id:
print(database[b], 'is being removed.')
del database[b]
remove_duplicates(database)
An error is generated instead (`TypeError: list indices must be integers,
not type`).
* * *
We can all agree that this code in not Pythonic, but can anyone identify what
is causing the problem and how to fix it?
Answer: In Python 3 it is not possible to create new local variables through `locals()`, because the set of local variables is determined at compile time. In particular, if you modify `remove_duplicates` so that it does not have the `a = b = int` line, Python does not consider these names to refer to locals at all but to global variables; with that line present, they are compiled as locals.
Also, changing the locals is not possible through the frame object, as in Python 3 the local variables are no longer stored in a dictionary. Instead, on CPython 3 the `frame.f_locals` access creates a copy of the variables using `PyFrame_FastToLocals`, and it is normally a one-way trip: you can read the values of the variables, but no changes are propagated back, so `a` and `b` keep the value `int`. Module-level global variables, however, are still stored in a dictionary that is directly accessible through `frame.f_globals`, and that dictionary is open for changes.
However, there is [a blog post](http://pydev.blogspot.fi/2014/02/changing-
locals-of-frame-frameflocals.html) by the PyDev maintainer on how to achieve
this on CPython 3. Thus the following `rof` implementation seems to do the
trick _for me_ :
    import ctypes
    import sys

    def apply(frame):
        ctypes.pythonapi.PyFrame_LocalsToFast(ctypes.py_object(frame), ctypes.c_int(0))
def rof(init, cond, post):
init, cond, post, context = compile(init, '<rof>', 'exec'), \
compile(cond, '<rof>', 'eval'), \
compile(post, '<rof>', 'exec'), \
sys._getframe(1)
exec(init, context.f_globals, context.f_locals)
apply(context)
while eval(cond, context.f_globals, context.f_locals):
apply(context)
yield None
exec(post, context.f_globals, context.f_locals)
apply(context)
I think this code is an abomination if anything, and recommend that instead
of it, the hypothetical programmers learn how to change a C for loop
into a C while loop... and work it into Python from there. And it still cannot
work without giving initial values to these variables within the function body
anyway.
Thus I propose an alternative `rof` implementation:
def rof(init, cond, post):
print(init)
print('while {}:'.format(cond))
print(' # code goes here')
print(' ' + post)
rof('b = len(database) - 1', 'b > a', 'b -= 1')
prints:
b = len(database) - 1
while b > a:
# code goes here
b -= 1
which is what ought to be written anyway, though there is not much wrong in
this case with:
for a in range(len(database)):
for b in range(len(database) - 1, a, -1):
...
|
Printing a file and configure printer settings
Question: I'm trying to code a printer automation using Python on Windows, but can't get
it done.
I'm not really understanding the topic and I'm a little surprised - a "simple"
way to get this done doesn't seem to exist..? There are so many APIs that
allow access to common things in a nice and easy way, but printing seems to be
something "special"..?
Here is what i have and what i want to do:
* There is a PDF file. The PDF already exists, i do not want to create PDFs or any other filetype. I like to print this PDF file. One file at a time.
* The file could be in landscape and portrait layout. The file could have one of this sizes: A4, A3, A2, A1 and A0.
* I like to print the file using a normal, "physical" printer. The printer is a network device and connected using its IP. There are various network printers and i'd like to be able to use more than one of them. Some are just small A4-printers, some are big office devices (all-in-one scan, copy, print, ...) - and there are big plotters, too (up to A0 sized paper).
* I'd like to code: "Print this PDF file on this printer".
* I like to configure the printing size. I'd like to print the PDF "as it is", in its original size - but i want to be able to print big formats on small paper sizes, too. Like, the PDF itself is an A0 size, but i want to print it on A3 paper. Or the original PDF size is A2 and i want to print it on A4.
* I'd like to use that on Windows 7 computers (SP1, 64bit). And i'm trying to code that in python. I'm using python 2.7, as i'm using some third-party-modules that are not available in python 3. In general, every working solution that can be triggered through a python script is welcome.
To me, that doesn't seem to be a very "complex" task to do. Doing that "by
hand" is very easy and straightforward - select the document, start printing,
select the printer, select the paper size - and print.
Doing this by code seems to be rather difficult. Here is what I've come across
so far.
* There are various programs that could be used for command line printing, programs like "Acrobat Reader", "Foxit Reader" or similar. While it works perfect to print using the commands those programs provide, it's not possible to access the printer settings to configure the paper size.
* There are special command line printing tools, but i couldn't find any useful freeware. I have tried the "VeryPDF" command line tool, but had some problems using it when it comes to paper sizes. While there is perfect support for a whole range of letter-formats and various other things, the paper sizes i need (A4 to A0) somehow aren't supported. There are presets for A4 and A3, those work. The tool has the option to use custom paper sizes by just passing the measurements (in/pt/mm) - but that didn't work as the examples show, it always prints to A4 when using this method.
* I've found the win32-package for python, including win32print. I don't get this thing. The API provides functions to find, add or remove printers, list a printer queue, start and stop printer jobs and so on - but no simple possibility to print a file. It seems like this API could be used to add a printer job by creating the printing data by python coding, push some text and/or graphics in something like a "file" and send that to the printer. In a format the printer already understands.
And when using this win32print module, I can't get it to work correctly. Here
is an example snippet I tried to use:
from win32print import *
printer = GetDefaultPrinterW()
handle = OpenPrinter(printer)
info = GetPrinter(handle, 2)
cap = DeviceCapabilities(info['pPrinterName'], info['pPortName'], DC_PAPERS)
ClosePrinter(handle)
...as described here:
<http://timgolden.me.uk/pywin32-docs/win32print__DeviceCapabilities_meth.html>
But that just returns:
NameError: name 'DC_PAPERS' is not defined
That happens whenever I try to use a function that needs such constants
passed. Not a single constant is defined on my system and I don't know why.
But I don't know if I could use this API even if it worked correctly; all the
usage examples just show how to send a text string to a printer. That's
not what I need, that's not what I want to know.
Is there any working solution to print a file and to set the printing size in
a simple and straightforward way?
Ideas and hints are welcome!
Answer: Look at the "[How To
Print](http://timgolden.me.uk/python/win32_how_do_i/print.html)" page on Tim
Golden's website. This page was the same in 2014 when you asked your question.
There's an example printing a JPG file that also manipulates the printer
settings.
That's not quite a perfect example for what you're doing but it should get you
on the right track.
DC_PAPERS is defined in win32con:
import win32con
x = win32con.DC_PAPERS
How you're supposed to know that, I have no idea. Perhaps it's "obvious" to
those already familiar with the Win32API outside of Python...
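With that import in place, the snippet from the question might become something
like the following (a sketch only; it lists the supported paper sizes rather
than printing a file):
    import win32print
    import win32con

    printer = win32print.GetDefaultPrinter()
    handle = win32print.OpenPrinter(printer)
    info = win32print.GetPrinter(handle, 2)
    papers = win32print.DeviceCapabilities(info['pPrinterName'],
                                           info['pPortName'],
                                           win32con.DC_PAPERS)
    win32print.ClosePrinter(handle)
    print(papers)   # numeric DMPAPER_* codes, e.g. 9 is A4, 8 is A3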
|
win32com in Python Spyder console results in an error
Question: I'm just running the following code, straight from [this
documentation/tutorial](http://pythonexcels.com/python-excel-mini-cookbook/).
import win32com.client as win32
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = excel.Workbooks.Add()
wb.SaveAs('add_a_workbook.xlsx')
excel.Application.Quit()
And got this:
execfile(filename, namespace)
File "C:/Users/Username/Desktop/script.py", line 106, in <module>
wb = excel.Workbooks.Add()
File "C:\Users\Username\AppData\Local\Temp\gen_py\2.7\00020813-0000-0000-C000-000000000046x0x1x7\Workbooks.py", line 34, in Add
ret = self._oleobj_.InvokeTypes(181, LCID, 1, (13, 0), ((12, 17),),Template
TypeError: an integer is required
Does anyone have any idea why? I've tried using an xlsx vs. xls file, and
changing the file address, and trying multiple examples from that tutorial,
and they all give me similar errors, and I have no idea why.
I can get as far as `wb = excel.Workbooks.Add()` before I get the `TypeError:
an integer is required` warning, and if I try `wb = excel.Workbooks.Add`, it
will run and I won't get the error, but I can't do anything from there on.
Does anyone know what this is? Thanks in advance.
[Edit:]
I tried a Word file for comparison and it works fine.
Does anyone know why one of these works and one doesn't?
word = win32.Dispatch('Word.Application')
word.Documents.Open('C:\Users\username\Desktop\test.docx')
excel = win32.Dispatch('Excel.Application')
excel.Workbooks.Open('C:\Users\username\Desktop\output.xlsx')
[Edit 2:]
Okay, I found the problem is with the Spyder IDE. If I write the same code in
Anaconda, it'll work fine. Does anyone know why Anaconda works but Sypder
doesn't? I checked the system paths and they're identical, and even trying to
execute a .py program in Anaconda doesn't work.
Answer: I seem to be the only person on the internet with this problem, but my
workaround was to use a different Spyder Python interpreter.
`Python interpreter` gave me all sorts of errors doing pretty much every
win32com excel command, but `IPython console` works fine. No idea why.

|
Countdown in Python using permutations and lambda
Question: I am trying to make a program that solves the following with `permutations`
and `lambda`:
You pick 5 numbers and a random number is generated; the aim is to use those 5
numbers to reach the target number. You are allowed to use each number once
with as many operators as you want (`+-*/`). I want the program to `print()`
all of the solutions.
This is the code I have created so far:
from itertools import permutations
import operator
num1=3
num2=5
num3=7
num4=2
num5=10
class Infix:
def __init__(self, function):
self.function = function
def __ror__(self, other):
return Infix(lambda x, self=self, other=other: self.function(other, x))
def __or__(self, other):
return self.function(other)
def __rlshift__(self, other):
return Infix(lambda x, self=self, other=other: self.function(other, x))
def __rshift__(self, other):
return self.function(other)
def __call__(self, value1, value2):
return self.function(value1, value2)
"""what the class infix does is it allows me to substitute operators such as +-*/ to functions as follows:"""
multiply=Infix(lambda x,y: x*y) #this is a way of telling it that whenever i call x multiply y is the same as x*y
add=Infix(lambda x,y: x+y) #same here just for adding
minus=Infix(lambda x,y: x-y)
divide=Infix(lambda x,y: x/y)
"""the way infix can be called is if i say 3 |add| 4, which allows me to substitute
variables as the operators"""
target = 50
for w,x,y,z in permutations((|multiply|,|add|,|minus|,|divide|), 4): #telling it to substitute operators to the variables
for a,b,c,d,e in permutations(num1,num2,num3,num4,num5), 5):
if a(w)b(x)c(y)d(z)e == target: #works out the sum
print(a(w)b(x)c(y)d(z)e) #prints it if it equals my target
The only error in this code is the `for` loops, because I don't know how to
substitute both operators and numbers at the same time; all the other parts of
the code are absolutely fine.
The expected output is all answers that work, e.g. `3*2+5-7=4` if `target==4`.
What I asked this to do is run through all of the numbers with all of the
operators again and again to find answers that match the target, and then
print them out, but I am having difficulties trying to substitute the numbers
together with the operators.
I thought that `a(X)b(y)...` would work because if you substitute it, it is
`a(|add|)b(|multiply|)d...` and so on, but it turns out that it doesn't.
None of these work but I am trying to find a solution that does. It needs to
loop through the numbers and the operators at the same time and I don't know
how to do it!
## Error when running:

## `Infix` working correctly:

Answer: Finally, I think we get to the bottom of it. You are expecting the "bitwise
OR" operators `|` to somehow modify the `Infix` objects, so that when you do
permutations((|multiply|,|add|,|minus|,|divide|), 4)
they "stay with" the objects. That can't happen; for example, you wouldn't do:
x, y = 1, 2
a = x+
print(ay)
and expect `3`. Instead, you need to do:
permutations((multiply, add, minus, divide), 4)
i.e. actually shuffle the `Infix` instances, then apply the operators later,
when calculating the comparison with `target`:
if a |w| b |x| c |y| d |z| e == target:
There are a few other minor errors, but that should allow you to move
forwards.
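Putting the two corrections together, the nested loops from the question might
look like this (a sketch; note that `permutations` never repeats an operator,
so each of the four operators is used exactly once per expression):
    for w, x, y, z in permutations((multiply, add, minus, divide), 4):
        for a, b, c, d, e in permutations((num1, num2, num3, num4, num5), 5):
            result = a |w| b |x| c |y| d |z| e   # evaluated left to right
            if result == target:
                print(a, b, c, d, e, '->', result)
If operators should be reusable, `itertools.product((multiply, add, minus,
divide), repeat=4)` would be the drop-in replacement for the outer loop.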
|
Python pygame mac import
Question: I installed pygame from `pygame-1.9.1release-python.org-32bit-
py2.7-macosx10.3.dmg`. I have Python 2.7.6 and OSX 10.9.2. For some reason,
when I do the following I get an `ImportError`:
>>> import pygame
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
import pygame
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pygame/__init__.py", line 95, in <module>
from pygame.base import *
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pygame/base.so, 2): no suitable image found. Did find:
/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pygame/base.so: no matching architecture in universal wrapper
How can I get pygame to work? And is there a way to get pygame for Python 3.4?
I currently have both Python 2.7.6 and Python 3.4 installed.
Answer: Perhaps try using [Macports](https://www.macports.org/) to install pygame.
The package you installed was built for OSX 10.3, which used the PowerPC
architecture, whereas you are running OSX 10.9, which uses Intel.
|
Defining a variable after calling it?
Question: I've run into a problem in terms of assigning objects. I want to be able to
assign the object via a function, but because the variable `name` is not
defined until during the function, when I first call the function (with name =
john) it says: `NameError: name 'john' is not defined`.
I'm using this script to assist me while I run a game session, and I want to
be able to create new NPC's at will, but is there anyway to do that without
first using: `>>> blahblah = Character()` in a python prompt?
Sorry for the block of code and I'm sorry if there is a simple solution.
import random
import math
combat_participants = []
npc_list = []
class Character():
"""Basic class for character statistics
"""
def get_stats(self):
"""Refreshes the stats after new assignment
"""
self.hp = self.st
self.will = self.iq
self.per = self.iq
self.fp = self.ht
self.bspeed = (self.ht + self.dx) / 4
self.dodge = math.floor(self.bspeed + 3)
self.bmove = math.floor(self.bspeed)
self.blift = round(self.st*self.st/5)
def get_status(self):
"""Checks the status of the character
"""
if self.hp < -self.hp:
self.status = 'Dead'
elif self.hp < 0:
self.status = 'Unconscious'
else:
self.status = 'Alive'
#Primary Attributes
st = 10
dx = 10
iq = 10
ht = 10
#Secondary Attributes
hp = st
will = iq
per = iq
fp = ht
bspeed = (ht + dx) / 4
dodge = math.floor(bspeed + 3)
bmove = math.floor(bspeed)
blift = round(st*st/5)
#Other
status = 'Alive'
weapon = 'None'
shield = 'None'
char_name = ''
def create_npc(name, str_name, level, combat):
"""
/long and irrelevant/
"""
name = Character()
name.char_name = str_name
#Randomly assigning attributes
prim_attr_list = [name.st, name.dx, name.iq, name.ht]
for attr in prim_attr_list:
attr = random.randint(7+level,9+level)
name.get_stats(name)
#Randomly assigns a weapon
temp_weapon_list = []
for weapon in weapon_list:
if weapon.st <= name.st:
temp_weapon_list += weapon
name.weapon = temp_weapon_list[random.randint(0,len(temp_weapon_list)-1)]).wep_name
#Adds them to the npc list
global npc_list
npc_list += [name]
#Adds them to combat list if they are in combat
global combat_participants
if combat:
combat_participants += [name]
EDIT: After calling the function with create_npc('john', 'John Smithson
II', 2, True):
Once I have assigned the class object as `'john'` rather than `john`, I don't
know how to access it. For instance: `>>> john.char_name` will give me this
error `NameError: name 'john' is not defined`, and `>>> 'john'.char_name`
gives me `AttributeError: 'str' object has no attribute 'char_name'`
Answer: You don't need to pass `name` to the function; remove that param from the
function definition and your code should work fine.
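Since the created characters all end up in `npc_list`, one way to get hold of a
particular NPC afterwards is to look it up by its `char_name` instead of
expecting a variable called `john` to exist (a sketch, reusing the names from
the question):
    def find_npc(str_name):
        """Return the first NPC in npc_list whose char_name matches, else None."""
        for npc in npc_list:
            if npc.char_name == str_name:
                return npc
        return None

    # after create_npc('John Smithson II', 2, True) has run:
    # john = find_npc('John Smithson II')
    # print(john.char_name)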
|
How to read dynamic values on a site?
Question: What's the best way to scrape dynamic data from a site?
I want to read the ticker value on the top of this page:
[https://www.google.com/finance?q=INDEXBOM%3ASENSEX&ei=M1B1U_iEG8OPkAWhuYGIDA](https://www.google.com/finance?q=INDEXBOM%3ASENSEX&ei=M1B1U_iEG8OPkAWhuYGIDA)
using Python. I can't seem to figure out a clear way to do it.
Answer: Since Google Finance API was [shut
down](https://developers.google.com/finance/), here's an option to get it via
[`urllib2`](https://docs.python.org/2/library/urllib2.html) and
[`BeautifulSoup`](https://beautiful-soup-4.readthedocs.org/en/latest/):
>>> from urllib2 import urlopen
>>> from bs4 import BeautifulSoup
>>> url = 'https://www.google.com/finance?q=INDEXBOM%3ASENSEX&ei=M1B1U_iEG8OPkAWhuYGIDA'
>>> soup = BeautifulSoup(urlopen(url))
>>> soup.find('div', id='price-panel').span.text.strip()
u'25,050.96'
Just as an alternative, consider using [Yahoo Finance
API](https://finance.yahoo.com/q?s=API), see
[`ystockquote`](https://pypi.python.org/pypi/ystockquote) module.
|
urllib2 returning nothing in python
Question: I am confused! Can anybody tell me where the problem is? This code used
to work properly, but since yesterday it has started returning nothing! I did not
make any changes to it! Does anybody have any idea?
import re
from re import sub
import time
import cookielib
from cookielib import CookieJar
import urllib2
from urllib2 import urlopen
import difflib
import requests
def twitParser():
try:
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
res=opener.open('https://twitter.com/haberturk')
html=res.read()
splitSource=re.findall(r'<p class="js-tweet-text tweet-text">(.*?)</p>',html)
print len(splitSource)
for item in splitSource:
aTweet = re.sub(r'<.*?>','',item)
print aTweet
except Exception, e:
print str(e)
print 'ERROR IN MAIN TRY'
twitParser()
Answer: If your code did not change, then probably something else did:
this tag does not exist anymore:
<p class="js-tweet-text tweet-text">
Instead there is something like:
ProfileTweet-text js-tweet-text u-dir
Although it is possible to get what you want using a regexp, do not use it; use
an HTML parser instead:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html)
ptags = soup.find_all("p")
texts = [p.text for p in ptags if "js-tweet-text" in p["class"]]
You should probably split up the function: first make sure you get the HTML, then
check that you find p tags, then check whether any of them meet your criteria.
As Wooble said, use the Twitter API instead; these companies offer it so you
don't have to scrape and cost them resources.
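A sketch of that split, reusing the opener from the question (the class markup
on twitter.com may of course change again):
    from cookielib import CookieJar
    import urllib2
    from bs4 import BeautifulSoup

    def fetch_html(url):
        cj = CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        return opener.open(url).read()

    def extract_tweets(html):
        soup = BeautifulSoup(html)
        # p.get('class', []) avoids a KeyError on <p> tags without a class
        return [p.text for p in soup.find_all('p')
                if 'js-tweet-text' in p.get('class', [])]

    for tweet in extract_tweets(fetch_html('https://twitter.com/haberturk')):
        print(tweet)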
|
regex to match a word and everything after it?
Question: I need to dump some HTTP data as a string from an HTTP packet which I have in
string format. I am trying to use the regular expression below to match
'data:' and everything after it, but it's not working. I am new to regex and Python.
>>>import re
>>>pat=re.compile(r'(?:/bdata:/b)?\w$')
>>>string=" dnfhndkn data: ndknfdjoj pop"
>>>res=re.match(pat,string)
>>>print res
None
Answer: [`re.match`](https://docs.python.org/2/library/re.html#re.match) matches only
at the beginning of the string. Use
[`re.search`](https://docs.python.org/2/library/re.html#re.search) to match at
any position. (See [`search()` vs.
`match()`](https://docs.python.org/2/library/re.html#search-vs-match))
>>> import re
>>> pat = re.compile(r'(?:/bdata:/b)?\w$')
>>> string = " dnfhndkn data: ndknfdjoj pop"
>>> res = re.search(pat,string)
>>> res
<_sre.SRE_Match object at 0x0000000002838100>
>>> res.group()
'p'
To match everything, you need to replace `\w` with `.*`. Also remove `/b` (the word-boundary escape is `\b`, not `/b`).
>>> import re
>>> pat = re.compile(r'(?:data:).*$')
>>> string = " dnfhndkn data: ndknfdjoj pop"
>>> res = re.search(pat,string)
>>> print res.group()
data: ndknfdjoj pop
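If only the text after the marker is needed, a capturing group keeps the
`data:` prefix out of the result (a small sketch):
    >>> m = re.search(r'data:\s*(.*)$', string)
    >>> m.group(1)
    'ndknfdjoj pop'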
|
pygame window does not remain fullscreen
Question: I am making a game with the pygame module and now I have a problem. The program
itself works fantastically, but the fullscreen mode which I wanted to enable does
not work. I made a test program for fullscreen mode which works perfectly, but
when I tried to make the game fullscreen, the display behaves very strangely. First
the program starts. You can see it entering fullscreen mode and displaying
a text saying: "Loading..." Then the window disappears and reappears in its
original non-fullscreen size. The explorer bar at the bottom of the screen is
displayed twice and then the second explorer bar disappears. The game then runs
in non-fullscreen mode. This is the program I use:
import pygame, sys, os
from pygame.locals import *
pygame.mixer.pre_init(44100, -16, 4, 2048)
pygame.init()
DISPLAYSURF = pygame.display.set_mode((476, 506), FULLSCREEN)
pygame.display.set_caption('Title of the game')
DISPLAYSURF.fill((128,128,128))
FONT = pygame.font.Font('freesansbold.ttf',20)
LoadingText = FONT.render('Loading...', True, (0,255,0))
LoadingRect = LoadingText.get_rect()
LoadingRect.center = (238,253)
DISPLAYSURF.blit(LoadingText, LoadingRect)
pygame.display.update()
# These files will be created when needed. They are now removed to prevent interference later.
try:
os.remove('LOAD.txt')
except IOError:
pass
try:
os.remove('OPEN.txt')
except IOError:
pass
try:
os.remove('RUN.txt')
except IOError:
pass
try:
os.remove('TEMP.txt')
except IOError:
pass
# All parts of the program are split into small programs that are callable with a main function
import ROIM
import ROIM_CreateNewGame
import ROIM_LevelMenu
import ROIM_Menu
import ROIM_SmallMenus
import ROIM_GameIntroduction
import SetupController
# RUN.txt is a file that says wich program to run
Run = 'Menu'
RUN = open('RUN.txt','w')
RUN.write('RUN\n')
RUN.write(Run)
RUN.close()
ChangeRun = False
FPS = 35
fpsClock = pygame.time.Clock()
while True: # MAIN GAME LOOP
for event in pygame.event.get():
if event.type == QUIT:
pygame.quit()
sys.exit()
Preferences = open('Preferences.txt')
PreferencesLines = Preferences.read().split('\n')
x = 1
Volume = -1
Brightness = -1
for Item in PreferencesLines:
if Item == 'BRIGHTNESS':
Brightness = int(PreferencesLines[x])
if Item == 'VOLUME':
Volume = int(PreferencesLines[x])
x += 1
Preferences.close()
assert Volume != -1
assert Brightness != -1
# The colors will be changed to the right brightness.
GREEN = (0,255 * (Brightness / 100),0)
YELLOW = (255 * (Brightness / 100),255 * (Brightness / 100),0)
RED = (255 * (Brightness / 100),0,0)
BLUE = (0,0,255 * (Brightness / 100))
WHITE = (255 * (Brightness / 100),255 * (Brightness / 100),255 * (Brightness / 100))
BLACK = (0,0,0)
GREY = (128 * (Brightness / 100),128 * (Brightness / 100),128 * (Brightness / 100))
# Every small program gets the main variables and constants as arguments
if Run == 'Menu':
ROIM_Menu.RunMenu(FONT, ChangeRun, GREEN, YELLOW, RED, BLUE, WHITE, BLACK, GREY, DISPLAYSURF, Volume, Brightness, fpsClock, FPS)
elif Run == 'NewGame':
ROIM_CreateNewGame.RunNewGame(FONT, ChangeRun, GREEN, YELLOW, RED, BLUE, WHITE, BLACK, GREY, DISPLAYSURF, Volume, Brightness, fpsClock, FPS)
elif Run == 'Game':
ROIM.RunGame(FONT, ChangeRun, GREEN, YELLOW, RED, BLUE, WHITE, BLACK, GREY, DISPLAYSURF, Volume, Brightness, fpsClock, FPS)
elif Run == 'SmallMenu':
ROIM_SmallMenus.RunSmallMenu(FONT, ChangeRun, GREEN, YELLOW, RED, BLUE, WHITE, BLACK, GREY, DISPLAYSURF, Volume, Brightness, fpsClock, FPS)
elif Run == 'LevelMenu':
ROIM_LevelMenu.RunLevelMenu(FONT, ChangeRun, GREEN, YELLOW, RED, BLUE, WHITE, BLACK, GREY, DISPLAYSURF, Volume, Brightness, fpsClock, FPS)
elif Run == 'Introduction':
ROIM_GameIntroduction.RunIntro(FONT, ChangeRun, GREEN, YELLOW, RED, BLUE, WHITE, BLACK, GREY, DISPLAYSURF, Volume, Brightness, fpsClock, FPS)
elif Run == 'Setup':
SetupController.Run_Controller_Setup(FONT, ChangeRun, GREEN, YELLOW, RED, BLUE, WHITE, BLACK, GREY, DISPLAYSURF, Volume, Brightness, fpsClock, FPS)
else:
assert False
# Every program edits the RUN file before finishing
ChangeRun = False
RUN = open('RUN.txt')
assert RUN.readline() == 'RUN\n'
Run = RUN.readline().split('\n')[0]
RUN.close()
The game runs fine, but not in fullscreen mode. DISPLAYSURF is not edited in
the programs, which means that I do not call `pygame.display.set_mode()` again.
I use Windows 8 and Python 3.4. Is it because I pass the window object as an
argument? I have no clue what I did wrong.
Answer: You might need some additional flags passed in to the display surface's
.set_mode() function. The following works for me on Windows 7:
DISPLAYSURF = pygame.display.set_mode((SCREEN_WIDTH, SCREEN_HEIGHT), FULLSCREEN | HWSURFACE | DOUBLEBUF)
|
TemplateDoesNotExist at /polls/ - in Django Tutorial
Question: I have been trying out the Django tutorial (documentation) and am stuck with this
error for 2 days now. I will paste my views.py, settings.py and my directory
structure below.
Views.Py
from django.shortcuts import render
from django.http import HttpResponse
from django.template import RequestContext, loader
from polls.models import Poll
# Create your views here.
#def index(request):
#return HttpResponse("Hello, world. You are at the poll index.")
def index(request):
latest_poll_list = Poll.objects.order_by('-pub_date')[:5]
template = loader.get_template('polls/index.html')
context = RequestContext(request, {
'latest_poll_list': latest_poll_list,
})
return HttpResponse(template.render(context))
def detail(request,poll_id):
return HttpResponse("You're looking at the results of the poll %s." % poll_id)
def results(request, poll_id):
return HttpResponse("You're looking at the results of poll %s." % poll_id)
def vote(request,poll_id):
return HttpResponse("You're voting on poll %s." % poll_id)
Settings.Py
"""
Django settings for mysite project.
For more information on this file, see
https://docs.djangoproject.com/en/1.6/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.6/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'ma_x5+pnvp$o7#5g#lb)0g$sa5ln%k(z#wcahwib4dngbbe9^='
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'polls',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'mysite.urls'
WSGI_APPLICATION = 'mysite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.6/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'C://Python34/mysite/db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.6/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
TEMPLATE_DIRS = [os.path.join(BASE_DIR, 'templates')]
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.6/howto/static-files/
STATIC_URL = '/static/'
Urls.Py
from django.conf.urls import patterns, url
from polls import views
urlpatterns = patterns('',
# ex: /polls/
url(r'^$', views.index, name='index'),
# ex: /polls/5/
url(r'^(?P<poll_id>\d+)/$', views.detail, name='detail'),
# ex: /polls/5/results/
url(r'^(?P<poll_id>\d+)/results/$', views.results, name='results'),
# ex: /polls/5/vote/
url(r'^(?P<poll_id>\d+)/vote/$', views.vote, name='vote'),
)
This is the Error Traceback i get.
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/polls/
Django Version: 1.6.4
Python Version: 3.4.0
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'polls')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Template Loader Error:
Django tried loading these templates, in this order:
Using loader django.template.loaders.filesystem.Loader:
C:\templates\polls\index.html (File does not exist)
Using loader django.template.loaders.app_directories.Loader:
C:\Python34\lib\site-packages\django\contrib\admin\templates\polls\index.html (File does not exist)
C:\Python34\lib\site-packages\django\contrib\auth\templates\polls\index.html (File does not exist)
C:\Python34\mysite\polls\templates\polls\index.html (File does not exist)
Traceback:
File "C:\Python34\lib\site-packages\django\core\handlers\base.py" in get_response
114. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Python34\mysite\polls\views.py" in index
14. template = loader.get_template('polls/index.html')
File "C:\Python34\lib\site-packages\django\template\loader.py" in get_template
138. template, origin = find_template(template_name)
File "C:\Python34\lib\site-packages\django\template\loader.py" in find_template
131. raise TemplateDoesNotExist(name)
Exception Type: TemplateDoesNotExist at /polls/
Exception Value: polls/index.html
I believe the problem is with pointing to my index.html file, which is inside
/mysite/polls/templates/polls/index.html. This is my file structure:
mysite/polls/templates/polls/index.html.
It will be a great help if someone can solve this problem. Any help would be
appreciated. Thanks in advance!
Answer: Please try replacing your TEMPLATES setting with the code block I've written
below:
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': ['polls/templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.contrib.auth.context_processors.auth',
'django.template.context_processors.debug',
'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.static',
'django.template.context_processors.tz',
'django.contrib.messages.context_processors.messages',
],
},
},
]
I don't think your TEMPLATE_DIRS=[] setting is working properly. If you're on
1.9 then it's deprecated and being overwritten/ignored. If your templates
folder is in a different place, then just write something crazy in there and
look for it in the traceback, then compare that to your desired path and go
from there...
Hope this helped.
|
Pass existing Webdriver object to custom Python library for Robot Framework
Question: I am trying to create a custom Python library for Robot Framework, but I'm new
to Python and Robot and I'm not sure how to accomplish what I'm trying to do.
I want to pass the Webdriver object that Robot creates using Selenium2Library
to my custom Python library so that I could use the Webdriver's methods, such
as `find_element_by_id`. I've seen some suggestions about how to do it
[here](http://stackoverflow.com/questions/18220386/robotframework-
selenium2library-and-external-libraries-pass-webdriver) and
[here](https://groups.google.com/forum/#!topic/robotframework-
users/rayouGl1YC8), but they're for Java libraries - I can't find any Python
instructions.
How would I go about doing this in Python? Or would I want to do this
differently, without passing the Webdriver object?
Answer: There's nothing built into the library to let you do what you want _per se_.
However, you can create your own library that can access Selenium features.
There are two ways to accomplish this, both of which require creating your own
library in Python: subclass Selenium2Library, or get a reference to the
Selenium2Library instance.
## Creating a custom library that subclasses Selenium2Library
One way to access the internals of Selenium2Library is to write a library
class that inherits from Selenium2Library. When you do that, you have access
to everything in the original library. You can then return a reference to the
WebDriver object, or you can just write your own keywords in python.
As an example, here's a custom selenium library that has a new keyword that
will return the current WebDriver instance. It does this by calling the
private (to the original Selenium2Library) method `_current_browser`. Since
that's a private method, there's no guarantee it will stand the test of time,
but at the time that I write this it exists.
### Create a custom selenium library
First, create a new python file named CustomSeleniumLibrary.py. Put it where
robot can find it -- the easiest thing is just put it in the same folder as a
test suite that is going to use it. Put the following into that file:
from Selenium2Library import Selenium2Library
# create new class that inherits from Selenium2Library
class CustomSeleniumLibrary(Selenium2Library):
# create a new keyword called "get webdriver instance"
def get_webdriver_instance(self):
return self._current_browser()
### Create a testcase that uses the library
Next, write a test case that uses this instead of Selenium2Library. For
example:
*** Settings ***
| Library | CustomSeleniumLibrary.py
| Suite Teardown | close all browsers
*** Test Cases ***
| Example using custom selenium library
| | Open browser | http://www.example.com | browser=chrome
| | ${webdriver}= | Get webdriver instance
| | log | webdriver: ${webdriver}
### Run the test
Run the test as you would any other test. When it completes you should see
something like this in the log:
16:00:46.887 INFO webdriver: <selenium.webdriver.chrome.webdriver.WebDriver object at 0x10b849410>
### Using the object in a testcase
The cryptic `...<selenium....WebDriver object...>` message proves that the
variable actually holds a reference to the python WebDriver object. Using the
[extended variable
syntax](http://robotframework.googlecode.com/hg/doc/userguide/RobotFrameworkUserGuide.html?r=2.8.4#extended-
variable-syntax) of robot you could then call methods and access attributes on
that object if you want. I do not recommend doing it in this way, but I think
it's really interesting that robot supports it:
| | log | The page title is ${webdriver.title}
## Creating a custom library that references Selenium2Library
The second way to accomplish this is to use robot's method of getting an
instance of a library, at which point you can access the object however you
want. This is documented in the robot user guide; see [Getting active library
instance from Robot
Framework](http://robotframework.googlecode.com/hg/doc/userguide/RobotFrameworkUserGuide.html?r=2.8.4#getting-
active-library-instance-from-robot-framework) in the [Robot Framework User's
Guide](http://robotframework.googlecode.com/hg/doc/userguide/RobotFrameworkUserGuide.html?r=2.8.4).
For example, the get_library_instance keyword from the above example would
look like this:
from robot.libraries.BuiltIn import BuiltIn
def get_webdriver_instance():
se2lib = BuiltIn().get_library_instance('Selenium2Library')
return se2lib._current_browser()
Note that in this case you must include both the Selenium2Library _and_ your
custom library:
*** Settings ***
| Library | Selenium2Library
| Library | CustomSeleniumKeywords.py
| Suite Teardown | close all browsers
*** Test Cases ***
| Example using custom selenium keyword
| | Open browser | http://www.example.com | browser=chrome
| | ${webdriver}= | Get webdriver instance
| | log | webdriver: ${webdriver}
|
Is it possible to compare day + month(not year) against current day + month in python?
Question: I'm getting data in the format of 'May 10' and I am trying to figure out if
it's for this year or next. The data only covers a one-year span, so 'May 10' would
mean May 10 2015 while 'May 20' would be May 20 2014.
To do this, I wanted to convert the string into a date format and compare, but
without the year I'm getting this error: ValueError: unconverted data remains:
Here's a rough idea of my current approach:
if datetime.strptime(DATE_TO_COMPARE, '%b %d') > time.strftime("%d %m"): #need to figure out may 11 is for this year or next year. Year is not provided.
print datetime.strptime(DATE_TO_COMPARE, '%b %d'), ' > ', time.strftime("%d %m")
Is there a better way to do it? Right now I'm thinking of making a dictionary
of months, converting the month into a number, then appending the day to it
(i.e. Dec 5 = 512) and comparing that. But I'm not sure if there's a better way.
Answer: Parse, then replace the year in the result (with
[`date.replace()`](https://docs.python.org/2/library/datetime.html#datetime.date.replace))
and test against today, _not_ a string:
from datetime import date, datetime
today = date.today()
parsed = datetime.strptime(DATE_TO_COMPARE, '%b %d').date().replace(year=today.year)
if parsed > today:
# last year
parsed = parsed.replace(year=today.year - 1)
I used `date` objects here as the time of day shouldn't figure in your
comparisons.
Demo:
>>> from datetime import date, datetime
>>> today = date.today()
>>> DATE_TO_COMPARE = 'May 10'
>>> parsed = datetime.strptime(DATE_TO_COMPARE, '%b %d').date().replace(year=today.year)
>>> parsed
datetime.date(2014, 5, 10)
>>> parsed > today
False
>>> DATE_TO_COMPARE = 'May 20'
>>> parsed = datetime.strptime(DATE_TO_COMPARE, '%b %d').date().replace(year=today.year)
>>> parsed
datetime.date(2014, 5, 20)
>>> parsed > today
True
|
How do I make a spiral in Python?
Question: I want to make a function that takes a number and returns a spiral from 1 to
that number (in a 2-dimensional array). For example, if I give the number 25
to the function, it will return something like this:

I tried different ways but nothing worked out. I just can't figure it out.
Hope I explained myself properly.
Answer: Mostly the issue here is one of enumerating coordinates - match numbers to
coordinates, then print it out however you want.
Start by noticing the two fundamental patterns:
* (Direction) Move right, then down, then left, then up, then... (this is hopefully obvious)
* (Magnitude) Move one, then one, then two, then two, then three...
So with those rules, write a generator that yields `number, coordinates`
tuples.
It's clearest if you set up some helper functions first; I'll be extra
verbose:
def move_right(x,y):
return x+1, y
def move_down(x,y):
return x,y-1
def move_left(x,y):
return x-1,y
def move_up(x,y):
return x,y+1
moves = [move_right, move_down, move_left, move_up]
Easy enough, now the generator:
def gen_points(end):
from itertools import cycle
_moves = cycle(moves)
n = 1
pos = 0,0
times_to_move = 1
yield n,pos
while True:
for _ in range(2):
move = next(_moves)
for _ in range(times_to_move):
if n >= end:
return
pos = move(*pos)
n+=1
yield n,pos
times_to_move+=1
demo:
list(gen_points(25))
Out[59]:
[(1, (0, 0)),
(2, (1, 0)),
(3, (1, -1)),
(4, (0, -1)),
(5, (-1, -1)),
(6, (-1, 0)),
(7, (-1, 1)),
(8, (0, 1)),
(9, (1, 1)),
(10, (2, 1)),
(11, (2, 0)),
(12, (2, -1)),
(13, (2, -2)),
(14, (1, -2)),
(15, (0, -2)),
(16, (-1, -2)),
(17, (-2, -2)),
(18, (-2, -1)),
(19, (-2, 0)),
(20, (-2, 1)),
(21, (-2, 2)),
(22, (-1, 2)),
(23, (0, 2)),
(24, (1, 2)),
(25, (2, 2))]
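To get the 2-dimensional picture from the question, the numbered coordinates
can be dropped onto a grid and printed row by row (a sketch building on
`gen_points` above):
    points = list(gen_points(25))
    grid = {pos: n for n, pos in points}
    xs = [x for x, y in grid]
    ys = [y for x, y in grid]
    for y in range(max(ys), min(ys) - 1, -1):          # top row first
        print(' '.join('{:3d}'.format(grid.get((x, y), 0))
                       for x in range(min(xs), max(xs) + 1)))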
|
Previous error being masked by current exception context
Question: The following is an example I found at the website for Doug Hellman in a file
named "masking_exceptions_catch.py". I can't locate the link at the moment.
The exception raised in throws() is discarded while that raised by cleanup()
is reported.
In his article, Doug remarks that the handling is non-intuitive. Halfway
expecting it to be a bug or limitation in the Python version at the time it
was written (circa 2009), I ran it in the current production release of Python
for the Mac (2.7.6). It still reports the exception from cleanup(). I find
this somewhat amazing and would like to see a description of how it is
actually correct or desirable behavior.
#!/usr/bin/env python
import sys
import traceback
def throws():
raise RuntimeError('error from throws')
def nested():
try:
throws()
except:
try:
cleanup()
except:
pass # ignore errors in cleanup
raise # we want to re-raise the original error
def cleanup():
raise RuntimeError('error from cleanup')
def main():
try:
nested()
return 0
except Exception, err:
traceback.print_exc()
return 1
if __name__ == '__main__':
sys.exit(main())
**Program output:**
$ python masking_exceptions_catch.py
Traceback (most recent call last):
File "masking_exceptions_catch.py", line 24, in main
nested()
File "masking_exceptions_catch.py", line 14, in nested
cleanup()
File "masking_exceptions_catch.py", line 20, in cleanup
raise RuntimeError('error from cleanup')
RuntimeError: error from cleanup
Answer: Circling back around to answer. I'll start by not answering your question. :-)
**Does this really work?**
def f():
try:
raise Exception('bananas!')
except:
pass
raise
So, what does the above do? Cue Jeopardy music.
* * *
Alright then, pencils down.
# python 3.3
4 except:
5 pass
----> 6 raise
7
RuntimeError: No active exception to reraise
# python 2.7
1 def f():
2 try:
----> 3 raise Exception('bananas!')
4 except:
5 pass
Exception: bananas!
Well, that was fruitful. For fun, let's try naming the exception.
def f():
try:
raise Exception('bananas!')
except Exception as e:
pass
raise e
What now?
# python 3.3
4 except Exception as e:
5 pass
----> 6 raise e
7
UnboundLocalError: local variable 'e' referenced before assignment
# python 2.7
4 except Exception as e:
5 pass
----> 6 raise e
7
Exception: bananas!
Exception semantics changed pretty drastically between python 2 and 3. But if
python 2's behavior is at all surprising to you here, consider: it's basically
in line with what python does everywhere else.
try:
1/0
except Exception as e:
x=4
#can I access `x` here after the exception block? How about `e`?
`try` and `except` are not scopes. Few things are, actually, in python; we
have the "LEGB Rule" to remember the four namespaces - Local, Enclosing,
Global, Builtin. Other blocks simply aren't scopes; I can happily declare `x`
within a `for` loop and expect to still be able to reference it after that
loop.
So, awkward. Should exceptions be special-cased to be confined to their
enclosing lexical block? Python 2 says no, python 3 says
[yes](http://www.python.org/dev/peps/pep-3110/#semantic-changes). But I'm
oversimplifying things here; bare `raise` is what you initially asked about,
and the issues are closely related but not actually the same. Python 3 _could_
have mandated that named exceptions are scoped to their block without
addressing the bare `raise` thing.
**What does bare`raise` _do‽_**
Common usage is to use bare `raise` as a means to preserve the stack trace.
Catch, do logging/cleanup, reraise. Cool, my cleanup code doesn't appear in
the traceback, works 99.9% of the time. But things can go south when we try to
handle nested exceptions within an exception handler. _Sometimes._ (see
examples at the bottom for when it is/isn't a problem)
Intuitively, no-argument `raise` would properly handle nested exception
handlers, and figure out the correct "current" exception to reraise. That's
not exactly reality, though. Turns out that - getting into implementation
details here - exception info is saved as a member of the current [frame
object](https://docs.python.org/2/reference/datamodel.html#frame-objects). And
in python 2, there's simply no plumbing to handle pushing/popping exception
handlers on a stack within a single frame; there's just simply a field that
contains the last exception, irrespective of any handling we may have done to
it. That's what bare `raise` grabs.
> # 6.9. [The raise
> statement](https://docs.python.org/2/reference/simple_stmts.html#the-raise-
> statement)
>
> `raise_stmt ::= "raise" [expression ["," expression ["," expression]]]`
>
> If no expressions are present, raise re-raises the last exception that was
> active in the current **scope**.
So, yes, this is a problem deep within python 2 related to how traceback
information is stored - in Highlander tradition, there can be only one
(traceback object saved to a given stack frame). As a consequence, bare
`raise` reraises what the current frame believes is the "last" exception,
which isn't necessarily the one that our human brains believe is the one
specific to the lexically-nested exception block we're in at the time. Bah,
_scopes!_
**So, fixed in python 3?**
Yes. How? [New bytecode
instruction](https://docs.python.org/3.2/library/dis.html#opcode-POP_EXCEPT)
(two, actually, there's another implicit one at the start of except handlers)
but really who cares - it all "just works" intuitively. Instead of getting
`RuntimeError: error from cleanup`, your example code raises `RuntimeError:
error from throws` as expected.
I can't give you an official reason why this was not included in python 2. The
issue has been known [since PEP
344](http://www.python.org/dev/peps/pep-0344/), which mentions Raymond
Hettinger raising the issue in _2003_. If I had to _guess_, fixing this is a
breaking change (among other things, it affects the semantics of
`sys.exc_info`), and that's often a good enough reason not to do it in a minor
release.
**Options if you're on python 2:**
**1)** Name the exception you intend to reraise, and just deal with a line or
two being added to the bottom of your stack trace. Your example `nested`
function becomes:
def nested():
try:
throws()
except BaseException as e:
try:
cleanup()
except:
pass
raise e
And associated traceback:
Traceback (most recent call last):
File "example", line 24, in main
nested()
File "example", line 17, in nested
raise e
RuntimeError: error from throws
So, the traceback is altered, but it works.
**1.5)** Use the 3-argument version of `raise`. A lot of people don't know
about this one, and it is a legitimate (if clunky) way to preserve your stack
trace.
def nested():
try:
throws()
except:
e = sys.exc_info()
try:
cleanup()
except:
pass
raise e[0],e[1],e[2]
`sys.exc_info` gives us a 3-tuple containing (type, value, traceback), which
is exactly what the 3-argument version of `raise` takes. Note that this 3-arg
syntax only works in python 2.
**2)** Refactor your cleanup code such that it cannot _possibly_ throw an
unhandled exception. Remember, it's all about **scopes** \- move that
`try/except` out of `nested` and into its own function.
def nested():
try:
throws()
except:
cleanup()
raise
def cleanup():
try:
cleanup_code_that_totally_could_raise_an_exception()
except:
pass
def cleanup_code_that_totally_could_raise_an_exception():
raise RuntimeError('error from cleanup')
Now you don't have to worry; since the exception never made it to `nested`'s
scope, it won't interfere with the exception you intended to reraise.
**3)** Use bare `raise` like you were doing before you read all this and live
with it; cleanup code doesn't usually raise exceptions, right? :-)
|
correct way to setup teardown login logout in django
Question: To test a polling app that I made using Django, the prerequisite for
voting/viewing_results is that the user should be logged in. I wanted to
create a test suite where setup involves creating a test user and logging him in,
and teardown involves logging out the user and deleting the user.
I came across the setUp() and tearDown() methods - but learned that they
are run for each test method. I wanted to have this functionality at
the class level and saw that Django has setUpClass and tearDownClass - but as
they are class methods I cannot call self.client.login / self.client.logout in
them. I then created a LoginMixin which has setUp and tearDown methods.
I wanted to know which of these 2 approaches is more Pythonic, and whether
there is a better alternative to logging in and out before every test.
1. Have setUp, tearDown methods:
def setUp(self):
self.user = utils.create_user()
self.client.login(username='testuser', password='testpasswd')
def tearDown(self):
self.client.logout()
utils.delete_user(self.user)
2. Have a LoginMixin and let each test class derive from it:
class LoginMixin():
def setUp(self):
self.user = create_user()
self.client.login(username='testuser', password='testpasswd')
def tearDown(self):
self.client.logout()
class MyTest(LoginMixin, TestCase):
....
....
I am using django_nose as the TestRunner.
Answer: If the only reason you're not using the `setUpClass` classmethod is because
you don't have access to the default Client, you could just create your own
and refer to that:
from django.test.client import Client
class LoginMixin():
client = Client()
# ...
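For example, the mixin from the question could then do its work once per class
(a sketch; `utils` is the helper module from the question, and note that rows
created in `setUpClass` live outside the per-test transaction, so the class
teardown really is needed):
    from django.test.client import Client

    class LoginMixin(object):
        client = Client()

        @classmethod
        def setUpClass(cls):
            super(LoginMixin, cls).setUpClass()
            cls.user = utils.create_user()          # helper from the question
            cls.client.login(username='testuser', password='testpasswd')

        @classmethod
        def tearDownClass(cls):
            cls.client.logout()
            utils.delete_user(cls.user)
            super(LoginMixin, cls).tearDownClass()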
|
setting up logging in console script
Question: I have a Python console script whose source I don't want to modify.
But I want to modify the logging that is done by the script and its
libraries.
Examples:
* I want messages at level ERROR to be mailed to [email protected]
* I want INFO messages of file foo.py to be ignored.
* I want to include the PID in the loggings messages.
The script uses this logger:
import logging
logger=logging.getLogger(__name__)
del(logging)
How can I configure the logging, if I don't want to modify the sources of the
console script?
Answer: You can load it using a wrapper script. In the wrapper, set up the logging
configuration as desired (e.g. `logging.basicConfig()`, or add logging
handlers), and then run the script.
If the script has a main function (look for `if __name__ == "__main__":` in
the script), you can simply import the file and run the function:
import sys
import logging
logging.basicConfig(...)
import my_console_script
sys.exit(my_console_script.main())
If it doesn't have such a function, then simply importing it will run its
contents (you can omit the `sys.exit()` call).
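For the three specific examples in the question, the wrapper's configuration
might look roughly like this (a sketch; the mail host, the addresses and the
assumption that foo.py's logger is named 'foo' are placeholders):
    import logging
    import logging.handlers

    # include the PID in every message
    logging.basicConfig(
        format='%(process)d %(asctime)s %(levelname)s %(name)s: %(message)s',
        level=logging.INFO)

    # mail ERROR (and above) records to [email protected]
    mail_handler = logging.handlers.SMTPHandler(
        mailhost='localhost',                       # placeholder SMTP server
        fromaddr='[email protected]',             # placeholder sender
        toaddrs=['[email protected]'],
        subject='Console script error')
    mail_handler.setLevel(logging.ERROR)
    logging.getLogger().addHandler(mail_handler)

    # drop INFO records coming from foo.py's logger
    class IgnoreFooInfo(logging.Filter):
        def filter(self, record):
            return not (record.name == 'foo' and record.levelno == logging.INFO)

    for handler in logging.getLogger().handlers:
        handler.addFilter(IgnoreFooInfo())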
|
How to subtract two 2D lists in python?
Question: **I need to subract two 2D lists like this** :
list1= [['some',2],['other',1],['thing',5]]
list2= [['some',1],['thing',5]]
**result should be like this**:
result= [['some',1],['other',1],['thing',0]]
**or**
result= [['some',1],['other',1]]
It should be a list, not a dictionary; the order doesn't matter.
Answer:
from collections import Counter
list1= [['some',2],['other',1],['thing',5]]
list2= [['some',1],['thing',5]]
c1 = Counter({item[0]: item[1] for item in list1})
c2 = Counter({item[0]: item[1] for item in list2})
result = [[key, value] for key, value in (c1-c2).items()]
result
[['other', 1], ['some', 1]]
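`Counter` subtraction drops entries whose count falls to zero or below, which
matches the second expected output. If the zero entries should be kept (the
first expected output), plain dicts work (a sketch):
    d1 = dict(list1)
    d2 = dict(list2)
    result = [[key, d1[key] - d2.get(key, 0)] for key in d1]
    # e.g. [['some', 1], ['other', 1], ['thing', 0]] (order not guaranteed)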
|
Python / Dictionary / List / Mongo insert issues - beginner
Question: Sorry, trying to understand and get used to dictionary and list objects.
I'm calling eBay's API through their ebaysdk, and want to store the items from
it in a collection as documents in Mongo. Simple.
Here's a sample of the schema that will be returned:
<timestamp>2009-09-04T00:47:12.456Z</timestamp>
<searchResult count="2">
<item>
<itemId>230371938681</itemId>
<title>Harry Potter and the Order of the Phoenix HD-DVD</title>
<globalId>EBAY-US</globalId>
<primaryCategory>
<categoryId>617</categoryId>
<categoryName>DVD, HD DVD & Blu-ray</categoryName>
</primaryCategory>
I've tried 500 iterations of this code; stripped down to the most basic, here's
what I have:
from ebaysdk import finding
from pymongo import MongoClient
api = finding(appid="billy-40d0a7e49d87")
api.execute('findItemsByKeywords', {'keywords': 'potter'})
listings = api.response_dict()
client = MongoClient('mongodb://user:[email protected]:10099/ebaystuff')
db = client['ebaycollection']
ebay_collection = db.ebaysearch
for key in listings:
print key
ebay_collection.insert(key)
I get this error:
Traceback (most recent call last):
File "ebay_search.py", line 34, in <module>
ebay_collection.insert(key)
File "/Library/Python/2.7/site-packages/pymongo/collection.py", line 408, in insert
self.uuid_subtype, client)
File "/Library/Python/2.7/site-packages/pymongo/collection.py", line 378, in gen
doc['_id'] = ObjectId()
TypeError: 'str' object does not support item assignment
Simple stuff. All I want to do is add each item as a document.
Answer: An immutable type like a string cannot be used as a document because it
doesn't allow adding additional fields, like the [`_id` field Mongo
requires](http://docs.mongodb.org/manual/reference/method/db.collection.insert/#id-
field). You can instead wrap the string in a dictionary to serve as a wrapper
document:
key_doc = {'key': key}
ebay_collection.insert(key_doc)
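Since the stated goal is one document per eBay item, the loop should insert the
item dictionaries themselves rather than the top-level keys. A sketch, with the
caveat that the exact key path ('searchResult' -> 'item') is an assumption based
on the sample XML and may differ between SDK versions:
    items = listings['searchResult']['item']   # assumed key path, check your response
    for item in items:
        ebay_collection.insert(dict(item))     # each item becomes its own document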
|
file i/o and the meaning of binary modes
Question: So I want to save some arbitrary piece of data in Redis with Python. Since
Redis supports this by just storing it as a string, I thought I could read the
data with Python again and write it to a file. At first this didn't work
because I used the standard 'r' and 'w' modes of open(). Python does say they
are equal though.
After I changed it to 'rb' and 'wb' it works, but why does the non-binary read
or write change the data somehow? What is the significance of it anyway?
Here is some code which works; just change the file modes to non-binary
and watch testfile_read.zip change. You do need Redis though; it's easy to
install with pip install redis.
import redis
import os.path
version=1.0
path='testfile.zip'
r_server=redis.Redis("127.0.0.1")
fp = open(path,'rb')
test=fp.read()
fp.close()
r_server.hset('testfile',version,test)
r_server.hset('testfile','currver',version)
test2=r_server.hget('testfile',version)
if test==test2:
print "read from file and read from redis are the same"
else:
print "read from file and read from redis are the NOT!! same"
fp2 = open("testfile_read.zip",'wb')
fp2.write(test2)
fp2.close()
fp3 = open("testfile_read.zip",'rb')
test3=fp3.read()
fp3.close()
if test2==test3:
print "redis is equal to written file"
else:
print "redis is NOT!!! equal to written file"
Answer: The reason for having non-binary mode is to have the file calls do some
manipulation of the data for you (for example, translating line-endings).
That's why you don't have to worry so much about where a text file came from;
the file routines hide those details from you.
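A tiny sketch of why that matters for binary data like a ZIP file, assuming
Windows (where text mode translates '\n' to '\r\n' on the way out):
    data = 'PK\x03\x04\n\x00'          # a few bytes that happen to contain '\n'
    with open('blob.bin', 'w') as f:   # text mode
        f.write(data)
    with open('blob.bin', 'rb') as f:
        print(repr(f.read()))          # on Windows: 'PK\x03\x04\r\n\x00' - corrupted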
|
Image file to vector of pixels with CImg?
Question: I have this in python:
import Image
import numpy as np
import random
img = Image.open('img.jpg')
#turn img to list of rgb tuples and scramble
pixels = list(img.getdata())
pixels.reverse()
random.shuffle(pixels)
#make new image using scrambled pixels
img2 = Image.new(img.mode, img.size)
img2.putdata(pixels)
img2.save('newimg.png')
I figured I should be working in C++ to keep stuff I learned last semester
fresh in my head and to prepare for the class I have next semester, which also
revolves around C++. So, I found CImg and got a bit overwhelmed by the
documentation. So, what would be CImg's equivalent of line 8?
My end goal is to be able to scramble an image using a known pattern, then use
that pattern to unscramble later. I don't know if this is possible though. To
me its a bit like asking the following:
given:
srand(x);
int rand_num = rand() % 10;
and rand_num = 7
find x.
Answer: As far as I know, `CImg` provides
[iterators](http://cimg.sourceforge.net/reference/structcimg__library_1_1CImg.html)
to loop through every pixel. As such, and provided that your compiler supports
C++11, you could use
[std::shuffle](http://www.cplusplus.com/reference/algorithm/shuffle/) to
shuffle the pixels of your image (see the example below; it also needs the
<algorithm>, <chrono> and <random> headers).
CImg<float> img("lena.jpg"); // Load image from file.
unsigned seed = std::chrono::system_clock::now().time_since_epoch().count();
std::shuffle(img.begin(), img.end(), std::default_random_engine(seed));
|
Python multiprocessing: where should join() be called, if process' children have grandchildren?
Question: Here is a toy problem I am working with:
import multiprocessing as mp
def task2():
print "I am doing something important."
def task1():
listOfProcesses = []
for i in range(5):
process = mp.Process(target=task2)
process.start()
listOfProcesses.append(process)
def task0():
listOfProcesses = []
for i in range(5):
process = mp.Process(target=task1)
process.start()
listOfProcesses.append(process)
if __name__ == '__main__':
task0()
Now, I am having trouble understanding where join should be called in such a
scenario.
If I change `task0`'s definition like so:
def task0():
listOfProcesses = []
for i in range(5):
process = mp.Process(target=task2)
process.start()
listOfProcesses.append(process)
for process in listOfProcesses:
process.join()
Everything then seems to work correctly, but I don't understand what I am
really doing here. `task1` only starts its children; it does not join them. So
what does joining in `task0` mean for `task1`?
Answer: `join` is fairly simple in concept - `x.join` says "the current thread (i.e.
process) of execution cannot proceed past _this_ point until `x` terminates."
So, in general you don't want your **main** thread to proceed past some point
until all your workers are done doing their work. Since you execute `task0` in
your main thread, doing a `join` there prevents your main thread from
proceeding past that point until all your workers (both `task1` _and_ `task2`)
are done.
# But wait, I didn't `join` in `task1`!
That's right. But `task1`'s process still won't terminate until all its
`task2`s are finished. This is because `multiprocessing` automatically joins
all non-daemonic child processes when a process exits, so a parent process
will not terminate until its children have (you can also see below that they
all belong to the same [_process group_](http://en.wikipedia.org/wiki/Process_group)).
So, let's look at the output of this reduced example:
import multiprocessing as mp
from time import sleep
def task2():
sleep(1)
print "I am doing something important."
def task1():
for i in range(2):
process = mp.Process(target=task2)
process.start()
print 'task1 done'
def task0():
process = mp.Process(target=task1)
process.start()
process.join()
if __name__ == '__main__':
task0()
print 'all done'
output:
task1 done
I am doing something important.
I am doing something important.
all done
So as you can see, `task1` reached its end but did not terminate until its
child processes did - which meant that our `join` block in `task0` correctly
blocked our main thread from terminating until all the workers did.
For fun, here is the output of `ps jf` when running your original script with
no `join`s with the _only modification_ being `time.sleep` thrown into `task2`
so I could capture it running:
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
6780 7385 7385 7385 pts/11 7677 Ss 1000 0:00 bash
7385 7677 7677 7385 pts/11 7677 R+ 1000 0:00 \_ ps jf
6780 6866 6866 6866 pts/7 7646 Ss 1000 0:00 bash
6866 7646 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7646 7647 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7647 7672 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7647 7673 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7647 7674 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7647 7675 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7647 7676 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7646 7648 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7648 7665 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7648 7666 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7648 7667 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7648 7668 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7648 7669 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7646 7649 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7649 7656 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7649 7657 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7649 7658 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7649 7659 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7649 7660 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7646 7650 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7650 7652 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7650 7653 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7650 7654 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7650 7655 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7650 7670 7646 6866 pts/7 7646 S+ 1000 0:00 | \_ python test
7646 7651 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7651 7661 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7651 7662 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7651 7663 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7651 7664 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
7651 7671 7646 6866 pts/7 7646 S+ 1000 0:00 \_ python test
You can see that our main process (the one that did `task0`) and the "first
children" (the ones that did `task1`) are still alive, even though they
clearly were out of python code to execute. They are also all members of the
same process group (see the `PGID` column).
# Sum it up, man
All of that is a long-winded way to say: `join` in your main thread is usually
all you need, since you have the guarantee that any child processes will wait
for _their_ children to terminate before themselves terminating.
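If you prefer not to rely on that implicit wait, here is a sketch of the same
toy problem with explicit joins at both levels:
import multiprocessing as mp

def task2():
    print "I am doing something important."

def task1():
    children = [mp.Process(target=task2) for _ in range(5)]
    for p in children:
        p.start()
    for p in children:
        p.join()          # task1 explicitly waits for its own workers

def task0():
    children = [mp.Process(target=task1) for _ in range(5)]
    for p in children:
        p.start()
    for p in children:
        p.join()          # the main thread waits for the whole tree

if __name__ == '__main__':
    task0()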
|
Getting a MemoryError because list/array is too large
Question: ## Problem
I have to download `object_x`. For simplicity's sake, `object_x` comprises a
series of `integers` adding up to `1000`. The download is irregular. I receive
groups or `chunks` of integers in seemingly random order, and I need to keep
track of them until I have all `1000` to make up the final `object_x`.
The incoming chunks can also overlap, so for instance:
Chunk 1: integers 0-500
Chunk 2: integers 600-1000
Chunk 3: integers 400-700
## Current method
Create `object_x` as a `list` containing all of its comprising integers
`0-1000`. When a `chunk` is downloaded, remove all of the integers that
comprise the `chunk` from `object_x`. Keep doing this until `object_x` is
empty (known to be complete then).
object_x = range(0,1000)
# download chunk 1
chunk = range(0, 500)
for number in chunk:
if number in object_x:
object_x.remove(number)
# repeat for every downloaded chunk
## Conclusion
This method is very memory intensive. The script throws a MemoryError if
`object_x` or `chunk` is too large.
I'm searching for a better way to keep track of the chunks to build the
`object_x`. Any ideas? I'm using Python, but language doesn't matter I guess.
Answer: This is the kind of scenario where streaming is very important. Doing
everything in memory is a bad idea because you might not have enough memory
(as in your case). You should probably save the chunks to disk, keep track of
how many you downloaded, and when you reach 1000, process them on disk (or
load them into memory one by one to process them).
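A rough sketch of that idea (the `download_chunks()` generator is hypothetical,
and it assumes each integer occupies a fixed-size record so file offsets can be
computed):
RECORD_SIZE = 4          # assumed fixed size of one record on disk
TOTAL = 1000

def add_interval(intervals, start, end):
    """Insert the half-open range [start, end) and merge any overlaps."""
    intervals.append((start, end))
    intervals.sort()
    merged = [intervals[0]]
    for s, e in intervals[1:]:
        last_s, last_e = merged[-1]
        if s <= last_e:
            merged[-1] = (last_s, max(last_e, e))
        else:
            merged.append((s, e))
    return merged

covered = []
with open('object_x.part', 'wb') as out:
    for start, end, payload in download_chunks():   # hypothetical chunk source
        out.seek(start * RECORD_SIZE)
        out.write(payload)
        covered = add_interval(covered, start, end)
        if covered == [(0, TOTAL)]:                  # everything has arrived
            break
The `covered` list of merged intervals stays tiny regardless of how large the
object is, which is what keeps the memory footprint down.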
"[C# Security: Computing File
Hashes](http://www.programmersranch.com/2014/05/c-security-computing-file-
hashes.html)" is a recent article I wrote - it's a different subject, but it
does illustrate the importance of streaming towards the end.
|
emulate file-like behavior in python
Question: I'm writing a script, where I have to dump some columns of tables from an SQL
database into a file, and transfer it via FTP.
Because dumps can get really big, my idea was to write a FakeFile which queries
row by row from a cursor in its `readline` method and pass it to
`ftplib.FTP.storlines`.
This is what I have so far:
import ftplib
import MySQLdb
class MySQLFakeFile(object):
'''
Simulates a read-only file, which dumps rows on demand.
Use this, to pass it to the FTP protocol to make the dump more efficient,
without the need to dump it somewhere and copy it over the net afterwords
'''
def __init__(self, cursor, delimeter, table_name, query):
self.cursor = cursor
self.delimeter = delimeter
self.table_name = table_name
#query is something along select ... FROM %s
self.cursor.execute(query, table_name)
self._has_written_index = False
#file attrs
self.closed = False
self.name = table_name + ".csv"
self.encoding = "utf-8"
self.mode = "r"
def close(self):
self.cursor.close()
self.closed = True
def flush(self):
'''No-OP'''
pass
def read(self, size):
pass
def readline(self, size):
if not self._has_written_index:
ret = []
for desc in self.cursor.description:
ret.append(desc[0])
self._has_written_index = True
else:
ret = self.cursor.fetchone()
if not ret:
return None
s = ""
for col in ret:
s += str(col) + self.delimeter
return s + "\n"
def readlines(self, size):
ret = []
line = self.readline()
while line:
ret.append(line)
line = self.readline()
def write(self, string):
raise Exception("cannot write to a MySQLFakeFile")
def writelines(self, lines):
raise Exception("cannot write to a MySQLFakeFile")
db = MySQLdb.connect("host", "user", "pass", "db")
ftp = ftplib.FTP("host", "user", "pass")
fakeFile = MySQLFakeFile(db.cursor(), ";", "tableName", "SELECT * FROM %s")
ftp.storlines("STOR tableName.csv", fakeFile)
gives me
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/ftplib.py", line 496, in storlines
if len(buf) > self.maxline:
TypeError: object of type 'NoneType' has no len()
What am I doing wrong, and what is the `NoneType` here?
Answer: Your readline returns `None` instead of an empty string `""` when it reaches the
end of the rows.
Your readlines returns nothing (it builds the list but never returns it).
def readlines(self, size):
ret = []
while True:
line = self.readline()
if not line:
break
ret.append(line)
return ret
|
python regex invalid syntax
Question: I am testing code from a recent 2600 magazine for a wordlist generator
based on a bunch of Google searches. I get an invalid syntax error from this
line:
results.extend(re.findall("<a href="/%201D([^/%201D]*)/%201D">class=(?:1|s)",data.read()))
I am new to regex so I did some research on the basics of re and it seemed
fairly easy, but I still didn't understand the /%201D. I did a search on it and
found that it's a hex char code. I am still stuck on making this work.
[Here](http://pastebin.com/gknjzDsj) is the rest of the code. The line I'm
having a problem with is line 36.
This is the function:
import re, sys, os, urllib
### custom useragent ###
class AppURLopener(urllib.FancyURLopener):
version = "Mozilla/5.0(compatable;MSIE 9.0; Windows NT 6.1; Trident/5.0)"
urllib._urlopener = AppURLopener()
uopen = urllib.urlopen
uencode = urllib.urlencode
def google(query, numget=10, verbose=0):
numget = int(numget)
start = 0
results = []
if verbose == 2:
print("[+]Getting " + str(numget) + " results")
while len(results) < numget:
print("[+]" + str(len(results)) + " so far...")
data = uopen("https://www.google.com/search?q="+query+"&star="+str(start))
if data.code != 200:
print("Error " + str(data.code))
break
results.extend(re.findall("<a href="/%201D([^/%201D]*)/%201D">class=(?:1|s)",data.read()))
print(data.read())
start += 10
if verbose == 2:
print("[+] Got " + str(numget) + " results")
return results[:numget]
Answer: first you need to escape the `"` in `<a href="`:
"<a href=\"/%201D([^/%201D]*)/%201D\">class=(?:1|s)"
second, `%20` encodes a single space in URLs, so `%201D` corresponds to `"
1D"`.
|
python CGI : upload error
Question: I use Python version 3.4
and this is server source code in python
import io
from socket import *
import threading
import cgi
serverPort = 8181
serverSocket = socket(AF_INET, SOCK_STREAM)
serverSocket.bind(('', serverPort))
serverSocket.listen(5)
def serverHandle(connectionSocket, addr):
try:
message = connectionSocket.recv(1024)
if not message:
return
path = message.split()[1].decode('utf-8')
if path == '/':
connectionSocket.send(b'HTTP/1.1 200 OK\r\n')
connectionSocket.send(b'Content-type: text/html\r\n\r\n')
with open('upload.html', 'rb') as f:
connectionSocket.send(f.read())
elif path == '/upload':
header = message.split(b'\r\n\r\n')[0]
query = message.split(b'\r\n\r\n')[-1]
if not query:
return
fp = io.BytesIO(query)
form = cgi.FieldStorage(fp, environ={'REQUEST_METHOD':'POST', 'CONTENT_TYPE': 'multipart/form-data'})
f = open(form.filename, 'w')
f.write(form.file)
f.close()
connectionSocket.close()
except IOError:
connectionSocket.send('404 File Not Found\r\n\r\n'.encode('utf_8'))
connectionSocket.close()
while True:
connectionSocket, addr = serverSocket.accept();
threading._start_new_thread(serverHandle, (connectionSocket, addr))
serverSocket.close()
and upload HTML source is here
<HTML>
<BODY>
<FORM ENCTYPE="multipart/form-data" ACTION="http://127.0.0.1:8181/upload" METHOD=POST>
File to process: <INPUT NAME="file" TYPE="file">
<INPUT TYPE="submit" VALUE="Send File">
</FORM>
</BODY>
</HTML>
I try to upload a file to the server and save it in a directory, but an error
occurs like this:
Unhandled exception in thread started by <function serverHandle at 0x000000000240CBF8>
Traceback (most recent call last):
File "C:\Users\Inwoo\Eclipse\workspace\WebServer\src\webserver1.py", line 36, in serverHandle
form = cgi.FieldStorage(fp, environ={'REQUEST_METHOD':'POST', 'CONTENT_TYPE': 'multipart/form-data'})
File "C:\Python34\lib\cgi.py", line 559, in __init__
self.read_multi(environ, keep_blank_values, strict_parsing)
File "C:\Python34\lib\cgi.py", line 681, in read_multi
raise ValueError('Invalid boundary in multipart form: %r' % (ib,))
ValueError: Invalid boundary in multipart form: b''
I can't understand this problem.
Also, sometimes the query part of the message is empty,
so I wrote `if not query: return`; is that correct?
And how can I receive the uploaded file and save it on the server?
Answer: You need to _not_ split the query on as many newlines as you do; there can be
embedded newlines in it. Use `str.partition()` instead here:
header, _, query = message.partition(b'\r\n\r\n')
You'll have to parse the headers; the `Content-Type` header contains the
multipart boundary; the `FieldStorage` instance needs this to determine where
the fields begin and end.
import re
content_type = re.search(br'content-type:\s+(.*?)\r\n', header, flags=re.I).group(1)
form = cgi.FieldStorage(fp, environ={'REQUEST_METHOD':'POST', 'CONTENT_TYPE': content_type})
Take into account that an HTTP message with a file upload can easily contain
much more than 1024 bytes; you only ever read that little. _Just the headers_
can easily make up most of the message; my browser sends 819 bytes for the
header and keeps that in one packet. Your `connectionSocket.recv(1024)` call
will then contain **just those headers** and you'll need to read more data
still.
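A hedged sketch of reading the whole request before parsing it (`recv_request`
is a hypothetical helper name, and it assumes the client sends a
`Content-Length` header, which browsers do for form posts):
import re

def recv_request(sock):
    data = b''
    # Keep reading until the header block is complete.
    while b'\r\n\r\n' not in data:
        chunk = sock.recv(4096)
        if not chunk:
            return data
        data += chunk
    header, _, body = data.partition(b'\r\n\r\n')
    m = re.search(br'content-length:\s*(\d+)', header, flags=re.I)
    length = int(m.group(1)) if m else 0
    # Keep reading until the whole body has arrived.
    while len(body) < length:
        chunk = sock.recv(4096)
        if not chunk:
            break
        body += chunk
    return header + b'\r\n\r\n' + body
The handler could then call `message = recv_request(connectionSocket)` instead
of the single `connectionSocket.recv(1024)`.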
|
Start new subprocess with 'default' environment variables
Question: I'm writing a build script to resolve dependent shared libraries (and their
shared libraries, etc.). These shared libraries do not exist in the normal
`PATH` environment variable.
For the build process to work (for the compiler to find these libraries), the
`PATH` has been changed to include the directories of these libraries.
The build process is thus:
Loader script (changes PATH) -> Python-based build script -> Configure ->
Build -> Resolve Dependencies -> Install.
The Python instance inherits a changed `PATH` variable from its parent shell.
From within Python, I'm trying to get the default `PATH` (not the one
inherited from its parent shell).
The idea:
The idea to resolve the 'default' PATH variable is to somehow 'signal' the OS
to start a new process (running a script that prints PATH) but this process is
**NOT** a child of the current Python process (and presumably won't inherit
its modified environment variables).
The attempted implementation:
import os
import sys
print os.environ["PATH"]
print "---"
os.spawnl(os.P_WAIT, sys.executable, "python", "-c \"import os;print(os.environ['PATH']);\"")
`os.spawn` appears to use the same environment variables as the Python process
which calls it. I've also tried this approach with `subprocess.Popen`, with no
success.
Can this approach be implemented ? If not, what is an alternative approach
(given that the loader script and the overall process can't change)?
I'm currently using Windows but the build script is to be cross-platform.
**EDIT:**
The cross-platform constraint appears to be too restrictive. Different
implementations of the same concept can now be considered.
As an example, using code from
[this](http://stackoverflow.com/a/1146404/2507539) answer, the Windows
registry can be used to get the 'default' system `PATH` variable.
try:
import _winreg as winreg
except ImportError:
try:
import winreg
except ImportError:
winreg = None
def env_keys(user=True):
if user:
root = winreg.HKEY_CURRENT_USER
subkey = "Environment"
else:
root = winreg.HKEY_LOCAL_MACHINE
subkey = r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment"
return root, subkey
def get_env(name, user=True):
root, subkey = env_keys(user)
key = winreg.OpenKey(root, subkey, 0, winreg.KEY_READ)
try:
value, _ = winreg.QueryValueEx(key, name)
except WindowsError:
return ""
value = winreg.ExpandEnvironmentStrings(value)
return value
print get_env("PATH", False)
A consistent approach for *nix is needed.
Answer: Using `subprocess.Popen`, you can provide an environment for the child process
to use:
import os
import subprocess

default_path = os.environ['PATH']   # save the default path before changing it
os.environ['PATH'] = '<whatever you want>'   # change the parent's PATH
child_env = os.environ.copy()
child_env['PATH'] = default_path    # give the child the saved default instead
subprocess.Popen(..., env=child_env)
The
[documentation](https://docs.python.org/2/library/subprocess.html#subprocess.Popen)
states that the provided environment will be used instead of inheriting it
from the parent:
> If env is not None, it must be a mapping that defines the environment
> variables for the new process; these are used instead of inheriting the
> current process’ environment, which is the default behavior.
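For the *nix side of the question, a possible sketch (an assumption of mine,
requiring the `env` utility and `bash` to be available) is to ask a fresh login
shell, started with an empty environment, what `PATH` it ends up with:
import subprocess

# env -i clears the inherited environment; bash -l re-reads the login files,
# so the PATH printed here is the system/login default rather than the one
# the loader script modified.
default_path = subprocess.check_output(
    ['env', '-i', 'bash', '-lc', 'printf %s "$PATH"'])
print default_path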
|
Deepcopy list of lists (speed issue)
Question: Due to unclear formulation I decided to rewrite my question. My code looks something
like this (org is supposed to be a list containing a list and two integers):
def my_copy(org):
temp = (tuple(org[0]), org[1], org[2])
temp2 = []
temp2.append(list(temp[0]))
temp2.append(temp[1])
temp2.append(temp[2])
return temp2
a = [[1,2,3], 4, 5]
b = []
for i in range(5):
b.append(my_copy(a))
Now I can change the elements of b without influencing the other copies,
unlike if I were to use
`b.append(copy.copy(a))` in the loop
I do all this to avoid using copy.deepcopy() which appears to be quite slow.
There are now three questions: Does this code generate a deepcopy of my list?
And if not, why does it still create copies and not just new references as
`b.append(a)` would? In addition: How can I do this in a more elegant, fast
and pythonic way?
Answer: There appears to be some misunderstanding here of the difference between a
shallow copy and a deep copy. You state in your question that you are
appending lists of lists. Let's assume that the following is such a list:
In [32]: x = [[1,2,3],[4,5,6]]
In a shallow copy we copy only the first layer. From the docs:
> A shallow copy constructs a new compound object and then (to the extent
> possible) inserts references into it to the objects found in the original.
In [33]: z = []
# using the method you describe
In [35]: z.append(list(tuple(list(x))))
In [36]: z
Out[36]: [[[1, 2, 3], [4, 5, 6]]]
If we now modify the contents of `z` we alter `x`, as we have used a shallow
copy.
In [38]: z[0][0][0]=7
In [39]: x
Out[39]: [[7, 2, 3], [4, 5, 6]]
In a deep copy, we make a copy of the object at all levels, essentially
creating a clone of the original object. From the docs:
> A deep copy constructs a new compound object and then, recursively, inserts
> copies into it of the objects found in the original.
In [40]: import copy
In [41]: z = []
In [42]: x = [[1,2,3],[4,5,6]]
In [43]: z.append(copy.deepcopy(x))
In [44]: z
Out[44]: [[[1, 2, 3], [4, 5, 6]]]
In [45]: z[0][0][0] = 7
In [46]: x
Out[46]: [[1, 2, 3], [4, 5, 6]]
Numpy is likely the fastest solution to this problem, but you will have to
refactor your code to get a benefit. Numpy will not benefit you if you are
converting between lists and arrays inside the innermost loop. Instead you
should try to vectorise the problem early on and minimize the number of type
conversions.
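As a hedged illustration of that numpy route (it assumes the inner list is
purely numeric, which may not match your real data):
import numpy as np

a = (np.array([1, 2, 3]), 4, 5)
# .copy() on an array is a single contiguous memcpy; the two ints are immutable.
b = [(a[0].copy(), a[1], a[2]) for _ in range(5)]

b[0][0][0] = 999
print a[0]        # [1 2 3] -- the original array is untouched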
**Edit:**
Looking at the updated question, it seems that there is a very simple solution.
If the list in your list only ever contains immutable types you can use either
of the following:
def my_copy_1(org):
return (copy.copy(org[0]),org[1],org[2])
def my_copy_2(org):
return (org[0][:],org[1],org[2])
Testing the speed on these against your original implementation, I get:
In [2]: a = [[1,2,3],1,2]
In [3]: %timeit tmp.my_copy_orig(a)
100000 loops, best of 3: 2.05 µs per loop
In [4]: %timeit tmp.my_copy_1(a)
100000 loops, best of 3: 2.06 µs per loop
In [5]: %timeit tmp.my_copy_2(a)
1000000 loops, best of 3: 784 ns per loop
It would appear that `my_copy_2` is the clear winner here in terms of speed.
You can test that it produces the correct behaviour with:
In [6]: a = [[1,2,3],1,2]
In [7]: z = tmp.my_copy_2(a)
In [8]: z[2] = 999
In [9]: z[0][0] = 999
In [10]: a
Out[10]: [[1, 2, 3], 1, 2]
In [11]: z
Out[11]: [[999, 2, 3], 1, 999]
|
Postgresql Database Backup Using Python
Question: I would like to back up a database using Python code. I want to back up some
tables of related data. How do I back up, and how do I choose the desired tables using a
"SELECT" statement?
e.g.
I want to get data from 2014-05-01 to 2014-05-10 from some tables and output
the result as a .sql file.
How can I do this using Python code? If you don't mind, please
explain. Thanks.
Answer: The first idea that comes to my mind is to dump your tables by calling the
[pg_dump](http://www.postgresql.org/docs/9.1/static/app-pgdump.html) command,
similar to the approach presented
[here](http://codepoets.co.uk/2011/postgresql-backup-script-python/) (but
Google turns up plenty of alternatives).
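As a sketch of that first idea (the table and database names below are
placeholders, and it assumes `pg_dump` is on the PATH and can authenticate,
for example via a `.pgpass` file):
import subprocess

tables = ['orders', 'customers']           # placeholder table names
cmd = ['pg_dump', '--data-only', 'mydb']   # 'mydb' is a placeholder database name
for t in tables:
    cmd += ['-t', t]                       # -t restricts the dump to the named table

with open('backup.sql', 'w') as out:
    subprocess.check_call(cmd, stdout=out)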
However, since your backup strategy requires you to select precise dates and
not only tables, you will probably have to rely on a sequence of queries, and
then my advice is to use a library like [Psycopg](http://initd.org/psycopg/).
**EDIT** :
I cannot provide a complete example since I don't know:
* which tables do you want to dump
* what is the precise backup strategy for each table (i.e. the SELECT statement)
* how you want to restore them. By deleting the table and then re-creating it, by overwriting db rows based on an ID attribute, ...
the following example generates a file that stores the result of a single
query.
import datetime
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres") # change this according to your RDBMS configuration
cursor = conn.cursor()
table_name = 'YOUR_TABLE_HERE' # place your table name here
with open("table_dump.sql", "w") as f:
cursor.execute("SELECT * FROM %s" % (table_name)) # change the query according to your needs
column_names = []
columns_descr = cursor.description
for c in columns_descr:
column_names.append(c[0])
insert_prefix = 'INSERT INTO %s (%s) VALUES ' % (table_name, ', '.join(column_names))
rows = cursor.fetchall()
for row in rows:
row_data = []
for rd in row:
if rd is None:
row_data.append('NULL')
elif isinstance(rd, datetime.datetime):
row_data.append("'%s'" % (rd.strftime('%Y-%m-%d %H:%M:%S') ))
else:
row_data.append(repr(rd))
f.write('%s (%s);\n' % (insert_prefix, ', '.join(row_data))) # this is the text that will be put in the SQL file. You can change it if you wish.
|
Google App Engine Go SDK: Request to '/' failed
Question: i'm just getting started using GAE, i have following guide in here
<https://developers.google.com/appengine/docs/go/gettingstarted/devenvironment>
and some hello word tutorial here
<https://developers.google.com/appengine/docs/go/gettingstarted/helloworld> .
My problem is that when I type `goapp serve` it works and shows a log like this:
INFO 2014-05-18 08:44:57,130 devappserver2.py:765] Skipping SDK update check.
WARNING 2014-05-18 08:44:57,135 api_server.py:374] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2014-05-18 08:44:57,140 api_server.py:171] Starting API server at: http://localhost:59559
INFO 2014-05-18 08:44:57,154 dispatcher.py:182] Starting module "default" running at: http://localhost:8080
INFO 2014-05-18 08:44:57,156 admin_server.py:117] Starting admin server at: http://localhost:8000
But when I try to access `http://localhost:8080`, it does not show me
"hello, world!" in the browser, and the error log shows this:
ERROR 2014-05-18 08:48:05,002 module.py:714] Request to '/' failed
Traceback (most recent call last):
File "/home/bayu/.go_appengine/google/appengine/tools/devappserver2/module.py", line 708, in _handle_request
environ, wrapped_start_response)
File "/home/bayu/.go_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "/home/bayu/.go_appengine/google/appengine/tools/devappserver2/module.py", line 1228, in _handle_script_request
request_type)
File "/home/bayu/.go_appengine/google/appengine/tools/devappserver2/instance.py", line 382, in handle
request_type))
File "/home/bayu/.go_appengine/google/appengine/tools/devappserver2/http_proxy.py", line 148, in handle
connection.connect()
File "/home/bayu/.pyenv/versions/2.7.6/lib/python2.7/httplib.py", line 772, in connect
self.timeout, self.source_address)
File "/home/bayu/.pyenv/versions/2.7.6/lib/python2.7/socket.py", line 571, in create_connection
raise err
error: [Errno 111] Connection refused
INFO 2014-05-18 08:48:05,007 module.py:639] default: "GET / HTTP/1.1" 500 -
I have googled it and tried another getting-started tutorial here
<http://blog.joshsoftware.com/2014/03/12/learn-to-build-and-deploy-simple-go-web-apps-part-one/>
but it does not work either.
What should I do?
* i'm on ubuntu 12.04 , Python 2.7.3, go version go1.2.1 linux/386
* using go_appengine_sdk_linux_386-1.9.4.zip
this is my hello.go and app.yaml
hello.go
package hello
import (
"fmt"
"net/http"
)
func init() {
http.HandleFunc("/", handler)
}
func handler(w http.ResponseWriter, r *http.Request) {
fmt.Fprint(w, "Hello,")
}
app.yaml
application: helloworld
version: 1
runtime: go
api_version: go1
handlers:
- url: /.*
script: _go_app
thank you,
Answer: Not sure if this will help anyone or not but I came across the exact same
issues and finally tracked it down to my hosts file. Basically what is
happening is the app engine server is running on 0.0.0.0:8080 yet the
interface / output is displaying localhost:8080. In my hosts file 0.0.0.0
wasn't mapped to localhost, so:
1. edit your `/private/etc/hosts` file with vim or whatever
2. add `0.0.0.0 localhost` to your hosts file
3. save, start a new terminal and try to run your appengine server again
After that simple edit everything worked for me.
|
Login and get HTML file using python
Question: Hey, I'm trying to log in to a website and get the HTML of the page after the
login, and I can't figure out how to do it with Python (I'm using Python 2.7). I need
to fill out the HTML forms on this website:
'user'= 'magaleast' and 'password' = '1181' (real login details that are
useless to me). Then the website redirects the user to an authentication page
and when it's done it goes to the page I need.
Any ideas?
EDIT: trying this code:
from mechanize import Browser
import cookielib
br = Browser()
br.open("http://www.shiftorganizer.com/")
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)
# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
# You need to spot the name of the form in source code
br.select_form(name = "user")
# Spot the name of the inputs of the form that you want to fill,
# say "username" and "password"
br.form["user"] = "magaleast"
br.form["password"] = "1181"
response = br.submit()
print response.read()
but i get:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>ShiftOrganizer סידור עבודה בפחות משניה</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<script type="text/javascript">
var emptyCompany=1
function subIfNewApp()
{
if (emptyCompany){
document.authenticationForm.action = document.getElementById('userName').value + "/authentication.asp"
} else {
document.authenticationForm.action = document.getElementById('Company').value + "/authentication.asp"
}
document.authenticationForm.submit()
}
</script>
</head>
<body onload="subIfNewApp()">
<form name="authenticationForm" method="post" action="">
<input type="hidden" name="userName" id="userName" value="magaleast" />
<input type="hidden" name="password" id="password" value="1181" />
<input type="hidden" name="Company" id="Company" value="שם חברה" />
</form>
</body>
</html>
Is JS the problem? Because it stops at the authentication part again?
Answer: It seems that the website requires some JS indeed so the code below won't be
enough. In that particular case, by looking at the source code, it seems that
at the end this url is used :
<http://shifto.shiftorganizer.com/magaleast/welcome.asp?password=1181> which
seems to contain similar information to the page after login (although I
can't read Hebrew, so I may be totally wrong...). If so, you could simply do:
import urllib
url = 'http://shifto.shiftorganizer.com/*username*/welcome.asp?password=*password*'
print urllib.urlopen(url).read()
* * *
For information, code to login to a form which does not require Javascript.
I would use the [mechanize](http://wwwsearch.sourceforge.net/mechanize/)
library (also Requests will work), doing something like
from mechanize import Browser
import cookielib
br = Browser()
br.set_cookiejar(cookielib.LWPCookieJar())
# Browser options
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.open("your url")
# You need to spot the name of the form in source code
br.select_form(name="form_name")
# Spot the name of the inputs of the form that you want to fill,
# say "username" and "password"
br.form["username"] = "magaleast"
br.form["password"] = "1181"
response = br.submit()
print response.read()
|
Python unicode dictionary to string from twitch stream
Question: I'm trying to decode a twitch api answer in python.
import urllib2
import json
url = 'http://api.justin.tv/api/stream/list.json?channel=kungentv'
result =json.loads(urllib2.urlopen(url, timeout = 100).read().decode('utf-8'))
print result
If I run this I get this:
[{u'broadcast_part': 6, u'featured': True, u'channel_subscription': True, u'embed_count': 0, u'id': u'9602378624', u'category': u'gaming', u'title': u'5x Legend - Playing the TOP DECKS from DECK WARS to Legendary! Handlock, Miracle and Shaman. (Never played Shaman EVUR)', u'video_height': 1080, u'site_count': 0, u'embed_enabled': False, u'channel': {u'category': u'gaming', u'status': u'5x Legend - Playing the TOP DECKS from DECK WARS to Legendary! Handlock, Miracle and Shaman. (Never played Shaman EVUR)', u'views_count': 56403101, u'subcategory': None, u'language': u'en', u'title': u'kungentv', u'screen_cap_url_huge': u'http://static-cdn.jtvnw.net/previews/live_user_kungentv-630x473.jpg', u'producer': True, u'tags': None, u'subcategory_title': u'', u'category_title': u'', u'screen_cap_url_large': u'http://static-cdn.jtvnw.net/previews/live_user_kungentv-320x240.jpg', u'mature': None, u'screen_cap_url_small': u'http://static-cdn.jtvnw.net/previews/live_user_kungentv-70x53.jpg', u'screen_cap_url_medium': u'http://static-cdn.jtvnw.net/previews/live_user_kungentv-150x113.jpg', u'timezone': u'Europe/Stockholm', u'login': u'kungentv', u'channel_url': u'http://www.justin.tv/kungentv', u'id': 30383713, u'meta_game': u'Hearthstone: Heroes of Warcraft'}, u'up_time': u'Mon May 19 00:34:43 2014', u'meta_game': u'Hearthstone: Heroes of Warcraft', u'format': u'live', u'stream_type': u'live', u'channel_count': 3671, u'abuse_reported': False, u'video_width': 1920, u'geo': u'SE', u'name': u'live_user_kungentv', u'language': u'en', u'stream_count': 0, u'video_bitrate': 3665.328125, u'broadcaster': u'obs', u'channel_view_count': 0}]
How do I decode this so I can use it as a normal string dictionary?
Thanks in advance!
Answer: You don't have to decode anything; Python will encode and decode string values
as needed for you.
Demo:
>>> result = [{u'broadcast_part': 6, u'featured': True, u'channel_subscription': True, u'embed_count': 0, u'id': u'9602378624', u'category': u'gaming', u'title': u'5x Legend - Playing the TOP DECKS from DECK WARS to Legendary! Handlock, Miracle and Shaman. (Never played Shaman EVUR)', u'video_height': 1080, u'site_count': 0, u'embed_enabled': False, u'channel': {u'category': u'gaming', u'status': u'5x Legend - Playing the TOP DECKS from DECK WARS to Legendary! Handlock, Miracle and Shaman. (Never played Shaman EVUR)', u'views_count': 56403101, u'subcategory': None, u'language': u'en', u'title': u'kungentv', u'screen_cap_url_huge': u'http://static-cdn.jtvnw.net/previews/live_user_kungentv-630x473.jpg', u'producer': True, u'tags': None, u'subcategory_title': u'', u'category_title': u'', u'screen_cap_url_large': u'http://static-cdn.jtvnw.net/previews/live_user_kungentv-320x240.jpg', u'mature': None, u'screen_cap_url_small': u'http://static-cdn.jtvnw.net/previews/live_user_kungentv-70x53.jpg', u'screen_cap_url_medium': u'http://static-cdn.jtvnw.net/previews/live_user_kungentv-150x113.jpg', u'timezone': u'Europe/Stockholm', u'login': u'kungentv', u'channel_url': u'http://www.justin.tv/kungentv', u'id': 30383713, u'meta_game': u'Hearthstone: Heroes of Warcraft'}, u'up_time': u'Mon May 19 00:34:43 2014', u'meta_game': u'Hearthstone: Heroes of Warcraft', u'format': u'live', u'stream_type': u'live', u'channel_count': 3671, u'abuse_reported': False, u'video_width': 1920, u'geo': u'SE', u'name': u'live_user_kungentv', u'language': u'en', u'stream_count': 0, u'video_bitrate': 3665.328125, u'broadcaster': u'obs', u'channel_view_count': 0}]
>>> result[0]['broadcast_part']
6
Python 2 uses ASCII to encode / decode between Unicode and byte string values
as needed.
Generally speaking, you _want_ your textual data as Unicode values; your
program should be a Unicode sandwich. Decode your data to Unicode when you
receive it, encode when you send it out again. It's just like any other
serialized data; you don't work with string timestamps when you can work with
`datetime` objects, you don't work with bytestrings if you try to make numeric
calculations, you'd convert to `int` or `float` or `decimal.Decimal()` values
instead, etc.
|
locale.getpreferredencoding() - why does this reset string.letters?
Question:
>>> import string
>>> import locale
>>> string.letters
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
>>> locale.getpreferredencoding()
'UTF-8'
>>> string.letters
'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz'
Any workarounds for this?
Platform: Linux Python2.6.7 and Python2.7.3 seem to be affected, Works fine in
Python3 (with `ascii_letters`)
Answer: **Note** : what OP did to solve the issue is to pass `encoding='UTF-8'` to the
`open` call. If you run into this issue and are just looking for a fix this
works. The rest of the post is an emphasis on _why_.
* * *
### What happens
As Lukas said, the docs specify:
> On some systems, it is necessary to invoke setlocale() to obtain the user
> preferences
Initially, string.letters is set to `lowercase + uppercase`:
lowercase = 'abcdefghijklmnopqrstuvwxyz'
uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
letters = lowercase + uppercase
However, when you call `getpreferredencoding()`, the `_locale` module
overrides it by calling `PyDict_SetItemString(string, "letters", ulo);` after
it generates them inside `fixup_ulcase(void)` with the following:
/* create letters string */
n = 0;
for (c = 0; c < 256; c++) {
if (isalpha(c))
ul[n++] = c;
}
ulo = PyString_FromStringAndSize((const char *)ul, n);
if (!ulo)
return;
if (string)
PyDict_SetItemString(string, "letters", ulo);
Py_DECREF(ulo);
In turn, this is called in `PyLocale_setlocale` which is indeed `setlocale`,
which is called by `getpreferredencoding` (code here:
<http://hg.python.org/cpython/file/07a6fca7ff42/Lib/locale.py#l612>):
def getpreferredencoding(do_setlocale = True):
"""Return the charset that the user is likely using,
according to the system configuration."""
if do_setlocale:
oldloc = setlocale(LC_CTYPE)
try:
setlocale(LC_CTYPE, "")
except Error:
pass
result = nl_langinfo(CODESET)
setlocale(LC_CTYPE, oldloc)
return result
else:
return nl_langinfo(CODESET)
### How do I avoid it?
Try `getpreferredencoding(False)`
### Why does it not happen in windows?
Windows uses different code for getting the locale, as you can see
[here](http://hg.python.org/cpython/file/07a6fca7ff42/Lib/locale.py#l593).
### In Python 3
In Python 3, `getdefaultlocale` does not accept a boolean setlocale variable
and does not call setlocale itself as you can see
[here](http://hg.python.org/cpython/file/d1bf37def4fd/Lib/locale.py#l506).
|
python cookiejar: CookieJar instance has no attribute 'load'
Question: I am quite new to python and I am trying to set a cookie using the `cookielib`
library of python, like so:
>>> import cookielib
>>> cj = cookielib.CookieJar()
>>> cj.load('cookies.txt')
and I get thrown this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: CookieJar instance has no attribute 'load'
I think the same code used to work earlier, but I am entirely unsure why this
is happening now.
Answer: [`load()`](https://docs.python.org/2/library/cookielib.html#cookielib.FileCookieJar.load)
is a method on the _[`FileCookieJar()`
class](https://docs.python.org/2/library/cookielib.html#cookielib.FileCookieJar)_
only, a subclass of
[`CookieJar()`](https://docs.python.org/2/library/cookielib.html#cookielib.CookieJar).
Looks like you got confused between the two somewhere.
In practice you use one of its concrete subclasses, which implement the actual
file-format parsing. For a Netscape-format `cookies.txt` the following works:
import cookielib
cj = cookielib.MozillaCookieJar()
cj.load('cookies.txt')
|
Understanding python import of gevent
Question: This fails for me:
import gevent
gevent.monkey.patch_all()
This works:
from gevent import monkey
monkey.patch_all()
Is there anything wrong with what I am trying to do by accessing gevent.monkey?
Also, I am confused on this snippet:
import gevent as ge
from gevent import monkey
ge.monkey.patch_all()
What makes ge.monkey accessible now? If I comment out `from gevent import monkey`,
this doesn't work. I have used
import datetime
datetime.datetime.now()
which works perfectly fine but monkey doesn't.
Answer: `monkey` is a _module_ inside the `gevent` module.
`datetime` is a _class_ inside the `datetime` module.
When you `import x`, all of the objects defined in `x` become available in the
`x` namespace, _but child modules are not imported automatically_.
While your top and bottom examples look identical, the type of object makes
all the difference. Once `from gevent import monkey` (or `import gevent.monkey`)
has run, the submodule is also bound as an attribute of the `gevent` package,
which is why `ge.monkey.patch_all()` works in your third snippet.
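A quick way to see this (it assumes gevent's `__init__` does not itself import
`gevent.monkey`, which the failure in your first snippet suggests):
import sys
import gevent

print hasattr(gevent, 'monkey')         # False: the submodule is not loaded yet

from gevent import monkey               # imports the submodule and also binds it
                                        # as an attribute of the gevent package
print gevent.monkey is monkey           # True
print ('gevent.monkey' in sys.modules)  # True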
|
reading of csv into dictionary, first line becomes the name
Question: in python I have a csv file, which has lots of parameters in it like:
Name, Surname, Address1, Address2, email, etc
Adam1,Smith1,12 Connaugh Rd.,,[email protected], etc...
Adam2,Smith2,12 Connaugh Rd.,,[email protected], etc...
Adam3,Smith3,12 Connaugh Rd.,,[email protected], etc...
How do I read it, so first line Name, Surname, Address1, Address2, email, etc
becomes the name of parameter in dictionary? So I can get
Dict{Name:Adam1,Adam2, Adam3
Surname: Smith1,Smith2,Smith3
Address1: 12 Connaugh Rd.,12 Connaugh Rd.,12 Connaugh Rd.
etc.}
Since I'm going to use it in future, is that the best way to work with csv's
or is there something better?
Update1: repr(row) gives:
{None: ['\tSales Record Number', 'User Id', 'Buyer Full name', 'Buyer Phone Number', 'Buyer Email', 'Buyer Address 1', 'Buyer Address 2', 'Buyer Town/City', 'Buyer County', 'Buyer Postcode', 'Buyer Country', 'Item Number', 'Item Title', 'Custom Label', 'Quantity', 'Sale Price', 'Included VAT Rate', 'Postage and Packaging', 'Insurance', 'Cash on delivery fee', 'Total Price', 'Payment Method', 'Sale Date', 'Checkout Date', 'Paid on Date', 'Dispatch Date ', 'Invoice date', 'Invoice number', 'Feedback left', 'Feedback received', 'Notes to yourself', 'PayPal Transaction ID', 'Delivery Service', 'Cash on delivery option', 'Transaction ID', 'Order ID', 'Variation Details']}
{None: ['3528', 'steve33559', 'Steven sdf', '45678', '[email protected]', '1 sdfgh Road, ', '', 'dfgh', 'dfgh', 'ertyu', 'United Kingdom', '151216259484', 'Small stuff ', '', '1', '\xa311.99', '', '\xa30.00', '\xa30.00', '', '\xa311.99', 'PayPal', '21-Mar-14', '21-Mar-14', '21-Mar-14', '', '', '', 'Yes', '', '', '384858394n5838f48', 'Other 24 Hour Courier', '', '49503847573848', '', '']}
{None: ['3529', 'buyretry13', 'Tariq fhb', '345678', '[email protected]', '80 rtyukfd Road', '', 'Manchester', 'wertyuk', 'M16 1KY', 'United Kingdom', '76543283858', 'Apple iPhone 5', '100329', '1', '\xa31.95', '', '\xa30.00', '\xa30.00', '', '\xa31.95', 'PayPal', '21-Mar-14', '21-Mar-14', '21-Mar-14', '', '', '', 'Yes', '', '', '45678723456', 'Royal Mail 2nd Class', '', '3456785737', '', '']}
Answer: You can use `zip()` to transpose columns to rows, and apply that to a
dictionary comprehension to extract the first element as the key:
import csv
with open(yourfile, 'rb') as infile:
reader = csv.reader(infile)
result = {c[0]: c[1:] for c in zip(*reader)}
This produces one dictionary, each with all entries in a column as a list of
values.
You'd be better of using
[`csv.DictReader()`](https://docs.python.org/2/library/csv.html#csv.DictReader)
here however. This produces a dict object _per row_ :
import csv
with open(yourfile, 'rb') as infile:
reader = csv.DictReader(infile)
for row in reader:
print row
where `row` is then `{'Name': 'Adam1', 'Surname': 'Smith1', 'Address1':
'Connaugh rd.', ...}` for the first row, `{'Name': 'Adam2', 'Surname':
'Smith2', 'Address1': 'Connaugh rd.', ...}`, etc. The `DictReader()` object
takes the keys from the first row in the CSV data.
This keeps each row of data together as one easy-to-access object instead of
having to correlate your data between separate rows.
Demo:
>>> import csv
>>> sample = '''\
... Name,Surname,Address1,Address2,email,etc
... Adam1,Smith1,12 Connaugh Rd.,,[email protected],etc...
... Adam2,Smith2,12 Connaugh Rd.,,[email protected],etc...
... Adam3,Smith3,12 Connaugh Rd.,,[email protected],etc...
... '''
>>> reader = csv.DictReader(sample.splitlines())
>>> print next(reader)
{'Surname': 'Smith1', 'Name': 'Adam1', 'Address1': '12 Connaugh Rd.', 'Address2': '', 'etc': 'etc...', 'email': '[email protected]'}
>>> print next(reader)
{'Surname': 'Smith2', 'Name': 'Adam2', 'Address1': '12 Connaugh Rd.', 'Address2': '', 'etc': 'etc...', 'email': '[email protected]'}
>>> print next(reader)
{'Surname': 'Smith3', 'Name': 'Adam3', 'Address1': '12 Connaugh Rd.', 'Address2': '', 'etc': 'etc...', 'email': '[email protected]'}
|
PyYAML path at serialization time vs deserialization time
Question: I am working on a game engine which includes a simple GUI development tool.
The GUI tool allows a user to define various entities and components, which
can then be saved in a configuration file. When the game engine runtime loads
the configuration file, it can determine how to create the various entities
and components for use in the game.
For a configuration file saving mechanism, I am using PyYAML. The issue that I
am having stems from the fact that the serialization process occurs in a
module which is in a different directory than the module which loads and
parses the file through PyYAML.
**Simplified Serializer**
import yaml
def save_config(context, file_name):
config_file = file(file_name, 'w')
# do some various processing on the context dict object
yaml.dump(context, config_file)
config_file.close()
This takes the `context` object, which is a `dict` that represents various
game objects, and writes it to a config file. This works without issue.
**Simplified Deserializer in engine**
import yaml
def load(file_name):
config_file = open(file_name, 'r')
context = yaml.load(config_file)
return context
This is where the problem occurs. On `yaml.load(config_file)`, I will receive
an error, because it fails to find a particular name on a certain module. I
understand why this is happening. For example, when I serialize the config
file, it will list an `AssetComponent` (a component type in the engine) as
being at `engine.common.AssetComponent`. However, from the deserializer's
perspective, the `AssetComponent` should just be at `common.AssetComponent`
(because the deserialization code itself exists within the `engine` package),
so it fails to find it under `engine`.
Is there a way to manually handle paths when serializing or deserializing with
PyYAML? I would like to make sure they both happen from the same
"perspective."
**Edit:** The following shows what a problematic config file might look like,
followed by what the manually corrected config would look like
**Problematic**
!!python/object/apply:collections.defaultdict
args: [!!python/name:__builtin__.dict '']
dictitems:
assets:
- !!python/object:common.Component
component: !!python/object:engine.common.AssetComponent {file_name: ../content/sticksheet.png,
surface: null}
text: ../content/sticksheet.png
type_name: AssetComponent
**Corrected**
!!python/object/apply:collections.defaultdict
args: [!!python/name:__builtin__.dict '']
dictitems:
assets:
- !!python/object:tools.common.Component
component: !!python/object:common.AssetComponent {file_name: ../content/sticksheet.png,
surface: null}
text: ../content/sticksheet.png
type_name: AssetComponent
Answer: You can explicitly [declare the Python type of an
object](http://pyyaml.org/wiki/PyYAMLDocumentation#Objects) in a YAML
document:
!!python/object:module_foo.ClassFoo {
attr_foo: "spam",
…,
}
|
NoReverseMatch error using get_absolute_url()
Question: I am trying to use get_absolute_url to follow DRY rules. If I code the class
to build the href directly from the slug it all works fine. Ugly, messy but
working...
So I am trying to get this done right using get_absolute_url() and I am
getting stuck with a NoReverseMatch exception using the code below. I know
this must be some kind of newbie error, but I have been up and down all the
docs and forums for days, and still can't figure this one out!
I get this error:
NoReverseMatch at /calendar
Reverse for 'pEventsCalendarDetail' with arguments '()' and keyword arguments '{u'slug': u'Test-12014-05-05'}' not found. 0 pattern(s) tried: []
Request Method: GET
Request URL: http://127.0.0.1:8000/calendar
Django Version: 1.6
Exception Type: NoReverseMatch
Exception Value:
Reverse for 'pEventsCalendarDetail' with arguments '()' and keyword arguments '{u'slug': u'Test-12014-05-05'}' not found. 0 pattern(s) tried: []
Exception Location: /usr/local/lib/python2.7/site-packages/django/core/urlresolvers.py in _reverse_with_prefix, line 429
Python Executable: /usr/local/opt/python/bin/python2.7
Python Version: 2.7.6
using the following models.py excerpt:
@python_2_unicode_compatible
class Event(models.Model):
eventName = models.CharField(max_length=40)
eventDescription = models.TextField()
eventDate = models.DateField()
eventTime = models.TimeField()
eventLocation = models.CharField(max_length=60, null=True, blank=True)
creationDate = models.DateField(auto_now_add=True)
eventURL = models.URLField(null=True, blank=True)
slug = AutoSlugField(populate_from=lambda instance: instance.eventName + str(instance.eventDate),
unique_with=['eventDate'],
slugify=lambda value: value.replace(' ','-'))
@models.permalink
def get_absolute_url(self):
from django.core.urlresolvers import reverse
path = reverse('pEventsCalendarDetail', (), kwargs={'slug':self.slug})
return "http://%s" % (path)
The complete urls.py file:
from django.conf.urls import patterns, include, url
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
# Examples:
url(r'^$', 'electricphoenixfll.views.home', name='home'),
url(r'^home$', 'electricphoenixfll.views.home', name='home'),
url(r'^calendar$', 'electricphoenixfll.views.calendar', name='calendar'),
url(r'^forum$', 'electricphoenixfll.views.forum', name='forum'),
url(r'^donate$', 'electricphoenixfll.views.donate', name='donate'),
url(r'^donate_thanks$', 'electricphoenixfll.views.donate_thanks', name='donate_thanks'),
url(r'^what_is_fll$', 'electricphoenixfll.views.what_is_fll', name='what_is_fll'),
url(r'^core_values$', 'electricphoenixfll.views.core_values', name='core_values'),
url(r'^follow_the_phoenix$', 'electricphoenixfll.views.follow_the_phoenix', name='follow_the_phoenix'),
url(r'^followEnter/$', 'electricphoenixfll.views.followEnter', name='followEnter'),
url(r'^followList/$', 'electricphoenixfll.views.followList', name='followList'),
url(r'^about_us$', 'electricphoenixfll.views.about_us', name='about_us'),
url(r'^calendarDetail/(?P<slug>[\w-]+)/$', 'phoenixEvents.views.calendarDetail', name='pEventsCalendarDetail'),
url(r'^admin/', include(admin.site.urls)),
)
Answer: The second positional argument to
[`reverse()`](https://docs.djangoproject.com/en/dev/ref/urlresolvers/#reverse)
is `urlconf` argument:
reverse(viewname[, urlconf=None, args=None, kwargs=None, current_app=None])
To make it work use keyword argument for setting `args`:
path = reverse('pEventsCalendarDetail', args=(), kwargs={'slug':self.slug})
Or, don't set `args` at all:
path = reverse('pEventsCalendarDetail', kwargs={'slug':self.slug})
|
The first argument for python Tkinter
Question: I am using Tkinter with python 2.7 and am curious about why the following code
snippet would work:
import Tkinter as tk
import ttk
class Application(ttk.Frame):
def __init__(self, master=None):
ttk.Frame.__init__(self, master) # This is where my question is
self.grid()
return
if __name__ == '__main__':
root = tk.Tk()
app = Application(root)
root.mainloop()
1) The `ttk.Frame.__init__` takes one argument, which is the `master`. But now
the first argument is an instance inherited from it, and the second is
`master`. How did this work?
2) I noticed that the `ttk.Frame` class also have a function called
`mainloop`. How is this different from `root.mainloop()`?
Thanks!
Answer: 1) `ttk.Frame.__init__()` inside `Application.__init__()` is used to
initialize the base class `ttk.Frame`. Because the method is accessed through
the class rather than through an instance, the instance (`self`) has to be
passed explicitly as the first argument; the explanation is in the docs
[here](https://docs.python.org/2/reference/datamodel.html#object.__init__).
2) `mainloop()` on a `ttk.Frame` and `root.mainloop()` are equivalent: both
enter the same Tk event loop. Please look at
[this](http://python.6.x6.nabble.com/the-difference-between-mainloop-and-app-mainloop-and-root-mainloop-td1975563.html).
|
Reading a csv (text) file with pkgutil.get_data
Question: so I'm writing some code which needs to pull configuration / data from CSV
files, which are packaged with the application. From what I understand using
`pkgutil` is the 'right' way to do this. So what I'm trying to do is:
import pkgutil
MatFile = pkgutil.get_data('impy.implosions', 'LILAC_Materials.csv')
which works fine and gives me the file's bytes. But I can't figure out how to
feed this into `csv.reader` in a clean way. I found this [old
question](http://stackoverflow.com/questions/5003755/how-to-use-pkgutils-get-
data-with-csv-reader-in-python) but its solution would be something like:
MatFile = io.StringIO(MatFile)
dataReader = csv.reader(MatFile , delimiter=',')
which doesn't work since `StringIO` expects a str. The complementary
functionality in `io` would be `BytesIO`, but then that doesn't help me since
`csv.reader` can't handle it. It seems like this should have a simple
solution, but I'm not familiar with handling byte data in python. Thanks!
Answer: In Python 3, the `csv` module's classes expect you to pass them an iterable
that yields Unicode strings. If you are have your data as a single byte
strings, you first need to decode the data, then split it up into lines.
Here's a quick bit of code that should work:
MatFile = pkgutil.get_data('impy.implosions', 'LILAC_Materials.csv')
dataReader = csv.reader(MatFile.decode('utf-8').splitlines(), delimiter=',')
I'm guessing the file was encoded with UTF-8 (or ASCII, which is a subset). If
you know otherwise swap in the appropriate encoding in the `decode` call.
`str.splitlines` takes care of breaking up the single string into a list of
lines, which is perfectly acceptable as input to `csv.reader`.
|
python modifying the elements in a dictionary
Question: So I have a python dictionary called "p", where
import nltk, json, cPickle, itertools
import numpy as np
p = {key1: nan, key2: 0.1, key3: nan}
nan is np.nan.
I want to write a piece of code that if a value in the dictionary is equal to
nan, then it is changed to 0 instead. I wrote the following code:
for key,value in p:
if value == np.nan:
p[key] = 0
However, the compiler returns with an error saying "ValueError: too many
values to unpack"
I also want to sum up all the values in the dictionary after all the nan's
have been converted to 0s, then divide each value by that sum. I wrote
normalization_factor = float(sum(p.values()))
for key, value in p:
p[key] = value/normalization_factor
Again, the compiler returns the ValueError stated above.
Is there a way to fix the error, or a way to go around this error by doing
something else?
Answer: By default, iterating over a dictionary iterates over only the keys. To
iterate over key/value pairs, you need to do `for key, value in p.iteritems()`
(or `p.items()` in Python 3).
However, you will then run into problems because `nan` is not equal to itself.
You should do `if value is np.nan` or `if np.isnan(value)`.
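Putting both points together, a minimal sketch of the corrected loops (with
placeholder keys) might look like this:
import numpy as np

p = {'key1': np.nan, 'key2': 0.1, 'key3': np.nan}

# Replace every NaN value with 0.
for key, value in p.items():          # p.iteritems() in Python 2 avoids the copy
    if isinstance(value, float) and np.isnan(value):
        p[key] = 0

# Normalise by the sum of all the values.
normalization_factor = float(sum(p.values()))
for key, value in p.items():
    p[key] = value / normalization_factor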
|
Is there a way to get the python program to "refresh the sd drive connection"
Question: My daughters SD card has gone corrupt, and I'm trying to recover the
photos/files. I have tried various recover software but to no avail.
However I have found that if I use MSDOS, I can copy various photos, however
intermittently this fails and says "theres no disk in drive" I assuming the
file its trying to copy is corrupt, I can overcome this by pulling out the sd
card and re-inserting it.
So opportunity for a program! :) - I got the file names via dos, created a
text file and knocked up a rough program.... However I keep getting the
"theres no disk in drive"
Hence... Is there a way to get the python program to "refresh the sd drive
connection"
Many thanks.
#!/usr/bin/env python
""" File copy program"""
import os
import shutil
def main():
Basepathfile = os.path.dirname(os.path.abspath(__file__))
FileName = 'Filenames.txt'
PrelimPathFile = os.path.join(Basepathfile, FileName )
PathFile = os.path.normpath(PrelimPathFile)
Filename1 = open(PathFile, 'r')
Filename2 = 'c:\\Lili\\'
for line in Filename1:
line = line.replace('\r\n', '').replace('\r', '').replace('\n', '')
Filename1 = 'f:\\DCIM\\Camera\\'+ line
Filename2 = 'c:\\Lili\\'+ line
try:
shutil.copy2(Filename1, Filename2)
print 'Copied... '
print Filename1, '...to...', Filename2
except:
print 'Failed to copy ', Filename1, ' to ', Filename2
if __name__ == '__main__':
main()
Answer: This sounds more like a question about Windows than Python. If your SD card is
plugged in via a USB card reader, the easiest approach would probably be to
disable and re-enable the USB hub device it is connected to. This will cause
windows to re-enumerate all the USB devices attached to that hub.
You can do this from the command line with `DevCon`:
<http://support.microsoft.com/kb/311272>
If this is something other than a USB device, I'm not really sure, but maybe
DevCon can still do the trick by trying to disable and re-enable the device
directly.
Alternatively you could try doing it manually with `robocopy`. It has options
not to copy the files it has already copied. You'd still have to manually pull
the SD card a few times, but if you're lucky, the SD card will stay up long
enough to confirm everything was pulled across eventually.
|
Iterate over all but d-th dimension of any boost::multi_array
Question: Quite often one wants to apply operation `f()` along dimension `d` of an
`N`-dimensional array `A`. This implies looping over all remaining dimensions
of `A`. I tried to figure out if `boost::multi_array` was capable of this.
Function `f(A)` should work on all varieties of `boost::multi_array`,
including `boost:multi_array_ref`, `boost::detail::multi_array::sub_array`,
and `boost::detail::multi_array::array_view`, ideally also for the rvalue
types such as `boost::multi_array_ref<T, NDims>::reference`.
The best I could come up with is an implementation of a `reshape()` function
that can be used to reshape the ND array into a 3D array, such that the
working dimension is always the middle one. Here is `f.hpp`:
#include "boost/multi_array.hpp"
#include <ostream>
using namespace boost;
typedef multi_array_types::index index_t;
typedef multi_array_types::index_range range;
template <template <typename, std::size_t, typename...> class Array,
typename T, std::size_t NDims, typename index_t, std::size_t NDimsNew>
multi_array_ref<T, NDimsNew>
reshape(Array<T, NDims>& A, const array<index_t, NDimsNew>& dims) {
multi_array_ref<T, NDimsNew> a(A.origin(), dims);
return a;
}
template <template <typename, std::size_t, typename...> class Array, typename T>
void f(Array<T, 1>& A) {
for (auto it : A) {
// do something with it
std::cout << it << " ";
}
std::cout << std::endl;
}
template <template <typename, std::size_t, typename...> class Array,
typename T, std::size_t NDims>
void f(Array<T, NDims>& A, long d) {
auto dims = A.shape();
typedef typename std::decay<decltype(*dims)>::type type;
// collapse dimensions [0,d) and (d,Ndims)
array<type, 3> dims3 = {
std::accumulate(dims, dims + d, type(1), std::multiplies<type>()),
dims[d],
std::accumulate(dims + d + 1, dims + NDims, type(1), std::multiplies<type>())
};
// reshape to collapsed dimensions
auto A3 = reshape(A, dims3);
// call f for each slice [i,:,k]
for (auto Ai : A3) {
for (index_t k = 0; k < dims3[2]; ++k) {
auto S = Ai[indices[range()][k]];
f(S);
}
}
}
template <template <typename, std::size_t, typename...> class Array,
typename T, std::size_t NDims>
void f(Array<T, NDims>& A) {
for (long d = NDims; d--; ) {
f(A, d);
}
}
This is the test program `test.cpp`:
#include "f.hpp"
int main() {
boost::multi_array<double, 3> A(boost::extents[2][2][3]);
boost::multi_array_ref<double, 1> a(A.data(), boost::extents[A.num_elements()]);
auto Ajk = A[1];
auto Aik = A[boost::indices[range()][1][range()]];
int i = 0;
for (auto& ai : a) ai = i++;
std::cout << "work on boost::multi_array_ref" << std::endl;
f(a);
std::cout << "work on boost::multi_array" << std::endl;
f(A);
std::cout << "work on boost::detail::multi_array:sub_array" << std::endl;
f(Ajk);
std::cout << "work on boost::detail::multi_array:sub_array" << std::endl;
f(Aik); // wrong result, since reshape() ignores strides!
//f(A[1]); // fails: rvalue A[1] is boost::multi_array_ref<double, 3ul>::reference
}
Clearly, there are problems with this approach, namely when a slice is passed
to `f()`, such that the memory is no longer contiguous, which defeats the
implementation of `reshape()`.
It appears a better (more C++-like) way would be to construct an aggregate
iterator out of the iterators that the boost types provide, since this would
automatically take care of non-unity strides along a given dimension.
`boost::detail::multi_array::index_gen` looks relevant, but it is not quite
clear to me how this can be used to make an iterator over all slices in
dimension `d`. Any ideas?
_Note:_
There are similar questions already on SO, but none was quite satisfactory to
me. I am not interested in specialized solutions for `N = 3` or `N = 2`. It's
got to work for any `N`.
_Update:_
Here is the equivalent of what I want in Python:
def idx_iterator(s, d, idx):
if len(s) == 0:
yield idx
else:
ii = (slice(None),) if d == 0 else xrange(s[0])
for i in ii:
for new_idx in idx_iterator(s[1:], d - 1, idx + [i]):
yield new_idx
def iterator(A, d=0):
for idx in idx_iterator(A.shape, d, []):
yield A[idx]
def f(A):
for d in reversed(xrange(A.ndim)):
for it in iterator(A, d):
print it
print
import numpy as np
A = np.arange(12).reshape((2, 2, 3))
print "Work on flattened array"
f(A.ravel())
print "Work on array"
f(A)
print "Work on contiguous slice"
f(A[1])
print "Work on discontiguous slice"
f(A[:,1,:])
The same should somehow be possible using the functionality in
`index_gen.hpp`, but I have still not been able to figure out how.
Answer: Ok, after spending a significant amount of time studying the implementation of
`boost::multi_array`, I am now ready to answer my own question: No, there are
no provisions anywhere in `boost::multi_array` that would allow one to iterate
along any but the first dimension. The best I could come up with is to
construct an iterator that manually manages the `N-1` indices that are being
iterated over. Here is `slice_iterator.hpp`:
#include "boost/multi_array.hpp"
template <template <typename, std::size_t, typename...> class Array,
typename T, std::size_t NDims>
struct SliceIterator {
typedef Array<T, NDims> array_type;
typedef typename array_type::size_type size_type;
typedef boost::multi_array_types::index_range range;
typedef boost::detail::multi_array::multi_array_view<T, 1> slice_type;
typedef boost::detail::multi_array::index_gen<NDims, 1> index_gen;
array_type& A;
const size_type* shape;
const long d;
index_gen indices;
bool is_end = false;
SliceIterator(array_type& A, long d) : A(A), shape(A.shape()), d(d) {
int i = 0;
for (; i != d; ++i) indices.ranges_[i] = range(0);
indices.ranges_[i++] = range();
for (; i != NDims; ++i) indices.ranges_[i] = range(0);
}
SliceIterator& operator++() {
// addition with carry, excluding dimension d
int i = NDims - 1;
while (1) {
if (i == d) --i;
if (i < 0) {
is_end = true;
return *this;
}
++indices.ranges_[i].start_;
++indices.ranges_[i].finish_;
if (indices.ranges_[i].start_ < shape[i]) {
break;
} else {
indices.ranges_[i].start_ = 0;
indices.ranges_[i].finish_ = 1;
--i;
}
}
return *this;
}
slice_type operator*() {
return A[indices];
}
// fakes for iterator protocol (actual implementations would be expensive)
bool operator!=(const SliceIterator& r) {
return !is_end;
}
SliceIterator begin() {return *this;}
SliceIterator end() {return *this;}
};
template <template <typename, std::size_t, typename...> class Array,
typename T, std::size_t NDims>
SliceIterator<Array, T, NDims> make_slice_iterator(Array<T, NDims>& A, long d) {
return SliceIterator<Array, T, NDims>(A, d);
}
// overload for rvalue references
template <template <typename, std::size_t, typename...> class Array,
typename T, std::size_t NDims>
SliceIterator<Array, T, NDims> make_slice_iterator(Array<T, NDims>&& A, long d) {
return SliceIterator<Array, T, NDims>(A, d);
}
It can be used as
for (auto S : make_slice_iterator(A, d)) {
f(S);
}
and works for all examples in my question.
I must say that `boost::multi_array`'s implementation was quite disappointing
to me: Over 3700 lines of code for what should be little more than a bit of
index housekeeping. In particular the iterators, which are only provided for
the first dimension, aren't anywhere near a performance implementation: There
are actually up to `3*N + 5` comparisons carried out at each step to decide
whether the iterator has arrived at the end yet (note that my implementation
above avoids this problem by faking `operator!=()`), which makes this
implementation unsuitable for anything but arrays with a dominant last
dimension, which is handled more efficiently. Moreover, the implementation
doesn't take advantage of dimensions that are contiguous in memory. Instead,
it always proceeds dimension-by-dimension for operations such as array
assignment, wasting significant optimization opportunities.
In summary, I find `numpy`'s implementation of an N-dimensional array much
more compelling than this one. There are 3 more observations that tell me
"Hands Off" of `boost::multi_array`:
* I couldn't find any serious use cases for `boost::multi_array` anywhere on the web
* Development appears to have essentially stopped in 2002
* This question (and similar ones) on StackOverflow have hardly generated any interest ;-)
|
AppEngine application using Django fails to load
Question: Django is constantly causing our application to crash. After deployment the
application is running fine, but once the initial instance is
restarted/shutdown it often fails to start with an error similar to the
following:
Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 266, in Handle
result = handler(dict(self._environ), self._StartResponse)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/core/handlers/wsgi.py", line 236, in **call**
self.load_middleware()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/core/handlers/base.py", line 53, in load_middleware
raise exceptions.ImproperlyConfigured('Error importing middleware %s: "%s"' % (mw_module, e))
ImproperlyConfigured: Error importing middleware myfolder.middleware: "No module named myfolder.middleware"
Our file structure is similar to this:
|- app.yaml
|- __init__.py
|- settings.py
|- myfolder
|   |- __init__.py
|   |- middleware.py
|   |- ...
|- ...
Our app.yaml:
application: XXXXX
module: app
version: master
runtime: python27
api_version: 1
threadsafe: true
handlers:
- url: /api/(login|logout|passwd|master.*|banners.*)
script: app.handler
secure: always
...
builtins:
- django_wsgi: on
libraries:
- name: django
version: 1.5
env_variables:
DJANGO_SETTINGS_MODULE: 'settings'
We have 2 modules in our application and they both exhibit this behaviour
(they have similar configurations). Sometimes the modules will stay up for a
whole day before crashing again. After they fail to load, all subsequent
requests fail with the same error. Deploying one more time always solves the
problem temporarily.
We are using plain django with CloudSql. The problem is not reproducible in
the development server. After deployment everything in both modules works
fine. All middleware, ndb, memcache, cloudsql, taskqueue, etc. work, including all
the modules inside "myfolder" and every other library we copied in.
The following attempts at solving this problem haven't worked:
* We have tried using the appengine_config.py to force django to reload the settings with from django.conf import settings\nsettings._target = None\n
* Originally we had shared settings inside "myfolder" and were importing them with "from myfolder.shared_settings import *" inside the root settings.py but django could not load the module myfolder.shared_settings either (similar problem)
* using a custom mysettings.py and defining the DJANGO_SETTINGS_MODULE in the app.yaml or in python
The system is not live yet but will be soon and we are running out of options.
Other traces of similarly failing configurations:
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 266, in Handle
result = handler(dict(self._environ), self._StartResponse)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/core/handlers/wsgi.py", line 236, in __call__
self.load_middleware()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/core/handlers/base.py", line 45, in load_middleware
for middleware_path in settings.MIDDLEWARE_CLASSES:
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/conf/__init__.py", line 53, in __getattr__
self._setup(name)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/conf/__init__.py", line 48, in _setup
self._wrapped = Settings(settings_module)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/conf/__init__.py", line 134, in __init__
raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'settings' (Is it on sys.path?): No module named myfolder.settings
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 239, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/lib_config.py", line 353, in __getattr__
self._update_configs()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/lib_config.py", line 289, in _update_configs
self._registry.initialize()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/lib_config.py", line 164, in initialize
import_func(self._modname)
File "/base/data/home/apps/s~blue-myapp/app:master.375531077560785947/appengine_config.py", line 17, in
settings._target = None
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/utils/functional.py", line 227, in __setattr__
self._setup()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/conf/__init__.py", line 48, in _setup
self._wrapped = Settings(settings_module)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5/django/conf/__init__.py", line 134, in __init__
raise ImportError("Could not import settings '%s' (Is it on sys.path?): %s" % (self.SETTINGS_MODULE, e))
ImportError: Could not import settings 'settings' (Is it on sys.path?): No module named myfolder.settings
This is our current appengine_config.py:
import sys
import logging
logging.debug(",\n".join(sys.path))
# Since Google App Engine's webapp framework uses Django templates, Django will half-initialize when webapp is loaded.
# This causes the initialization of the rest of Django's setting to be skipped. If you are getting this error, you need
# to explicitly force Django to reload your settings:
from django.conf import settings
settings._target = None
Logging sys.path from appengine_config.py does not change between a successful
instance start and a failed instance start (apart from the XXXXXXXXXXX bit of
course):
/base/data/home/apps/s~blue-persomi/app:master.3759720XXXXXXXXXXX,
/base/data/home/runtimes/python27/python27_dist/lib/python27.zip,
/base/data/home/runtimes/python27/python27_dist/lib/python2.7,
/base/data/home/runtimes/python27/python27_dist/lib/python2.7/plat-linux2,
/base/data/home/runtimes/python27/python27_dist/lib/python2.7/lib-tk,
/base/data/home/runtimes/python27/python27_dist/lib/python2.7/lib-old,
/base/data/home/runtimes/python27/python27_dist/lib/python2.7/lib-dynload,
/base/data/home/runtimes/python27/python27_dist/lib/python2.7/site-packages,
/base/data/home/runtimes/python27/python27_lib/versions/1,
/base/data/home/runtimes/python27/python27_lib/versions/third_party/MySQLdb-1.2.4b4,
/base/data/home/runtimes/python27/python27_lib/versions/third_party/django-1.5,
/base/data/home/runtimes/python27/python27_lib/versions/third_party/protorpc-1.0,
/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.5.2,
/base/data/home/runtimes/python27/python27_lib/versions/third_party/webob-1.1.1,
/base/data/home/runtimes/python27/python27_lib/versions/third_party/yaml-3.10
Answer: It seems to be a path related issue, as people have mentioned in your
question's comments.
Possible short-sighted solution: add everything to your path manually - look at
the top answer here: [How to import modules in Google App
Engine?](http://stackoverflow.com/questions/2710861/how-to-import-modules-in-
google-app-engine). At the very least, this will help narrow the problem down to
being path related.
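A minimal sketch of that approach, assuming it goes at the very top of your `appengine_config.py` (the variable name is mine):

    import os
    import sys

    # make sure the application root (the directory containing app.yaml) is on
    # sys.path, so 'myfolder.middleware' and 'settings' always resolve
    APP_ROOT = os.path.dirname(os.path.abspath(__file__))
    if APP_ROOT not in sys.path:
        sys.path.insert(0, APP_ROOT)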
what the docs say: <https://developers.google.com/appengine/docs/python/>
> The Python module include path includes your application's root directory
> (the directory containing the app.yaml file). Modules you create in your
> application's root directory are available using a path from the root. Don't
> forget to create __init__.py files in sub-directories, so Python will
> recognize the sub-directories as packages.
So from what I can tell, because everything is at or below the app.yaml file in
your question, the path should already be correct.
1. double check all your `__init__.py` files are in place and spelled correctly.
2. try deleting all of your `*.pyc` files and letting them be regenerated.
3. try importing from the container's folder name `FOLDER_CONTAINING_YAML.myfolder.middleware`
|
Processing Large Files in Python [ 1000 GB or More]
Question: Lets say i have a text file of 1000 GB. I need to find how many times a phrase
occurs in the text.
Is there any faster way to do this than the one I am using below? How long
would it take to complete the task?
phrase = "how fast it is"
count = 0
with open('bigfile.txt') as f:
for line in f:
count += line.count(phrase)
If I am right, and the file is not already in memory, I would need to wait for
the PC to load the file each time I do the search, which should take at least
4000 sec for a 250 MB/sec hard drive and a file of 1000 GB.
Answer: I used `file.read()` to read the data in chunks, in current examples the
chunks were of size 100 MB, 500MB, 1GB and 2GB respectively. The size of my
text file is 2.1 GB.
**Code:**
from functools import partial
def read_in_chunks(size_in_bytes):
s = 'Lets say i have a text file of 1000 GB'
with open('data.txt', 'r+b') as f:
prev = ''
count = 0
f_read = partial(f.read, size_in_bytes)
for text in iter(f_read, ''):
if not text.endswith('\n'):
# if file contains a partial line at the end, then don't
# use it when counting the substring count.
text, rest = text.rsplit('\n', 1)
# pre-pend the previous partial line if any.
text = prev + text
prev = rest
else:
# if the text ends with a '\n' then simple pre-pend the
# previous partial line.
text = prev + text
prev = ''
count += text.count(s)
count += prev.count(s)
print count
**Timings:**
read_in_chunks(104857600)
$ time python so.py
10000000
real 0m1.649s
user 0m0.977s
sys 0m0.669s
read_in_chunks(524288000)
$ time python so.py
10000000
real 0m1.558s
user 0m0.893s
sys 0m0.646s
read_in_chunks(1073741824)
$ time python so.py
10000000
real 0m1.242s
user 0m0.689s
sys 0m0.549s
read_in_chunks(2147483648)
$ time python so.py
10000000
real 0m0.844s
user 0m0.415s
sys 0m0.408s
On the other hand the simple loop version takes around 6 seconds on my system:
def simple_loop():
s = 'Lets say i have a text file of 1000 GB'
with open('data.txt') as f:
print sum(line.count(s) for line in f)
$ time python so.py
10000000
real 0m5.993s
user 0m5.679s
sys 0m0.313s
* * *
Results of @SlaterTyranus's [`grep`
version](http://stackoverflow.com/a/23765585/846892) on my file:
$ time grep -o 'Lets say i have a text file of 1000 GB' data.txt|wc -l
10000000
real 0m11.975s
user 0m11.779s
sys 0m0.568s
* * *
Results of @woot's [solution](http://stackoverflow.com/a/23838307/846892):
$ time cat data.txt | parallel --block 10M --pipe grep -o 'Lets\ say\ i\ have\ a\ text\ file\ of\ 1000\ GB' | wc -l
10000000
real 0m5.955s
user 0m14.825s
sys 0m5.766s
Got best timing when I used 100 MB as block size:
$ time cat data.txt | parallel --block 100M --pipe grep -o 'Lets\ say\ i\ have\ a\ text\ file\ of\ 1000\ GB' | wc -l
10000000
real 0m4.632s
user 0m13.466s
sys 0m3.290s
* * *
Results of woot's [second
solution](http://stackoverflow.com/a/23839675/846892):
$ time python woot_thread.py # CHUNK_SIZE = 1073741824
10000000
real 0m1.006s
user 0m0.509s
sys 0m2.171s
$ time python woot_thread.py #CHUNK_SIZE = 2147483648
10000000
real 0m1.009s
user 0m0.495s
sys 0m2.144s
**System Specs** : Core i5-4670, 7200 RPM HDD
|
Appending to HDFStore fails with "cannot match existing table structure"
Question: The final solution was to use the "converters" parameter of read_csv and check
every value before adding it to the DataFrame. In the end there were only 2
broken values in over 80GB of raw data.
The parameter looks like this:
converters={'XXXXX': self.parse_xxxxx}
And the small static helper method looks like this:
@staticmethod
def parse_xxxxx(input):
if not isinstance(input, float):
try:
return float(input)
except ValueError:
print "Broken Value: ", input
return float(0.0)
else:
return input
* * *
While trying to read ca. 40GB+ of csv data into a HDF file I ran into a
confusing problem. After reading about 1GB the entire process fails with the
following error
File "/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 658, in append
self._write_to_group(key, value, table=True, append=True, **kwargs)
File "/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 923, in write_to_group
s.write(obj = value, append=append, complib=complib, **kwargs)
File "/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 2985, in write **kwargs)
File "/usr/lib/python2.7/dist-packages/pandas/io/pytables.py", line 2675, in create_axes
raise ValueError("cannot match existing table structure for [%s] on appending data" % items)
ValueError: cannot match existing table structure for [Date] on appending data
The read_csv call I use is as follows:
pd.io.parsers.read_csv(filename, sep=";|\t", compression='bz2', index_col=False, header=None, names=['XX', 'XXXX', 'Date', 'XXXXX'], parse_dates=[2], date_parser=self.parse_date, low_memory=False, iterator=True, chunksize=self.input_chunksize, dtype={'Date': np.int64})
Why would the 'Date' column of the new chunk not fit the existing column when
I explicitly set the dtype to int64?
Thx for your help!
Here is the function for parsing the date:
@staticmethod
def parse_date(input_date):
import datetime as dt
import re
if not re.match('\d{12}', input_date):
input_date = '200101010101'
timestamp = dt.datetime.strptime(input_date, '%Y%m%d%H%M')
return timestamp
After following some of Jeff's tips I can provide further details on my
problem. Here is the entire code I use to load a bz2 encoded file:
iterator_data = pd.io.parsers.read_csv(filename, sep=";|\t", compression='bz2', index_col=False, header=None,
names=['XX', 'XXXX', 'Date', 'XXXXX'], parse_dates=[2],
date_parser=self.parse_date, iterator=True,
chunksize=self.input_chunksize, dtype={'Date': np.int64})
for chunk in iterator_data:
self.data_store.append('huge', chunk, data_columns=True)
self.data_store.flush()
The csv file follows the following pattern: {STRING};{STRING};{STRING}\t{INT}
The output of _ptdump -av_ called for the output file is the following:
ptdump -av datastore.h5
/ (RootGroup) ''
/._v_attrs (AttributeSet), 4 attributes:
[CLASS := 'GROUP',
PYTABLES_FORMAT_VERSION := '2.0',
TITLE := '',
VERSION := '1.0']
/huge (Group) ''
/huge._v_attrs (AttributeSet), 14 attributes:
[CLASS := 'GROUP',
TITLE := '',
VERSION := '1.0',
data_columns := ['XX', 'XXXX', 'Date', 'XXXXX'],
encoding := None,
index_cols := [(0, 'index')],
info := {'index': {}},
levels := 1,
nan_rep := 'nan',
non_index_axes := [(1, ['XX', 'XXXX', 'Date', 'XXXXX'])],
pandas_type := 'frame_table',
pandas_version := '0.10.1',
table_type := 'appendable_frame',
values_cols := ['XX', 'XXXX', 'Date', 'XXXXX']]
/huge/table (Table(167135401,), shuffle, blosc(9)) ''
description := {
"index": Int64Col(shape=(), dflt=0, pos=0),
"XX": StringCol(itemsize=16, shape=(), dflt='', pos=1),
"XXXX": StringCol(itemsize=16, shape=(), dflt='', pos=2),
"Date": Int64Col(shape=(), dflt=0, pos=3),
"XXXXX": Int64Col(shape=(), dflt=0, pos=4)}
byteorder := 'little'
chunkshape := (2340,)
autoIndex := True
colindexes := {
"Date": Index(6, medium, shuffle, zlib(1)).is_CSI=False,
"index": Index(6, medium, shuffle, zlib(1)).is_CSI=False,
"XXXX": Index(6, medium, shuffle, zlib(1)).is_CSI=False,
"XXXXX": Index(6, medium, shuffle, zlib(1)).is_CSI=False,
"XX": Index(6, medium, shuffle, zlib(1)).is_CSI=False}
/huge/table._v_attrs (AttributeSet), 23 attributes:
[XXXXX_dtype := 'int64',
XXXXX_kind := ['XXXXX'],
XX_dtype := 'string128',
XX_kind := ['XX'],
CLASS := 'TABLE',
Date_dtype := 'datetime64',
Date_kind := ['Date'],
FIELD_0_FILL := 0,
FIELD_0_NAME := 'index',
FIELD_1_FILL := '',
FIELD_1_NAME := 'XX',
FIELD_2_FILL := '',
FIELD_2_NAME := 'XXXX',
FIELD_3_FILL := 0,
FIELD_3_NAME := 'Date',
FIELD_4_FILL := 0,
FIELD_4_NAME := 'XXXXX',
NROWS := 167135401,
TITLE := '',
XXXX_dtype := 'string128',
XXXX_kind := ['XXXX'],
VERSION := '2.6',
index_kind := 'integer']
After a lot of additional debugging I got to the following error:
ValueError: invalid combinate of [values_axes] on appending data [name->XXXX,cname->XXXX,dtype->int64,shape->(1, 10)] vs current table [name->XXXX,cname->XXXX,dtype->string128,shape->None]
I then tried to fix this by modifying the _read_csv_ call to force
the proper type for the XXXX column, but just received the same error:
dtype={'XXXX': 's64', 'Date': dt.datetime})
Is _read_csv_ ignoring the dtype settings or what am I missing here?
When reading the data with a chunksize of 10 the last 2 _chunk.info()_ calls
give the following output:
Int64Index: 10 entries, 0 to 9
Data columns (total 4 columns):
XX 10 non-null values
XXXX 10 non-null values
Date 10 non-null values
XXXXX 10 non-null values
dtypes: datetime64[ns](1), int64(1), object(2)<class 'pandas.core.frame.DataFrame'>
Int64Index: 10 entries, 0 to 9
Data columns (total 4 columns):
XX 10 non-null values
XXXX 10 non-null values
Date 10 non-null values
XXXXX 10 non-null values
dtypes: datetime64[ns](1), int64(2), object(1)
I'm using pandas version _0.12.0_.
Answer: ok you have a couple of issues:
* when specifying dtypes to pass to `read_csv`, they must be numpy dtypes; string dtypes are converted to `object` dtype (so the `s64` doesn't do anything), and neither does the `datetime` - that's what `parse_dates` is for.
* your dtypes in different chunks are DIFFERENT, that is, in the first one you have 2 `int64` columns and 1 `object`, while the 2nd has 1 `int64` and 2 `object` columns. This is your problem. (I think the error message might be slightly confusing, which IIRC is fixed in later versions of pandas).
So, you need to conform your dtypes in EVERY chunk to be the same. You might
have mixed data in that particular column. One way to do this is to specify
`dtype = { column_that_is_bad : 'object' }`. Another is to use
`convert_objects(convert_numeric=True)` ON THAT column to coerce all non-
numeric values to `nan` (this will also change the dtype of the column to
`float64`).
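A toy sketch of the coercion fix (pandas 0.12-era API; `XXXX` is the question's placeholder column name) - the same call would go inside your chunk loop before `store.append()`:

    import pandas as pd

    # a chunk whose column contains one non-numeric value
    chunk = pd.DataFrame({'XXXX': ['1', '2', 'not-a-number', '4']})

    # coerce everything to numbers; the bad value becomes NaN and the column
    # dtype becomes float64, so every chunk reaches HDFStore with the same dtype
    chunk['XXXX'] = chunk['XXXX'].convert_objects(convert_numeric=True)
    print chunk['XXXX'].dtype

The first fix is just an extra `dtype={'XXXX': object}` argument to the `read_csv` call from the question.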
|
Using datetime to get CSV dates in Python
Question: I am trying to get dates from a csv file to chart a graph, and am having
difficulty getting a method with which to compare the data. The dates are in
the format MM/DD/YYYY HH:MM:SS . I have struggled with finding a method to
perform this task for several days, but my instructor insists using online
resources instead of giving direct assistance for our final project.
I think that I will need to use a for loop to create the 'now' variable, but I
don't know how I could then compare all of the dates under the 'now' variable.
I also don't know how else to insert arguments into the 'then' variable,
because now[n] gives the nth entry for date in the csv file.
Thank you ahead of time for your help, and I will try to organize my post as
much as needed to clarify!
"""
Demo of scatter plot with varying marker colors and sizes.
"""
import numpy as np
import matplotlib.pyplot as plt
import os.path
import random
import datetime
from datetime import timedelta
import time
# Load a numpy record array from yahoo csv data with fields date,
# open, close, volume, adj_close from the mpl-data/example directory.
# The record array stores python datetime.date as an object array in
# the date column
directory = os.path.dirname(os.path.abspath(__file__))
datafile = os.path.join(directory, 'v2vcomm.csv')
data = np.recfromcsv(datafile, delimiter=',', filling_values=np.nan, case_sensitive=True, deletechars='', replace_space=' ')
#start = datetime.datetime(np.data(data.date)
#datetime.datetime.time.strftime("%y-%m-%d-%H-%M")
time = data.date
now = time.replace('/', ' ').replace(':', ' ').split() #having problems turning this into a useable list. It outputs a list containing each individual date as a element, so it isn't helpful to me
then = datetime.datetime(int(now[2]), int(now[0]), int(now[1]), int(now[3]), int(now[4]), int(now[5])) #trying to input int values for each item in one date, but the formatting is tricky to me
length = len(np.diff(data.msg_id))
#these random values are from an earlier assignment, going to eliminate them when I get datetime working
x= []
z= []
for v in range(1,length+1):
x.append(random.randint(0,300))
z.append(random.randint(1,100))
y = np.diff(data.msg_id)
count = 1
fig, ax = plt.subplots()
ax.scatter(x, y, c=z, s=30, alpha=0.5)
ax.set_xlabel(r'Total Time Elasped since - Seconds (300=5 min)', fontsize=12)
ax.set_ylabel(r'Impediments', fontsize=12)
ax.set_title('Traffic Flow Impediment Frequency')
ax.grid(True)
fig.tight_layout()
plt.show()
Answer: You **can** compare Python's `date` and `datetime` object(s).
**Example:**
>>> from datetime import date
>>> d1 = date(2014, 05, 21)
>>> d2 = date(2014, 05, 20)
>>> d1 < d2
False
>>> d1
datetime.date(2014, 5, 21)
>>> d1 > d2
True
See: [datetime](https://docs.python.org/2/library/datetime.html)
Documentation.
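For your specific format, `datetime.strptime` will turn each MM/DD/YYYY HH:MM:SS string into a comparable `datetime` object directly, without the manual `replace`/`split` work; a small sketch with made-up values:

    from datetime import datetime

    raw = ['05/21/2014 13:45:30', '05/21/2014 13:47:02']
    dates = [datetime.strptime(s, '%m/%d/%Y %H:%M:%S') for s in raw]
    print dates[0] < dates[1]   # True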
Regarding your 2nd problem of insertion into list(s):
>>> xs = [0, 1, 2, 4]
>>> xs.insert(3, 3)
>>> xs
[0, 1, 2, 3, 4]
See: `help(list)` or `pydoc list` (_on the command line_).
|
virtualenvwrapper - IOError: [Errno 13] Permission denied
Question: I'm trying to install `virtualenvwrapper` on a fresh Ubuntu 14.04
installation. I followed the steps
[here](http://virtualenvwrapper.readthedocs.org/en/latest/install.html) and
added these lines to my .bashrc:
export WORKON_HOME=$HOME/.virtualenvs
export PROJECT_HOME=$HOME/Devel
source /usr/local/bin/virtualenvwrapper.sh
I get the following error message when I try and `source ~/.bashrc`:
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/local/lib/python2.7/dist-packages/virtualenvwrapper/hook_loader.py", line 217, in <module>
main()
File "/usr/local/lib/python2.7/dist-packages/virtualenvwrapper/hook_loader.py", line 131, in main
run_hooks(hook, options, args)
File "/usr/local/lib/python2.7/dist-packages/virtualenvwrapper/hook_loader.py", line 157, in run_hooks
hook_mgr = ExtensionManager(namespace)
File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 92, in __init__
verify_requirements)
File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 155, in _load_plugins
for ep in self._find_entry_points(self.namespace):
File "/usr/local/lib/python2.7/dist-packages/stevedore/extension.py", line 148, in _find_entry_points
eps = list(pkg_resources.iter_entry_points(namespace))
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 515, in iter_entry_points
entries = dist.get_entry_map(group)
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2371, in get_entry_map
self._get_metadata('entry_points.txt'), self
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2155, in parse_map
for group, lines in data:
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2715, in split_sections
for line in yield_lines(s):
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1989, in yield_lines
for ss in strs:
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2305, in _get_metadata
for line in self.get_metadata_lines(name):
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1369, in get_metadata_lines
return yield_lines(self.get_metadata(name))
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1361, in get_metadata
return self._get(self._fn(self.egg_info,name))
File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 1470, in _get
stream = open(path, 'rb')
IOError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/numpydoc-0.4-py2.7.egg/EGG-INFO/entry_points.txt'
virtualenvwrapper.sh: There was a problem running the initialization hooks.
If Python could not import the module virtualenvwrapper.hook_loader,
check that virtualenv has been installed for
VIRTUALENVWRAPPER_PYTHON=/usr/bin/python and that PATH is
set properly.
This happens when `source /usr/local/bin/virtualenvwrapper.sh` is executed.
Any ideas? Thanks.
**EDIT:** Although I am getting this error message, it seems my virtualenv is
kind of working. I am able to create new envs and even `workon` them. But every
command I type, I get the IOError above.
Answer: Fixed. I just uninstalled the offending numpy doc package:
sudo pip uninstall numpydoc
|
How to apply relative directory in python when imported from different module?
Question: Here's the problem:
In package `main.A`, there's a module `AM` and a `config.ini` file. In `AM`,
I'm using **./config.ini** to refer to this file. This just works fine.
Whereas in package `main.B`, there's another module named `BM`, which imports
the `main.A.AM` module. This time, it throws an error that **config.ini cannot be found
in /main/B/**.
Could anyone tell me how to refer to this relatively located file in `main.A`?
Thanks a lot!
Answer: The `__file__` magic variable stores the path to the file it appears in.
If you put the following line in `main.A`, it will always point to a file
residing in the same directory as `main.A`, regardless of the location
`main.A` is imported from:
import os.path as osp
osp.join(osp.dirname(__file__), 'config.ini')
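A slightly fuller sketch of how `main/A/AM.py` could load its own `config.ini` no matter where it is imported from (the ConfigParser part is just an illustration):

    import os.path as osp
    from ConfigParser import ConfigParser   # configparser on Python 3

    CONFIG_PATH = osp.join(osp.dirname(osp.abspath(__file__)), 'config.ini')

    config = ConfigParser()
    config.read(CONFIG_PATH)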
|
How does the python interpreter know when to compile and update a .pyc file?
Question: I knew that a `.pyc` file is generated by the python interpreter and contains
the byte code as this
[question](http://stackoverflow.com/questions/2998215/if-python-is-
interpreted-what-are-pyc-files) said.
I thought the python interpreter used the time stamp to detect whether a
`.pyc` is newer than a `.py`, and if it is, skips compiling it again when
executing. (The way makefiles do.)
So, I did a test, but it seemed I was wrong.
1. I wrote `t.py` contains `print '123'` and `t1.py` contains `import t`. Running command `python t1.py` gave the output `123` and generated `t.pyc`, all as expected.
2. Then I edited `t.py` as `print '1234'` and updated the time stamp of `t.pyc` by using `touch t.pyc`.
  3. Run `python t1.py` again. I thought I would get `123`, but got `1234` instead. So it seemed the python interpreter still knew that `t.py` was updated.
Then I wondered whether the python interpreter will compile and regenerate `t.pyc`
every time `python t1.py` runs. But when I ran `python t1.py` several
times, I found that `t.pyc` is not updated when `t.py` is not
updated.
So, my question is: how does the python interpreter know when to compile and update a
`.pyc` file?
**Updated**
Since the python interpreter uses the timestamp stored in the `.pyc` file, I
think it is a record of when the `.pyc` was last updated, and when importing it
compares it with the timestamp of the `.py` file.
So I tried to hack it in this way: change the OS time to an older one, and
edit the `.py` file. I thought that when imported again, the `.py` would seem older than the
`.pyc`, and the python interpreter would not update the `.pyc`. **But I was wrong
again.**
**So, does the python interpreter compare these two timestamps not in an older-
or-newer way but for exact equality?**
By exactly equal, I mean the timestamp in the `.pyc` records when the
`.py` was last modified. When importing, it compares that timestamp with the
current timestamp of the `.py`; if it's not the same, it recompiles and updates the `.pyc`.
Answer: It looks like the timestamp is stored directly in the `*.pyc` file. The python
interpreter doesn't rely on the last modification attribute of the file, maybe
to avoid incompatible bytecode issues when copying source trees.
Looking at [the python implementation of the `import`
statement](https://github.com/python/cpython/blob/master/Lib/importlib/_bootstrap.py),
you can find the stale check in
[`_validate_bytecode_header()`](https://github.com/python/cpython/blob/master/Lib/importlib/_bootstrap.py#L600).
By the looks of it, it extracts bytes 4 to 7 (inclusive) and compares them against
the timestamp of the source file. If those don't match, the bytecode is
considered stale and thus recompiled.
In the process, it also checks the length of the source file against the
length of the source used to generate a given bytecode (stored in bytes 8 to
11).
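You can peek at both fields yourself; a minimal sketch assuming a Python 3.3-style `.pyc` header (4-byte magic, 4-byte source mtime, 4-byte source size):

    import struct
    import time

    def read_pyc_header(path):
        with open(path, 'rb') as f:
            magic = f.read(4)
            mtime, = struct.unpack('<I', f.read(4))   # bytes 4-7: source mtime
            size, = struct.unpack('<I', f.read(4))    # bytes 8-11: source size
        return magic, time.ctime(mtime), size

    print(read_pyc_header('t.pyc'))   # point it at whichever .pyc you want to inspect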
In the python implementation, if one of those checks fails, the bytecode
loader raises an `ImportError` caught by
[`SourceLoader.get_code()`](https://github.com/python/cpython/blob/master/Lib/importlib/_bootstrap.py#L1535)
that triggers a recompilation of the bytecode.
_**Note:** That's how it's done in the python version of `importlib`. I guess
there's no functional difference in the native version, but my C is a bit too
rusty to dig into compiler code_
|
How to generate signature using SHA-1 HMAC for google map in ruby
Question: I am trying to generate a signature using SHA-1 HMAC in Ruby for Google Maps
calls. I have got some Python code from the internet which I am trying to port
to Ruby. Following is the Python code:
import urllib.parse
import base64
import hashlib
import hmac
GOOGLEAPIS_URL = 'http://maps.googleapis.com'
STREETVIEW_ENDPOINT = '/maps/api/streetview'
encodedParams = urllib.parse.urlencode({'size':'50x50','location':'42.35878993272469,-71.05793081223965','sensor':'false','client':'gme-securealert'});
privateKey = 'Encoded_Key' # e.g XEL-B9Zs3lRLajIXkD-bqTYix20=
decodedKey = base64.urlsafe_b64decode(privateKey)
urlToSign = STREETVIEW_ENDPOINT + '?' + encodedParams
print(urlToSign)
signature = hmac.new(decodedKey, urlToSign.encode('utf-8') , hashlib.sha1)
encodedSignature = base64.urlsafe_b64encode(signature.digest())
print(encodedSignature
)
## that generates **OI2DXDLq7Qd790Lokaxgqtis_pE=** signature.
Following is the Ruby code with which I am trying to achieve the same signature:
GOOGLE_APIS_URL= "http://maps.googleapis.com"
key = Encoded_Key # e.g XEL-B9Zs3lRLajIXkD-bqTYix20=
data ='/maps/api/streetview?size=50x50&sensor=false&client=gme-securealert&location=42.35878993272469,-71.05793081223965'
data_array = data.split("?")
STREET_VIEW_ENDPOINT = data_array[0]
query_string = data_array[1]
encoded_query_string = URI.escape(query_string) # to encode parameters only
decoded_key = Base64.decode64(key) # to decode the key
data = STREET_VIEW_ENDPOINT << '?' << encoded_query_string
#p "DATA #{data}"
#data = Base64.decode64(data)
#puts "data #{data}"
digest = OpenSSL::Digest.new('sha1')
p OpenSSL::HMAC.digest(digest, decoded_key, data)
hmac = Base64.encode64(OpenSSL::HMAC.digest(digest, decoded_key, data))
p hmac
But this does not seem to be working for me. Please guide.
Answer: You can create the hash for request parameters and use URI.encode_www_form for
encoding parameters. Use Base64.urlsafe_decode64 and Base64.urlsafe_encode64
instead of Base64.decode64 and Base64.encode64. In my case, I have used
reverse geocoding api. You will need to amend the parameters and
REVERSE_GEOCODING_ENDPOINT_JSON. Hope this will help you. Please let me know
if you have any queries.
GOOGLE_MAPS_API_CLIENT_ID = 'gme-xyz'
GOOGLE_MAPS_API_CRYPTO_KEY = 'private_key'
GOOGLEAPIS_HTTP_URL = 'http://maps.googleapis.com'
REVERSE_GEOCODING_ENDPOINT_JSON = '/maps/api/geocode/json'
str_latlng = lat.to_s + ',' + lon.to_s
encoded_params = URI.encode_www_form({'latlng' => str_latlng,
'client' => GOOGLE_MAPS_API_CLIENT_ID})
decoded_key = Base64.urlsafe_decode64(GOOGLE_MAPS_API_CRYPTO_KEY)
url_to_sign = REVERSE_GEOCODING_ENDPOINT_JSON + '?' + encoded_params
digest = OpenSSL::Digest.new('sha1')
signature = OpenSSL::HMAC.digest(digest, decoded_key, url_to_sign)
encoded_signature = Base64.urlsafe_encode64(signature)
signed_url = GOOGLEAPIS_HTTP_URL + REVERSE_GEOCODING_ENDPOINT_JSON + '?' + encoded_params + '&signature='+encoded_signature
|
Python fnmatch negative match does not behave as expected
Question: Trying to match strings that do not contain "foo" or "bar".
After much research I came up with something that works in the Linux tool `kiki`
(which is ironically written in python) but does not work when I use it in
python:
Python 3.3.2+ (default, Feb 28 2014, 00:52:16)
[GCC 4.8.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import fnmatch, re
>>> regex = fnmatch.translate("\Abla.bla.bla-((?!foo|bar).)*\Z")
>>> reobj = re.compile(regex)
>>> reobj.match('bla.bla.bla-9.9.9-123.tar.gz')
>>> reobj.match('bla.bla.bla-9.9.9-123.foo.tar.gz')
>>> reobj.match('bla.bla.bla-9.9.9-123.bar.tar.gz')
I would have expected the first call to reobj.match to return a positive
match.
Please help me find where I messed up.
Thanks
Answer: The pattern you specify here:
`regex = fnmatch.translate("\Abla.bla.bla-((?!foo|bar).)*\Z")`
is a _regular expression_ pattern, not a shell-style
[fnmatch](https://docs.python.org/2/library/fnmatch.html) pattern. fnmatch, of
course, only understands the latter.
Shell-style matching is rather limited. To do what you want, I'd perhaps just
use a regular expression, or strip the `"bla.bla.bla-"` prefix off of the
filename and then check that `"foo" not in stripped_filename` and `"bar" not
in stripped_filename`.
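Since your pattern already is a regular expression, the simplest fix is to drop `fnmatch.translate()` and compile it directly; a quick sketch (dots escaped so they match literal dots):

    import re

    reobj = re.compile(r"\Abla\.bla\.bla-((?!foo|bar).)*\Z")

    print(bool(reobj.match('bla.bla.bla-9.9.9-123.tar.gz')))      # True
    print(bool(reobj.match('bla.bla.bla-9.9.9-123.foo.tar.gz')))  # False
    print(bool(reobj.match('bla.bla.bla-9.9.9-123.bar.tar.gz')))  # False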
|