Attribute error in python won't go away
Question: Why do I keep getting
AttributeError: 'dict_keys' object has no attribute 'sort'
for my code? How can I fix this?
import string
infile = open('alice_in_wonderland.txt', 'r')
text = infile.readlines()
counts = {}
for line in text:
    for word in line:
        counts[word] = counts.get(word, 0) + 1
        '''
        if word != " ":
            if word != ".":
        '''
word_keys = counts.keys()
word_keys.sort()
infile.close()
outfile = open("alice_words.txt", 'w')
outfile.write("Word \t \t Count \n")
outfile.write("======================= \n")
for word in word_keys:
    outfile.write("%-12s%d\n" % (word.lower(), counts[word]))
outfile.close()
I'm not sure what else to do.
Answer: To produce a sorted list of keys, use:
word_keys = sorted(counts)
instead. This works in both Python 2 and 3.
In Python 3 `dict.keys()` does not return a list object, but a [dictionary
view object](https://docs.python.org/3/library/stdtypes.html#dictionary-view-
objects). You could call `list()` on that object, but `sorted()` is much more
direct and saves you two additional calls.
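For example, with a tiny hypothetical dictionary:

counts = {'b': 2, 'a': 1}
word_keys = sorted(counts)        # ['a', 'b'] -- a new list of the keys, already sorted
# list(counts.keys()).sort() would sort a temporary list and return None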
I see that you appear to count words in a file; if so, you are instead
counting _characters_, not words; `for word in line:` iterates over a string,
so `word` is assigned individual characters from the line.
You should be using
[`collections.Counter()`](https://docs.python.org/2/library/collections.html#collections.Counter)
instead:
from collections import Counter

counts = Counter()
with open('alice_in_wonderland.txt') as infile:
    for line in infile:
        # assumption: words are whitespace separated
        counts.update(w for w in line.split())

with open("alice_words.txt", 'w') as outfile:
    outfile.write("Word \t \t Count \n")
    outfile.write("======================= \n")
    for word, count in counts.most_common():
        outfile.write("%-12s%d\n" % (word.lower(), count))
This code uses the file objects as context managers (with the `with`
statement) to have them closed automatically. The `Counter.most_common()`
method takes care of the sorting for us, not by key but by word count.
|
Python create save button that saves an edited version to the same file(not save as)
Question: This is a simple notepad program that I am currently writing. Most things are
working, but I cannot get the save to work. Where I have defined Save, I don't
know how to create the save function (not save as).
NOT THE FULL CODE
from tkinter import *
from tkinter.messagebox import *
from tkinter.filedialog import *
from tkinter.font import *
import sys, time, sched, math

class Format:
    def __init__(self, notepad):
        print("Font")

class ZPad:
    def __init__(self):
        self.root = Tk()
        self.root.title("ZPad")
        self.root.wm_iconbitmap('Notepad.ico')
        self.scrollbar = Scrollbar(self.root)
        self.scrollbar.pack(side=RIGHT, fill=Y)
        self.textbox = Text(self.root, yscrollcommand=self.scrollbar.set, undo=TRUE)
        self.textbox.pack(side=LEFT, fill=BOTH, expand=YES)

        #Menu Bar
        self.menubar = Menu(self.root)
        self.filemenu = Menu(self.menubar, tearoff=0)
        self.filemenu.add_command(label="New", command=self.New, accelerator="Ctrl+N")
        self.filemenu.add_command(label="Open...", command=self.open, accelerator="Ctrl+O")
        self.filemenu.add_command(label="Save", command=self.Save, accelerator="Ctrl+S")
        self.filemenu.add_command(label="Save as...", command=self.Save_as, accelerator="Ctrl+Shift+S")
        self.filemenu.add_separator()
        self.filemenu.add_command(label="Exit", command=self.quit, accelerator="Ctrl+Q")
        self.menubar.add_cascade(label="File", menu=self.filemenu)
        self.editmenu = Menu(self.menubar, tearoff=0)
        self.editmenu.add_command(label="Undo", command=self.Undo, accelerator="Ctrl+Z")
        self.editmenu.add_command(label="Redo", command=self.Redo, accelerator="Ctrl+Y")
        self.editmenu.add_separator()
        self.editmenu.add_command(label="Cut", command=self.Cut, accelerator="Ctrl+X")
        self.editmenu.add_command(label="Copy", command=self.Copy, accelerator="Ctrl+C")
        self.editmenu.add_command(label="Paste", command=self.Paste, accelerator="Ctrl+P")
        self.editmenu.add_command(label="Clear All", command=self.Clear_All, accelerator="Ctrl+Shift+A")
        self.editmenu.add_separator()
        self.editmenu.add_command(label="Format", command=self.options, accelerator="Ctrl+T")
        self.menubar.add_cascade(label="Edit", menu=self.editmenu)
        self.helpmenu = Menu(self.menubar, tearoff=0)
        self.helpmenu.add_command(label="About...", command=self.About)
        self.menubar.add_cascade(label="Help", menu=self.helpmenu)
        self.root.config(menu=self.menubar)
        self.root.mainloop()

    def Save(self):
        print("Save")

    def Save_as(self):
        global file
        file = tkinter.filedialog.asksaveasfile(mode='w', defaultextension=".z", filetypes = ( ("ztext file", "*.z"),("zytext", "*.zy") ) )
        if file is None:
            return
        else:
            print(file)
        textoutput = self.textbox.get(0.0, END)
        file.write(textoutput.rstrip())
        file.write("\n")

notepad = ZPad()
I am using python 3.4 in windows 8.1. Thanks.
Answer: Here's a quick example of a save and save as function, with comments:
def save(self):
    contents = self.textbox.get(1.0, "end-1c")   # store the contents of the text widget in a str
    try:                                         # this try/except block checks to
        with open(self.f, 'w') as outputFile:    # see if the str containing the output
            outputFile.write(contents)           # file (self.f) exists, and we can write to it,
    except AttributeError:                       # and if it doesn't,
        self.save_as()                           # call save_as

def save_as(self):
    contents = self.textbox.get(1.0, "end-1c")
    # asksaveasfilename comes from tkinter.filedialog (already star-imported above)
    self.f = asksaveasfilename(                  # this will make the file path a string
        defaultextension=".z",                   # so it's easier to check if it exists
        filetypes=(("ztext file", "*.z"),        # in the save function
                   ("zytext", "*.zy")))
    with open(self.f, 'w') as outputFile:
        outputFile.write(contents)
There are other ways to do this, but this will work in a pinch. Just think
about what the `save` function needs: text to save and a file to save in, and
go from there.
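If you also want the Ctrl+S accelerator shown in the menu to actually trigger saving, note that the accelerator text is only a label; a minimal sketch of the wiring, assuming the lowercase `save` method above has been added to your `ZPad` class:

# hypothetical lines inside ZPad.__init__
self.filemenu.add_command(label="Save", command=self.save, accelerator="Ctrl+S")
self.root.bind('<Control-s>', lambda event: self.save())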
|
Bigquery streaming API timeout error
Question: We're using the BigQuery streaming API. All went well until recently (no code
change) - in the last few hours we get many errors like:
> "The API call urlfetch.Fetch() took too long to respond and was cancelled.
> Traceback (most recent call last): File "/base/data/home/runtimes/python27"
or
> "Deadline exceeded while waiting for HTTP response from URL"
The insert call is done on a python deferred process and is retried again
after a wait.
Questions:
* How can we check if it's our internal issue or a general problem with big query?
* Can we increase the 5000 timeout?
Answer: Are you running in appengine? If so, you can do this:
from google.appengine.api import urlfetch
urlfetch.set_default_fetch_deadline(60)
That said, streaming ingestion shouldn't be anywhere close to the default 5
second deadline. There was a networking configuration issue with streaming
ingestion; it should be resolved now.
Are you still seeing the issues?
|
python requests read from file status_code wrong
Question: Trying to read a file with URLs and extract the status_code with the Python
requests module.
This does not work, or gives me a wrong status code:
import requests
f = open('urls.txt','r')
for line in f:
    r = requests.head(line)
    print r.status_code
    if r.status_code == requests.codes.ok:
        print "ok";
but if I do it manually (without file read) it works.
import requests
r = requests.head('https://encrypted.google.com/')
print r.status_code
if r.status_code == requests.codes.ok:
    print "ok";
Answer: Do a `line.strip()` before making the `head` call. There is a possibility that
the carriage return and/or line feed characters are part of the line you just
read.
Like this:
for line in f:
    r = requests.head(line.strip())
    print r.status_code
|
python maya - Select children 'nurbs' then hides key
Question: The script below selects all the child nurbCurves of the selected object. The
issue I'm having is that after running the script, none of the keys from the
children nodes appear in the timeline. Why is that? How can I correct this?
To test the script. Create a few nurb curve controls and animate the position.
Then parent them together so their parent is the same master control. Then
select the main control and run the script.
import maya.cmds as cmds
# Get selected objects
curSel = maya.cmds.ls(sl=True)
# Or, you can also specify a type in the listRelatives command
nurbsNodes = maya.cmds.listRelatives(curSel, allDescendents=True, noIntermediate=True, fullPath=True, type="nurbsCurve", path=True)
cmds.select(nurbsNodes)
Answer: You are selecting the `nurbsCurve` _shapes_, but your animation is on the
_transform_ nodes:
# list the shape nodes
nurbsShapes = maya.cmds.listRelatives(curSel, allDescendents=True, noIntermediate=True, fullPath=True, type="nurbsCurve", path=True)
# list the transform nodes to each shape node
nurbsTransforms = maya.cmds.listRelatives(nurbsShapes, type='transform', parent=True)
# select the transform nodes
cmds.select(nurbsTransforms)
|
Python: Reducing a List in a Dictionary
Question: I'm hoping this will be easy. I have a dictionary:
{
'key1': 'value1',
'key2': 'value2',
'key3': 'value3'
},
{
'key1': 'value4',
'key2': 'value5',
'key3': 'value6'
},
How can I reduce this to be the following:
{ 'key1': ['value1', 'value4'], 'key2': ['value2', 'value5'], 'key3': ['value3', 'value6'] }
Many thanks!
Answer: Like this:
from collections import defaultdict
d1 = { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3' }
d2 = { 'key1': 'value4', 'key2': 'value5', 'key3': 'value6' }
dout = defaultdict(list)
for item in d1:
    dout[item].append(d1[item])
for item in d2:
    dout[item].append(d2[item])
print dout
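If you have more than two of these dictionaries (for example, a whole list of them as shown in the question), a sketch of the same idea that loops over all of them:

from collections import defaultdict

dicts = [
    {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'},
    {'key1': 'value4', 'key2': 'value5', 'key3': 'value6'},
]

dout = defaultdict(list)
for d in dicts:
    for key, value in d.items():
        dout[key].append(value)

print dict(dout)
# e.g. {'key1': ['value1', 'value4'], 'key2': ['value2', 'value5'], 'key3': ['value3', 'value6']}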
|
np complete algorithm in python [solved]
Question: This question is closed.
All information about this question will be uploaded in a week. I am still
trying to figure out the full details. Still testing it as a module.
n is the number of elements and formula is the list of clauses.
This is an NP-complete problem.
The code tries to figure out whether any permutation of n boolean values
satisfies the formula.
booleanValues = [True, False] * n
allorderings = set(itertools.permutations(booleanValues, n))  # create possible combinations of variables that can check if formula is satisfiable or not
print(allorderings)
for potential in allorderings:
    l = []  # boolean value for each variable / different combination for each iteration
    for i in potential:
        l.append(i)
    #possible = [False]*n
    aclause = []
    for clause in formula:
        something = []
        # clause is [1,2,3]
        for item in clause:
            if item > 0:
                something.append(l[item-1])
            else:
                item = item * -1
                x = l[item-1]
                if x == True:
                    x = False
                else:
                    x = True
                something.append(x)
        counter = 0
        cal = False
        for thingsinclause in something:
            if counter == 0:
                cal = thingsinclause
                counter = counter + 1
            else:
                cal = cal and thingsinclause
                counter = counter + 1
        aclause.append(cal)
    counter2 = 0
    formcheck = False
    for checkformula in aclause:
        if counter2 == 0:
            formcheck = checkformula
            counter2 = counter2 + 1
        else:
            formcheck = formcheck or checkformula
    print("this combination works", checkformula)
Answer: Here is a corrected version:
import itertools

n = 4
formula = [[1, -2, 3], [-1, 3], [-3], [2, 3]]

allorderings = itertools.product([False, True], repeat=n)

for potential in allorderings:
    print("Initial values:", potential)
    allclauses = []
    for clause in formula:
        curclause = []
        for item in clause:
            x = potential[abs(item) - 1]
            curclause.append(x if item > 0 else not x)
        cal = any(curclause)
        allclauses.append(cal)
    print("Clauses:", allclauses)
    formcheck = all(allclauses)
    print("This combination works:", formcheck)
Points to consider:
1. Instead of introducing some complex -- and also wrong -- logic to find the conjunction and disjunction, you can use [`any`](https://docs.python.org/2/library/functions.html#any) and [`all`](https://docs.python.org/2/library/functions.html#all). That's cleaner and less prone to bugs.
2. The natural object to loop over is `itertools.product([False, True], repeat = n)`, that is, the set `[False, True]` of possible boolean values raised to the power of `n`. In other words, the [Cartesian product](http://en.wikipedia.org/wiki/Cartesian_product) of `n` copies of `[False, True]`. [Here](https://docs.python.org/2.7/library/itertools.html#itertools.product) is the documentation for itertools.product.
3. I introduced a bit more output to see how things are going. Here is the output I get with Python3 (Python2 adds parentheses but prints essentially the same):
* * *
Initial values: (False, False, False, False)
Clauses: [True, True, True, False]
This combination works: False
Initial values: (False, False, False, True)
Clauses: [True, True, True, False]
This combination works: False
Initial values: (False, False, True, False)
Clauses: [True, True, False, True]
This combination works: False
Initial values: (False, False, True, True)
Clauses: [True, True, False, True]
This combination works: False
Initial values: (False, True, False, False)
Clauses: [False, True, True, True]
This combination works: False
Initial values: (False, True, False, True)
Clauses: [False, True, True, True]
This combination works: False
Initial values: (False, True, True, False)
Clauses: [True, True, False, True]
This combination works: False
Initial values: (False, True, True, True)
Clauses: [True, True, False, True]
This combination works: False
Initial values: (True, False, False, False)
Clauses: [True, False, True, False]
This combination works: False
Initial values: (True, False, False, True)
Clauses: [True, False, True, False]
This combination works: False
Initial values: (True, False, True, False)
Clauses: [True, True, False, True]
This combination works: False
Initial values: (True, False, True, True)
Clauses: [True, True, False, True]
This combination works: False
Initial values: (True, True, False, False)
Clauses: [True, False, True, True]
This combination works: False
Initial values: (True, True, False, True)
Clauses: [True, False, True, True]
This combination works: False
Initial values: (True, True, True, False)
Clauses: [True, True, False, True]
This combination works: False
Initial values: (True, True, True, True)
Clauses: [True, True, False, True]
This combination works: False
|
python telnet server crash
Question: Whenever my client connects to my server via telnet
telnet myip 43
When the client hits Ctrl+C on his own machine continuously, it causes the
telnet server to crash... how can this be stopped? Also, are there any chances
that my server can be attacked by a buffer overflow?
This is my script(very basic script it is)
#!/usr/bin/env python
import sys
import socket
import urllib2

def main():
    s = socket.socket()  # Create a socket object
    host = ''
    BUFFER_SIZE = 2048
    port = 43  # Reserve a port for your service.
    s.bind((host, port))  # Bind to the port
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.listen(120)  # Now wait for client connection.
    while True:
        data = ''
        print 'waiting for connection...'
        c, addr = s.accept()  # Establish connection with client.
        print 'Got connection from', addr
        data = c.recv(BUFFER_SIZE)
        print 'requested website: '+data
        print x
        c.send(x)
        c.close()  # Close the connection

if __name__ == '__main__':
    main()
Answer: try [this](http://stackoverflow.com/questions/295135/turn-a-string-into-a-
valid-filename-in-python):
import string
valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits)

[...]

data = c.recv(BUFFER_SIZE)
data = ''.join(c for c in data if c in valid_chars)
print 'requested website: '+data
if len(data) > 0:
    try:
        x = urllib2.urlopen('localhost:2020/?id='+data).read()
        print x
        c.send(x)
    except:
        pass
c.close()  # Close the connection
Edit `valid_chars` to only allow characters allowed for your id argument.
|
How retrieve from Python win32evtlog rest of info?
Question: Using win32evtlog I can get the following info:
events = win32evtlog.ReadEventLog(loghandle, flags, 0)
while events:
    for event in events:
        print 'Event Category:', event.EventCategory
        print 'Time Generated:', event.TimeGenerated
        print 'Source Name:', event.SourceName
        print 'Event ID:', event.EventID
        print 'Event Type:', event.EventType
        data = event.StringInserts
        if data:
            print 'Event Data:'
            for msg in data:
                print msg
    events = win32evtlog.ReadEventLog(loghandle, flags, 0)
But if we look at event structure:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System>
<Provider Name="PRNAME" />
<EventID Qualifiers="0">18</EventID>
<Level>0</Level>
<Task>0</Task>
<Keywords>0xa0000000000000</Keywords>
<TimeCreated SystemTime="2012-04-03T05:30:02.000000000Z" />
<EventRecordID>2387524</EventRecordID>
<Channel>PRNAME</Channel>
<Computer>A00001</Computer>
<Security />
</System>
- <EventData>
<Data>tst</Data>
<Binary>01020304</Binary>
</EventData>
</Event>
We can find there additional info:
* Channel name - that is different from Provider name
* EventRecordId
* Computer
* Binary
and others. How do I get them? I especially need Binary and EventRecordId, but I
guess there has to be a way to get all the data from the event log.
Answer: If you don't mind using BeautifulSoup on the XML-formatted data, then here
is an example:
from bs4 import BeautifulSoup
soup = BeautifulSoup(event_log_as_xml)
print soup.find("channel").text
print soup.find("eventrecordid").text
print soup.find("computer").text
print soup.find("binary").text
|
python web scraping of a dynamically loading page
Question: Let's say I want to scrape this page: <https://twitter.com/nfl>
from bs4 import BeautifulSoup
import requests
page = 'https://twitter.com/nfl'
r = requests.get(page)
soup = BeautifulSoup(r.text)
print soup
The more I scroll down on the page, the more results show up. But this above
request only gives me the initial load. How do I get all the information of
the page as if I were to manually scroll down?
Answer: First parse the `data-max-id="451819302057164799"` value from the html source.
Then using the id `451819302057164799` construct an url like below:
`https://twitter.com/i/profiles/show/nfl/timeline?include_available_features=1&include_entities=1&max_id=451819302057164799`
Now get the html source of the link and parse using `simplejson` or any other
json library.
Remember, the next page load (when you scroll down) is available from the value
`"max_id":"451369755908530175"` in that json.
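A minimal sketch of that flow (the timeline URL comes from the answer above; the exact JSON keys, such as `items_html`, are assumptions and may differ):

import json
import requests

max_id = '451819302057164799'  # parsed from data-max-id in the initial HTML
url = ('https://twitter.com/i/profiles/show/nfl/timeline'
       '?include_available_features=1&include_entities=1&max_id=' + max_id)

data = json.loads(requests.get(url).text)
print data.get('max_id')                 # value to use for the next "scroll down" request
print data.get('items_html', '')[:200]   # rendered tweet HTML, if that key is present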
|
Sending RabbitMQ messages via websockets
Question: Looking for some code samples to solve this problem:
Would like to write some code (Python or Javascript) that would act as a
subscriber to a RabbitMQ queue so that on receiving a message it would
broadcast the message via websockets to any connected client.
I've looked at Autobahn and node.js (using
"[amqp](https://github.com/postwait/node-amqp)" and
"[ws](https://github.com/einaros/ws)" ) but cannot get things to work as
needed. Here's the server code in JavaScript using node.js:
var amqp = require('amqp');
var WebSocketServer = require('ws').Server

var connection = amqp.createConnection({host: 'localhost'});
var wss = new WebSocketServer({port:8000});

wss.on('connection', function(ws){
    ws.on('open', function() {
        console.log('connected');
        ws.send(Date.now().toString());
    });
    ws.on('message', function(message){
        console.log('Received: %s', message);
        ws.send(Date.now().toString());
    });
});

connection.on('ready', function(){
    connection.queue('MYQUEUE', {durable:true, autoDelete:false}, function(queue){
        console.log(' [*] Waiting for messages. To exit press CTRL+C')
        queue.subscribe(function(msg){
            console.log(" [x] Received from MYQUEUE %s", msg.data.toString('utf-8'));
            payload = msg.data.toString('utf-8');
            // HOW DOES THIS NOW GET SENT VIA WEBSOCKETS ??
        });
    });
});
Using this code, I can successfully subscribe to a queue in Rabbit and receive
any messages that are sent to the queue. Similarly, I can connect a websocket
client (e.g. a browser) to the server and send/receive messages. BUT ... how
can I send the payload of the Rabbit queue message as a websocket message at
the point indicated ("HOW DOES THIS NOW GET SENT VIA WEBSOCKETS") ? I think
it's something to do with being stuck in the wrong callback or they need to be
nested somehow ...?
Alternatively, if this can be done easier in Python (via Autobahn and pika)
that would be great.
Thanks !
Answer: One way to implement your system is to use Python with
[tornado](http://www.tornadoweb.org).
Here the server:
import tornado.ioloop
import tornado.web
import tornado.websocket
import os
import pika
from threading import Thread

clients = []

def threaded_rmq():
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"));
    print 'Connected:localhost'
    channel = connection.channel()
    channel.queue_declare(queue="my_queue")
    print 'Consumer ready, on my_queue'
    channel.basic_consume(consumer_callback, queue="my_queue", no_ack=True)
    channel.start_consuming()

def consumer_callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)
    for itm in clients:
        itm.write_message(body)

class SocketHandler(tornado.websocket.WebSocketHandler):
    def open(self):
        print "WebSocket opened"
        clients.append(self)

    def on_message(self, message):
        self.write_message(u"You said: " + message)

    def on_close(self):
        print "WebSocket closed"
        clients.remove(self)

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        print "get page"
        self.render("websocket.html")

application = tornado.web.Application([
    (r'/ws', SocketHandler),
    (r"/", MainHandler),
])

if __name__ == "__main__":
    thread = Thread(target = threaded_rmq)
    thread.start()
    application.listen(8889)
    tornado.ioloop.IOLoop.instance().start()
and here the html page:
<html>
<head>
<script src="//code.jquery.com/jquery-1.11.0.min.js"></script>
<script>
$(document).ready(function() {
    var ws;
    if ('WebSocket' in window) {
        ws = new WebSocket('ws://localhost:8889/ws');
    }
    else if ('MozWebSocket' in window) {
        ws = new MozWebSocket('ws://localhost:8889/ws');
    }
    else {
        alert("<tr><td> your browser doesn't support web socket </td></tr>");
        return;
    }
    ws.onopen = function(evt) { alert("Connection open ...")};
    ws.onmessage = function(evt){
        alert(evt.data);
    };
    function closeConnect(){
        ws.close();
    }
});
</script>
</head>
</html>
So when you publish a message to "my_queue", the message is redirected to all
connected web pages.
I hope it can be useful.
EDIT**
Here <https://github.com/Gsantomaggio/rabbitmqexample> you can find the
complete example
|
I'm receiving the error 'except MySQLdb.error, e' Syntax Error - invalid syntax when trying to write my scrapy spider to mysql,
Question: Here is my pipelines.py and I'm receiving an error on line 18.
import sys;sys.path.append("/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages")
import MySQLdb
import hashlib
from scrapy.exceptions import DropItem
from scrapy.http import Request

class TestPipeline(object):

    def __init__(self):
        self.conn = MySQLdb.connect(user='test', passwd='password', db='c1024403', host='ephesus.cs.cf.ac.uk', charset='utf8', use_unicode=True)
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        try:
            self.cursor.execute("""INSERT INTO test (test1, test2) VALUES (%s, %s)""", (item['Country'], item['Qualification']))
            self.conn.commit()
        except MySQLdb.Error, e:
            print 'Error %d: %s' % (e.args[0], e.args[1])
            sys.exit(1)
        return item
Heres the error -
File "project\pipelines.py", line 18
except MySQLdb.Error, e:
^
SyntaxError: invalid syntax
I've installed mysql-python and visual c++ 2008 express, I don't understand
what the error means as I can't find anything about it anywhere else on the
internet.
Answer: You are running your code with `Python 3.x`, but the syntax you used for the
`try..except` block is `Python 2.x` syntax.
If you want to run your code with `Python 3.x`, then change this line:
except MySQLdb.Error, e:
To:
except MySQLdb.Error as e:
And if you want this section of code to work with `Python 2.x` and also `Python
3.x`, then change it to:
except MySQLdb.Error:
    e = sys.exc_info()[1]
Read [more](https://docs.python.org/3/howto/pyporting.html#capturing-the-
currently-raised-exception).
But according to your `print` statement, you wrote your script for `Python
2.x`, so it's better to run your code with `Python 2.x` instead of `Python
3.x`.
Also, this `sys.path.append("../python2.7/site-packages")` line in the
first line of your script is strange.
Also, the indentation of the first code you pasted was wrong, and I think
you are still using that; please use the current edited version that is now in
your question.
|
Python Can't Find Modules
Question: I'm running a Python script which resizes images and converts them to JPG and
RGB. This script worked perfectly until I upgraded my computer's HD and moved
everything over from my Time Machine backup. Now when I try to run the script
it can't find any modules, specifically the Image module (I use Pillow).
Traceback (most recent call last):
File "processImgs.py", line 1, in <module>
import os, sys, argparse, shutil, imgFunctions
File "/web/script/python/img_processing/imgFunctions.py", line 1, in <module>
import os, sys, Image, shutil, re
ImportError: No module named Image
I am using Homebrew to manage my modules, and "brew list" outputs the
following:
freetype graphicsmagick libpng libtool little-cms2 openssl pkg-config readline webp
gdbm jpeg libtiff little-cms openjpeg pillow python sqlite
If i run "pip list" I get:
Pillow (2.3.0)
pip (1.5.4)
setuptools (2.2)
wsgiref (0.1.2)
If i run "help(modules)" in python, the Image module isn't listed.
Answer: Could be your python path isn't set up correctly after the move.
[see here](https://github.com/Homebrew/homebrew/wiki/Homebrew-and-Python) for
the homebrew/python docs page and it seems that a reinstall of homebrew may
fix it.
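A quick check from the interpreter, using only the standard library, shows which Python is actually running and where it looks for modules:

import sys
print sys.executable   # the interpreter that is actually running
print sys.path         # the module search path; Pillow's site-packages should be listed here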
|
Jython output formats, adding symbol at N:th character
Question: I have a problem that probably is very easy to solve. I have a script that
takes numbers from various places, does math with them, and then prints the
results as strings.
This is a sample:
type("c", KEY_CTRL)
LeInput = Env.getClipboard().strip() #Takes stuff from clipboard
LeInput = LeInput.replace("-","") #Quick replace
Variable = int(LeInput) + 5 #Simple math operation
StringOut = str(Variable) #Converts it to string
popup(StringOut) #shows result for the amazed user
But what I want to do is to add the "-" signs again as per XXXX-XX-XX, but I
have no idea how to do this with regex etc. The only solution I have is
dividing it by 10^N to split it into smaller and smaller integers. As an
example:
int 543442/100 = 5434, giving the first string the number 5434, and then repeating the
process until I have split it enough times to get my 5434-42 or whatever.
**So how do I insert any symbol at the N:th character?**
OK, so here is the Jython solution based on the answer from Tenub
import re
strOut = re.sub(r'^(\d{4})(.{2})(.{2})', r'\1-\2-\3', strIn)
This can be worth noting when doing Regex with Jython:
> The solution is to use Python's raw string notation for regular expression
> patterns; backslashes are not handled in any special way in a string literal
> prefixed with 'r'. So r"\n" is a two-character string containing '\' and
> 'n', while "\n" is a one-character string containing a newline. Usually
> patterns will be expressed in Python code using this raw string notation.
Here is a working example <http://regex101.com/r/oN2wF1>
Answer: In that case you could do a replace with the following:
(\d{4})(\d{2})(\d+)
to
$1-$2-$3
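In Python/Jython's `re.sub` the backreferences are written `\1`, `\2`, `\3` rather than `$1`, `$2`, `$3`; a small sketch with a hypothetical 8-digit input:

import re

str_in = "20140406"                                   # hypothetical XXXXXXXX input
str_out = re.sub(r'^(\d{4})(\d{2})(\d+)$', r'\1-\2-\3', str_in)
print str_out                                         # -> 2014-04-06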
|
Doctest and Decorators in Python
Question: I was trying to use Python decorator to catch exceptions and log the
exceptions.
import os.path
import shutil

class log(object):
    def __init__(self, f):
        print "Inside __init__()"
        self.f = f

    def __call__(self, *args):
        print "Inside __call__()"
        try:
            self.f(*args)
        except Exception:
            print "Sorry"

@log
def testit(a, b, c):
    print a,b,c
    raise RuntimeError()

if __name__ == "__main__":
    testit(1,2,3)
It works fine
Desktop> python deco.py
Inside __init__()
Inside __call__()
1 2 3
Sorry
The issue is that when I tried to use for testing with doctest
@log
def testit(a, b, c):
    """
    >>> testit(1,2,3)
    """
    print a,b,c
    raise RuntimeError()

if __name__ == "__main__":
    import doctest
    doctest.testmod()
nothing seems to be happening.
Desktop> python deco2.py
Inside __init__()
What's wrong with this?
Answer: The decorated function (which is actually a class instance) doesn't get the
`__doc__` attribute of the original function (which is what `doctest` parses).
You _could_ just copy `__doc__` over to the class instance, but ... honestly,
I don't really see the need for a class at all here -- you'd probably be
better off just using `functools.wraps`:
import functools

def log(func):
    @functools.wraps(func)
    def wrapper(*args):
        try:
            return func(*args)
        except Exception:
            print "sorry"
    return wrapper
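With `functools.wraps` the wrapper keeps the original `__doc__`, so doctest can find the example again. A sketch of the decorated function with the expected output filled in (note the doctest has to expect both the printed line and the "sorry" from the wrapper):

@log
def testit(a, b, c):
    """
    >>> testit(1, 2, 3)
    1 2 3
    sorry
    """
    print a, b, c
    raise RuntimeError()

if __name__ == "__main__":
    import doctest
    doctest.testmod()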
|
convert a code from java into python
Question: I am trying to change a program from Java into Python. When I finally fixed
three problems in my code and ran it, it shows me this error.
Traceback (most recent call last):
File "serial.py", line 4, in <module>
import serial
File "/home/pi/serial.py", line , in <module>
port=serial.Serial(
AttributeError:'module' object has no attribute 'Serial'
How can I fix it?
If you need the source code and what the problems were that I had to fix, let
me know. Please help me with this.
Answer: I think `import serial` is wrong.
Did you try:
from serial import Serial
<http://pyserial.sourceforge.net/shortintro.html>
|
How to create a matrix? (Python)
Question: I want to create a 2x3 matrix. (2 rows, 3 columns) When I run my code, I get
the matrix in parentheses, which is incorrect.
def fill_matrix(numrows, numcols, val):
    matrix = [[val for i in range(numrows)] for j in range(numcols)]
    return (numrows, numcols, val)
If I choose to create a 2x2 matrix and fill all holes with 1, I'm supposed to
get this: [[1, 1], [1, 1]]
But I get this instead: (2, 2, 1)
Answer: Your `fill_matrix` function returns the tuple `(numrows, numcols, val)`, which
is why you're getting `(2,2,1)`. You're not returning the matrix at all.
You could try:
def fill_matrix(numrows, numcols, val):
    return [[val for i in range(numrows)] for j in range(numcols)]
to just return the matrix itself.
If you're working with matricies you might also consider using
[numpy](http://www.numpy.org/) and doing:
import numpy as np
np.ones((2,2))
or:
def fill_matrix(numrows, numcols, val):
    return np.ones((numrows, numcols)) * val
|
Converting data into dictionary of Dictionaries in python
Question: I am reading a file in python using a key value pair, for example
Mac:aaaa
IP:bbbbb
Name:dddd
Mac:wwwww
IP:fffff
Name:sssss
Mac:hhhh
IP:ddd
Name:fff
So, my query is: I need to build a dictionary of dictionaries from the above
data so as to format it as JSON.
Answer: I assume you mean a list of dictionaries, not a dictionary of dictionaries
from operator import methodcaller
fdata = open("data.txt").read().split()
split2 = methodcaller("split",":")
print map(dict, zip(*[iter(map(split2, fdata))]*3))
is a fun way to do it ;)
however if you did want a dictionary of dictionaries as your title suggests
you can simply
dict(enumerate(map(dict, zip(*[iter(map(split2, fdata))]*3))))
[edited to be more pep-8 compliant :P ]
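If the one-liners are too cryptic, a more readable sketch of the same grouping (assuming records of exactly three `key:value` lines each, as in the question), ending with the JSON output the question asks for:

import json

with open("data.txt") as f:
    lines = [line.strip() for line in f if line.strip()]

records = []
for i in range(0, len(lines), 3):                        # walk the file three lines at a time
    records.append(dict(line.split(":", 1) for line in lines[i:i+3]))

print json.dumps(records, indent=2)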
|
How do I fix the deprecation warning that comes with pylab.pause?
Question: A short python program to demonstrate a bug I keep running into:
import pylab
pylab.ion()
pylab.title('doom')
pylab.pause(0)
If I run it, it works fine, but I get this warning:
/usr/lib/pymodules/python2.7/matplotlib/backend_bases.py:2280: MatplotlibDeprecationWarning: Using default event loop until function specific to this GUI is implemented
If I remove the pause line, then I don't get the warning, but my plot ceases
to exist as soon as the program finishes.
I should like an elegant way to get the behaviour without the warning. Can
anyone help?
Answer: It looks like
[waitforbuttonpress](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.waitforbuttonpress)
should do what you want. Just replace the `pause` with
`pylab.waitforbuttonpress(timeout=-1)`.
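Applied to the original snippet, a minimal sketch:

import pylab

pylab.ion()
pylab.title('doom')
# blocks until a key or mouse button is pressed, without the deprecation warning
pylab.waitforbuttonpress(timeout=-1)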
|
Is flush method missing in couchbase python client?
Question: I did not find the flush button for the bucket in the Couchbase admin UI at port 8091.
Maybe because of this: <http://www.couchbase.com/issues/browse/MB-5351>.
Then I saw this [How to delete all items in the
bucket?](http://stackoverflow.com/questions/15966769/how-to-delete-all-items-
in-the-bucket), so I wanted to do the flush in the Python client.
import sys
from couchbase import Couchbase
from couchbase.exceptions import CouchbaseError
try:
    client = Couchbase.connect(bucket='production',host='localhost',port=8091)
except CouchbaseError as e:
    print " Sorry , we could not create connection to bucket specified , due to " , e
else :
    print "Successfully made the connection to bucket "
On this `client` I did not find a method to flush. I tried IntelliSense in the IDE.
Please guide me to flush the bucket via the Python client.
Answer: It appears that the couchbase python SDK does not currently provide a flush()
method. But since the flush method is provided through the couchbase REST API
you may use it to flush your bucket.
Reference: Couchbase REST API, Buckets API -
<http://docs.couchbase.com/admin/admin/REST/rest-bucket-intro.html>
The python SDK provides an **Admin** object which you can use to perform your
administrator tasks using REST using its **http_request** method. source code:
<https://github.com/couchbase/couchbase-python-
client/blob/master/couchbase/admin.py>
description of the Admin class (from source):
> An administrative connection to a Couchbase cluster.
>
>
> With this object, you can do things which affect the cluster, such as
> modifying buckets, allocating nodes, or retrieving information about
> the cluster.
>
> This object should **not** be used to perform Key/Value operations. The
> :class:`couchbase.bucket.Bucket` is used for that.
>
Admin.http_request method description (from source):
>
> Perform an administrative HTTP request. This request is sent out to
> the administrative API interface (i.e. the "Management/REST API")
> of the cluster.
>
> See <LINK?> for a list of available comments.
>
> Note that this is a fairly low level function. This class will with
> time contain more and more wrapper methods for common tasks such
> as bucket creation or node allocation, and this method should
> mostly be used if a wrapper is not available.
>
* * *
Example:
First make sure that the Flush option is enabled for your bucket.
I created a test bucket named "default" for the example and created 2
documents within the bucket.
import sys
from couchbase import Couchbase
from couchbase.exceptions import CouchbaseError
#import Admin module
from couchbase.admin import Admin
#make an administrative connection using Admin object
try:
    admin = Admin(username='Administrator',password='password',host='localhost',port=8091)
except CouchbaseError as e:
    print " Sorry , we could not create admin connection , due to " , e
else :
    print "Successfully made an admin connection "

#retrieve bucket information for bucket named "default"
# "default" is just the name of the bucket I set up for trying this out
try:
    htres = admin.http_request("/pools/default/buckets/default")
except Exception as e:
    print "ERROR: ", e
    sys.exit()

#print the current number of items in the "default" bucket
# "default" is just the name of the bucket I set up for trying this out
print "# of items before flush: ", htres.value['basicStats']['itemCount']

#flush bucket
try:
    #the bucket information request returned the path to the REST flush method for this bucket
    # the flush method requires a POST request
    htres = admin.http_request(htres.value['controllers']['flush'],"POST")
except Exception as e:
    print "ERROR: ", e
    sys.exit()

#re-retrieve bucket information for bucket named "default"
try:
    htres = admin.http_request("/pools/default/buckets/default")
except Exception as e:
    print "ERROR: ", e
    sys.exit()

#print the number of items in the "default" bucket (after flush)
print "# of items after flush: ", htres.value['basicStats']['itemCount']
result:
>
> Successfully made an admin connection
> # of items before flush: 2
> # of items after flush: 0
>
|
display pdf with python using pdfviewer
Question: I'm trying to display a PDF using the pdfviewer lib from wx, but I can't find the
module. I already have wxPython installed; is it deprecated?
I'm following this tutorial:
<http://wxpython.org/Phoenix/docs/html/lib.pdfviewer.html#module-
lib.pdfviewer>
And I'm getting : `ImportError: No module named pdfviewer`
Answer: I am not allowed to write comments, so sorry for asking this question as my
answer, but have you seen the comment on the website?
> The viewer uses pyPDF2 or pyPdf, if neither of them are installed an import
> error exception will be thrown.
|
How to retrieve an output of a Python script using a Java program
Question: I am having a problem with my code. I have a Java class that calls a Python
script that gets GPS data. The problem is that I can call the script, but I want
to return the data to a string on the Java side so I can use it in an array
later. So basically all I want is to take the data from the Python script,
bring it to Java as a string, and place it into an array.
**python code**
#! /usr/bin/python

import os
from gps import *
from time import *
import time
import threading

gpsd = None #seting the global variable

os.system('clear') #clear the terminal (optional)

class GpsPoller(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        global gpsd #bring it in scope
        gpsd = gps(mode=WATCH_ENABLE) #starting the stream of info
        self.current_value = None
        self.running = True #setting the thread running to true

    def run(self):
        global gpsd
        while gpsp.running:
            gpsd.next() #this will continue to loop and grab EACH set of gpsd info to clear the buffer

if __name__ == '__main__':
    gpsp = GpsPoller() # create the thread
    try:
        gpsp.start() # start it up
        while True:
            #It may take a second or two to get good data
            #print gpsd.fix.latitude,', ',gpsd.fix.longitude,' Time: ',gpsd.utc
            os.system('clear')
            print
            print ' GPS reading'
            print '----------------------------------------'
            print 'latitude ' , gpsd.fix.latitude
            print 'longitude ' , gpsd.fix.longitude
            time.sleep(5) #set to whatever
    except (KeyboardInterrupt, SystemExit): #when you press ctrl+c
        print "\nKilling Thread..."
        gpsp.running = False
        gpsp.join() # wait for the thread to finish what it's doing
    print "Done.\nExiting."
**java code**
Process p4 = null;
//String commands4="sudo python test1.py";
// provision gpio pin #01 as an output pin and turn on
try {
    p4 = Runtime.getRuntime().exec("sudo python test1.py");
    InputStream is = p4.getInputStream();
    int i = 0;
    while( (i = is.read() ) != -1) {
        System.out.print((char)i);
        lat
        break;
    }
Answer: You might want to check for the `p4.exitCode()` to see if the python script
was successfully invoked, and then if it wasn't successful you may check for
`p4.getErrorStream()` to see why.
Either way, generally the code should handle different exit codes because you
may never be 100% confident that the calling to python script will always be
successful.
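On the Python side it also helps to print each reading as one easy-to-parse line, so the Java loop above can read a line and split it into its array. A minimal sketch with hypothetical values standing in for the `gpsd.fix` fields:

import sys

latitude, longitude = 51.4816, -3.1791   # hypothetical values; the real script would
                                         # use gpsd.fix.latitude and gpsd.fix.longitude
print "%f,%f" % (latitude, longitude)    # one comma-separated line per reading
sys.stdout.flush()                       # flush so the Java InputStream sees it immediately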
|
Convert tsv file so i can use it for nodes and edges in python
Question: I have this TSV file I would like to read and somehow count the number of
nodes in a path.
This is what parts of the TSV file look like:
6a3701d319fc3754 1297740409 166 14th_century;15th_century;16th_century;Pacific_Ocean;Atlantic_Ocean;Accra;Africa;Atlantic_slave_trade;African_slave_trade NULL
3824310e536af032 1344753412 88 14th_century;Europe;Africa;Atlantic_slave_trade;African_slave_trade 3
The paths are only the ones looking like this: 14th_century;15th_century;
separated by ';'.
My code so far:
import networkx as nx
fh = open("test.tsv", 'rb')
G = nx.read_edgelist("test.tsv", create_using=nx.DiGraph())
print G.nodes()
print G.edges()
So my question is: how do I count the number of nodes touched by a path?
Answer: I'm using the pandas library here for speed, you can install using `pip
install pandas` and also check here: <http://pandas.pydata.org/>
Firstly construct our dataframe from your sample code:
In [39]:
temp = """6a3701d319fc3754 1297740409 166 14th_century;15th_century;16th_century;Pacific_Ocean;Atlantic_Ocean;Accra;Africa;Atlantic_slave_trade;African_slave_trade NULL
3824310e536af032 1344753412 88 14th_century;Europe;Africa;Atlantic_slave_trade;African_slave_trade 3"""
# construct the dataframe
# in your case replace io.String() with the path to your tsv file
df = pd.read_csv(io.StringIO(temp), sep='\s+', header=None, names=['a','b','c','d','e'])
df
Out[39]:
a b c \
0 6a3701d319fc3754 1297740409 166
1 3824310e536af032 1344753412 88
d e
0 14th_century;15th_century;16th_century;Pacific... NaN
1 14th_century;Europe;Africa;Atlantic_slave_trad... 3
[2 rows x 5 columns]
In [65]:
# use itertools to flatten our list of lists
import itertools
def to_edge_list(x):
    # split on semi-colon
    split_list = x.split(';')
    #print(split_list)
    # get our main node
    primary_node = split_list[0]
    # construct our edge list
    edge_list=[]
    # create a list comprehension from the split list
    edge_list = [(primary_node, x) for x in split_list[1:] ]
    #print(edge_list)
    return edge_list
# now use itertools to flatten the list of lists into a single list
combined_edge_list = list(itertools.chain.from_iterable(df['d'].apply(to_edge_list)))
print(combined_edge_list)
[('14th_century', '15th_century'), ('14th_century', '16th_century'), ('14th_century', 'Pacific_Ocean'), ('14th_century', 'Atlantic_Ocean'), ('14th_century', 'Accra'), ('14th_century', 'Africa'), ('14th_century', 'Atlantic_slave_trade'), ('14th_century', 'African_slave_trade'), ('14th_century', 'Europe'), ('14th_century', 'Africa'), ('14th_century', 'Atlantic_slave_trade'), ('14th_century', 'African_slave_trade')]
# Now construct our networkx graph from the edge list
In [66]:
import networkx as nx
G = nx.MultiDiGraph()
G.add_edges_from(combined_edge_list)
G.edges()
Out[66]:
[('14th_century', '15th_century'),
('14th_century', 'Africa'),
('14th_century', 'Africa'),
('14th_century', 'Atlantic_slave_trade'),
('14th_century', 'Atlantic_slave_trade'),
('14th_century', 'African_slave_trade'),
('14th_century', 'African_slave_trade'),
('14th_century', '16th_century'),
('14th_century', 'Accra'),
('14th_century', 'Europe'),
('14th_century', 'Atlantic_Ocean'),
('14th_century', 'Pacific_Ocean')]
draw the graph (doesn't look pretty but what the hell):

|
Python clear the screen code?
Question: I am still trying to make my quiz, but I want to enter a clear-the-screen code
for my questions. So, after researching, I found a code that works, but it
puts my questions way at the bottom of the screen. Here is a screenshot I
took:

Here is an example of the clear-the-screen code I found:
print "\n" * 40
So I tried changing the "40" to "20" but there was no effect. I am operating
on Mac, so the
import os
os.system('blahblah')
does not work. Please help!
Answer: As @pNre noted, [this
question](http://stackoverflow.com/questions/2084508/clear-terminal-in-python)
shows you how to do this with ASCI escape sequences.
If you'd like to do this using the `os` module, then on Mac the command is
`clear`.
import os
os.system('clear')
Note: The fact that you are on a Mac does not mean `os.system('blahblah')`
will not work. It is more likely the command you are passing to `os.system`
was erroneous.
See answer below for how to do this on Windows/Linux.
|
Random Gaussian issues
Question: I'm trying to generate random.gauss numbers but I get an error message. Here
is my code:
import sys,os
import numpy as np
from random import gauss
previous_value1=1018.163072765074389
previous_value2=0.004264112033664
alea_var_n=random.gauss(1,2)
alea_var_tau=random.gauss(1,2)
new_var_n= previous_value1*(1.0+alea_var_n)
new_var_tau=previous_value2*(1.0+alea_var_tau)
print 'new_var_n',new_var_n
print 'new_var_tau',new_var_tau
I got this error:
Traceback (most recent call last):
File "lolo.py", line 15, in <module>
alea_var_n=random.gauss(1,2)
AttributeError: 'builtin_function_or_method' object has no attribute 'gauss'
Does someone know what's wrong? I'm a newbie with Python. Or is it a numpy version
problem?
Answer: For a faster option, see Benjamin Bannier's solution (which I gave a +1 to).
Your present code that you posted will not work for the following reason: your
import statement
from random import gauss
adds `gauss` to your namespace but not `random`. You need to do this instead:
alea_var_n = gauss(1, 2)
The error in your post, however, is not the error you should get when you run
the code that you have posted above. Instead, you will get the following
error:
NameError: name 'random' is not defined
Are you sure you have posted the code that generated that error? Or have you
somehow included the wrong error in your post?
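In other words, either import style works as long as the call matches it; a minimal sketch:

import random
print random.gauss(1, 2)      # module-qualified call

from random import gauss
print gauss(1, 2)             # bare name, matching the original import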
|
Can't pass a QAbstractListModel to QML
Question: I'm following [this PySide tutorial](http://qt-project.org/wiki/Selectable-
list-of-Python-objects-in-QML) as close as possible using PyQt5. When I run my
code, I get this error: `ReferenceError: pythonListModel is not defined`, and
the list shows up black with no items.
This is my code
def main():
    platform = Platform("Windows")
    platform_wrp = qml_platforms.PlatformsWrapper(platform)
    platform_model = qml_platforms.PlatformsListModel([platform_wrp])
    app = QGuiApplication(sys.argv)
    engine = QQmlApplicationEngine(QUrl("main.qml"))
    context = engine.rootContext()
    context.setContextProperty('pythonListModel', platform_model)
    window = engine.rootObjects()[0]
    window.show()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
my model and wrapper
class PlatformsWrapper(QObject):
    def __init__(self, platform):
        QObject.__init__(self)
        self.platform = platform

    def _full_name(self):
        return str(self.platform.full_name)

    changed = pyqtSignal()

    full_name = pyqtProperty("QString", _full_name, notify=changed)


class PlatformsListModel(QAbstractListModel):
    def __init__(self, platforms):
        QAbstractListModel.__init__(self)
        self.platforms = platforms

    def rowCount(self, parent=QModelIndex()):
        return len(self.platforms)

    def data(self, index):
        if index.isValid():
            return self.platforms[index.row()]
        return None
and my QML
import QtQuick 2.1
import QtQuick.Controls 1.1

ApplicationWindow{
    ListView {
        id: pythonList
        width: 400
        height: 200
        model: pythonListModel

        delegate: Component {
            Rectangle {
                width: pythonList.width
                height: 40
                color: ((index % 2 == 0)?"#222":"#111")

                Text {
                    id: title
                    elide: Text.ElideRight
                    text: model.platform.full_name
                    color: "white"
                    font.bold: true
                    anchors.leftMargin: 10
                    anchors.fill: parent
                    verticalAlignment: Text.AlignVCenter
                }

                MouseArea {
                    anchors.fill: parent
                }
            }
        }
    }
}
Why can't Qt find my contextProperty?
Answer: The problem is that "main.qml" is loaded before you set context property. Try
load file after you setup your context:
def main():
    platform = Platform("Windows")
    platform_wrp = qml_platforms.PlatformsWrapper(platform)
    platform_model = qml_platforms.PlatformsListModel([platform_wrp])
    app = QGuiApplication(sys.argv)
    engine = QQmlApplicationEngine()
    context = engine.rootContext()
    context.setContextProperty('pythonListModel', platform_model)
    engine.load( QUrl("main.qml") ) #load after context setup
    window = engine.rootObjects()[0]
    window.show()
    sys.exit(app.exec_())
|
Formatting a string by word at a specific character length
Question: I don't see this question anywhere in the database - if it's a duplicate,
please let me know!
I'm trying to format a string to a certain length by word; I have a string of
arbitrary length and I would simply like to add newlines in every **n**
characters, separated by word so that the string is not split in the middle of
a word.
str = "This is a string with length of some arbitrary number greater than 20"

count = 0
tmp = ""
for word in str.split():
    tmp += word + " "
    count += len(word + " ")
    if count > 20:
        tmp += "\n"
        count = 0
str = tmp
print str
I'm certain that there is an embarrassingly simple Pythonic way to do this,
but I don't know what it is.
Suggestions?
Answer: Use the [textwrap](http://docs.python.org/library/textwrap.html) module. For
your case
[textwrap.fill](https://docs.python.org/2/library/textwrap.html#textwrap.fill)
should have it:
>>> import textwrap
>>> s = "This is a string with length of some arbitrary number greater than 20"
>>> print textwrap.fill(s, 20)
This is a string
with length of some
arbitrary number
greater than 20
|
How to output integer arrays to file in python?
Question: I have a 3000000-int-long array which I want to output to a file. How can I do
that? Also, is this
for i in range(1000):
    for k in range(1000):
        (r, g, b) = rgb_im.getpixel((i, k))
        rr.append(r)
        gg.append(g)
        bb.append(b)
d.extend(rr)
d.extend(gg)
d.extend(bb)
a good practice to join arrays together?
All of the arrays are declared like this: `d = array('B')`
EDIT: Managed to output all ints delimited by ' ' with this:
from PIL import Image
import array

side = 500

for j in range(1000):
    im = Image.open(r'C:\Users\Ivars\Desktop\RS\Shape\%02d.jpg' % (j))
    rgb_im = im.convert('RGB')
    d = array.array('B')
    rr = array.array('B')
    gg = array.array('B')
    bb = array.array('B')
    f = open(r'C:\Users\Ivars\Desktop\RS\ShapeData\%02d.txt' % (j), 'w')
    for i in range(side):
        for k in range(side):
            (r, g, b) = rgb_im.getpixel((i, k))
            rr.append(r)
            gg.append(g)
            bb.append(b)
    d.extend(rr)
    d.extend(gg)
    d.extend(bb)
    o = ' '.join(str(t) for t in d)
    print('#', j, ' - ', len(o))
    f.write(o)
    f.close()
Answer: if you're using python >= 2.6 then you can use the print function from
`__future__`!

from __future__ import print_function

#your code

# This will print out a string representation of the list to the file.
# If you need it formatted differently, then you'll have to construct the string yourself
print(d, file=open('/path/to/file.txt','w'))

#you can join the list items (converted to str) to get only the numbers
print("".join(str(t) for t in d), file=open('/path/to/file.txt','w'))
This has the side effect of turning print from a statement into a function, so
you'll have to wrap whatever you want printed in ()
|
How to pass a dict to a python thread that can be changed from outside of thread without using queue
Question: I would like to have a shared dict between my threads without using Queue. I
did it before by making a thread with an object in Python, but now I want to make
a thread with just one function of the object, not all of it. My simplified code is
as follows:
#!/usr/bin/python
import threading
import time

class WebSocket(threading.Thread):
    def __init__(self, server=1, sock=1, address=1):
        self.s1=server
        self.s2=sock
        self.s3=address
        print "salam"

    def CheckThread(self):
        self.a={'1':1, '2':2}
        s=threading.Thread(target=self.SockFun, kwargs=self.a)
        s.deamon=True
        s.start()
        while(1):
            self.a['1']=self.a['1']+1
            time.sleep(1)

    def SockFun(self,**kwargs):
        pass

class SimpleEcho(WebSocket):
    def SockFun(self,**kwargs):
        while(1):
            print kwargs
            time.sleep(1)

    def handleMessage(self):
        self.CheckThread()

s=SimpleEcho()
s.handleMessage()
My printed dict element '1' of kwargs does not increase as expected inside
the thread.
Answer: Using the `kwargs` mechanism causes the dict elements to be copied into
`SimpleEcho`. After they are there, changing the original doesn't make a
difference to what is printed. A quick way to see this is to change `print
kwargs` to `print self.a`.
Alternatively you can change `kwargs=self.a` to `kwargs={"argh":self.a}`. This
way, `kwargs` will still be copied, but that copy will contain a reference to
the original `self.a`.
And shouldn't you be using some kind of locking?
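A standalone sketch of that reference-vs-copy point (the names here are illustrative, not from the original code): passing the dict itself as a single keyword argument means the worker thread keeps a reference to the same object and sees later updates.

import threading
import time

shared = {'1': 1, '2': 2}

def worker(d):
    for _ in range(3):
        print d                  # prints the updated dict, since d is the same object
        time.sleep(1)

t = threading.Thread(target=worker, kwargs={'d': shared})
t.daemon = True
t.start()

for _ in range(3):
    shared['1'] += 1
    time.sleep(1)
t.join()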
|
Python: Send IGMP packets with Scapy
Question: I would like to send IGMP packets using scapy, specifically IGMP Leave, IGMP
Membership report. Is it possible to do so?
**UPDATE:**
I was able to eventually generate them. Had to do the following:
1) Install scapy v.2.2.0 as it's described here (including minor alteration in
setup.py): [scapy's contrib is missing after installing scapy on both windows
and fedora](http://stackoverflow.com/questions/17412909/scapys-contrib-is-
missing-after-installing-scapy-on-both-windows-and-fedora)
2) You need to use file from contribution package (features not added to the
core of scapy):
import scapy.contrib.igmp
igmpPacket = scapy.contrib.igmp.IGMP()
Answer: Yes, it is possible to send IGMP packets. After googling a bit, I came up with
some useful links that can help you in some direction. On github there exists
a [IGMP](https://github.com/d1b/scapy/blob/master/scapy/contrib/igmp.py) and
[IGMPv3](http://bb.secdev.org/scapy/src/0d201eca59df/scapy/contrib/igmpv3.py)
implementation in scapy. Here is an interesting [mailing
list](http://search.gmane.org/?query=igmp&group=gmane.comp.security.scapy.general)
too. Also, this [post](http://blog.mmmonk.net/2012/01/quick-scapy-example-for-
linux-kernel.html) has other interesting stuff related to IGMP.
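For reference, a minimal sketch of sending an IGMPv2 Leave with the contrib module mentioned in the question's update (the destination and group addresses are just illustrative values):

from scapy.all import IP, send
import scapy.contrib.igmp

# type 0x17 is an IGMPv2 "Leave Group"; 0x16 would be a v2 Membership Report
leave = IP(dst="224.0.0.2") / scapy.contrib.igmp.IGMP(type=0x17, gaddr="239.1.1.1")
send(leave)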
|
In Ipython, does return statement works as Print?
Question: I have Python code as follows. I wrote a function with the same name as the
program name and gave an entry point for the function as follows:
import math
def Problem1():
    total=0
    for i in range(1000):
        if i%3 == 0 or i%5==0 or i%6==0 or i%9==0:
            total=total+i
    return total
Now I went to the terminal, ran IPython, and then executed the following
statements: `import Problem1` and then `Problem1.Problem1()`.
It printed the output even though I don't have a print statement; I have given
a return statement, so what is happening here?
Another question: How do I run this directly from the command line, i.e. how
do I give something equivalent to void main() of C in Python?
Answer: What IPython does is print the output if you simply call a function or a
variable:
if you just enter a variable name and hit enter, it will print it. The same
thing is happening for a function's return value. This is standard behavior in an
interactive shell.
The idea behind this is that if you do interactive work you usually want to
see your output directly.
To run this you would put your code in a *.py file where you actually call the
function at the bottom of your file, so you don't need to do void main(). You
then execute the code with the interpreter, i.e.
python yourfile.py
however there is a remotely [similar
pattern](http://stackoverflow.com/questions/419163/what-does-if-name-main-do)
to the void main() thingy
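Applied to the question's function, a minimal sketch of that pattern:

def Problem1():
    total = 0
    for i in range(1000):
        if i % 3 == 0 or i % 5 == 0 or i % 6 == 0 or i % 9 == 0:
            total = total + i
    return total

if __name__ == "__main__":
    print Problem1()    # running "python Problem1.py" now prints the result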
Note that because Python is an interpreted language, you can simply put
print "hello world"
in a *.py file, call it with the python executable, and it works right away.
There is no "main()" function that you need to define.
|
Not able to access twitter after a lot of try
Question: I work on Windows 7 and I am trying to access Twitter using tweepy and even
twitter 1.14.2 for Python, but I am not able to crack it. Help needed.
**TWEEPY**
import tweepy
OAUTH_TOKEN = "defined here"
OAUTH_SECRET = "defined here"
CONSUMER_KEY = "defined here"
CONSUMER_SECRET = "defined here"
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(OAUTH_TOKEN, OAUTH_SECRET)
api = API.GetUserTimeline(screen_name="yyy")
Error : name 'API' is not defined
**TWITTER 1.14.2**
import twitter
from twitter import *
tw = Twitter(auth=OAuth(OAUTH_TOKEN, OAUTH_SECRET,CONSUMER_KEY, CONSUMER_SECRET))
tw.statuses.home_timeline()
tw.statuses.user_timeline(screen_name="yyy")
Error : No module named OAuth
_Where am I going wrong?_
Answer: As there is no `GetUserTimeline` defined/declared in the
[tweepy.API](http://pythonhosted.org/tweepy/html/api.html#tweepy-api-twitter-
api-wrapper) class, I am assuming that you intend to try the `GetUserTimeline`
method of the [twitter.Api](http://inventwithpython.com/twitter.html#Api-
GetUserTimeline) class.
The `API` is exposed via the `twitter.Api` class.
To create an instance of the `twitter.Api` class:
>>> import twitter
>>> api = twitter.Api()
To create an instance of the `twitter.Api` with login credentials.
>>> api = twitter.Api(consumer_key='consumer_key',
consumer_secret='consumer_secret', access_token_key='access_token',
access_token_secret='access_token_secret')
To fetch a single user's public status messages, where `user` is either a
`Twitter "short name" or their user id`
>>> statuses = api.GetUserTimeline(user)
>>> print [s.text for s in statuses]
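If you would rather stay with tweepy, a rough sketch of the equivalent; tweepy's method names are lowercase (e.g. `user_timeline`), so treat the exact call as an assumption to check against the tweepy docs for your version:

import tweepy

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(OAUTH_TOKEN, OAUTH_SECRET)

api = tweepy.API(auth)                          # build an API object from the auth handler
statuses = api.user_timeline(screen_name="yyy")
print [s.text for s in statuses]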
|
Gunicorn is unable to find my wsgi file, not able to load python?
Question: So I am trying to run a gunicorn script, following a tutorial I found on the
Internet.
Here is my folder structure:
(today_project)[littlem@server1 today_project]$ tree . -L 2
.
├── bin
│ ├── activate
│ ├── activate.csh
│ ├── activate.fish
│ ├── activate_this.py
│ ├── django-admin.py
│ ├── django-admin.pyc
│ ├── easy_install
│ ├── easy_install-2.7
│ ├── gunicorn
│ ├── gunicorn_django
│ ├── gunicorn_paster
│ ├── gunicorn_start
│ ├── pip
│ ├── pip2
│ ├── pip2.7
│ ├── python -> python2.7
│ ├── python2 -> python2.7
│ └── python2.7
├── include
│ ├── python2.6 -> /usr/include/python2.6
│ └── python2.7 -> /usr/local/include/python2.7
├── lib
│ ├── python2.6
│ └── python2.7
├── manage.py
├── run
│ └── gunicorn.sock
├── today
│ ├── #app files
├── today_project
│ ├── __init__.py
│ ├── __init__.pyc
│ ├── __pycache__
│ ├── settings.py
│ ├── settings.pyc
│ ├── urls.py
│ ├── urls.pyc
│ ├── wsgi.py
│ └── wsgi.pyc
└── TODO.md
When I run
gunicorn today_project.wsgi:application
from my virtualenv, it works fine. But when I run this script (I pretty much
copied it from somewhere):
#!/bin/bash
NAME="today" # Name of the application
DJANGODIR=~/today_project/today_project # Django project directory
SOCKFILE=~/today_project/run/gunicorn.sock # we will communicte using this unix socket
USER=ferski # the user to run as
GROUP=ferski # the group to run as
NUM_WORKERS=3 # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=today.settings # which settings file should Django use
DJANGO_WSGI_MODULE=today.wsgi # WSGI module name
echo "Starting $NAME as `whoami`"
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
exec ../bin/gunicorn ${DJANGO_WSGI_MODULE}:application \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER --group=$GROUP \
--log-level=debug \
--bind=unix:$SOCKFILE
I have the following error
Traceback (most recent call last):
File "/home/ferski/today_project/lib/python2.7/site-packages/gunicorn/arbiter.py", line 495, in spawn_worker
worker.init_process()
File "/home/ferski/today_project/lib/python2.7/site-packages/gunicorn/workers/base.py", line 106, in init_process
self.wsgi = self.app.wsgi()
File "/home/ferski/today_project/lib/python2.7/site-packages/gunicorn/app/base.py", line 114, in wsgi
self.callable = self.load()
File "/home/ferski/today_project/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 62, in load
return self.load_wsgiapp()
File "/home/ferski/today_project/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load_wsgiapp
return util.import_app(self.app_uri)
File "/home/ferski/today_project/lib/python2.7/site-packages/gunicorn/util.py", line 354, in import_app
__import__(module)
ImportError: No module named today_project.wsgi
2014-04-06 16:09:28 [19420] [INFO] Worker exiting (pid: 19420)
2014-04-06 16:09:28 [19407] [INFO] Shutting down: Master
2014-04-06 16:09:28 [19407] [INFO] Reason: Worker failed to boot.
So I guess it's a problem with the Python version? To be honest, I don't know
what I am doing when I run `export PYTHONPATH=$DJANGODIR:$PYTHONPATH`, but I
guess this is where the problem lies.
Answer: I think your DJANGODIR should just be `~/today_project/`. It's the root of the
tree, not the inner directory.
There are a few other things wrong, too: in particular, since you're running
in a virtualenv, you shouldn't need to change the PYTHONPATH at all.
|
How to import correctly my function from parent folder
Question: I have the following layout:
/project
/thomas
/users
/src
code_alpha.py
/tests
code_beta.py
I tried:
/project
/thomas
/users
__init__.py
/src
code_alpha.py
/tests
code_beta.py
with `from users.src import code_alpha`
also tried:
/project
/thomas
/users
__init__.py
/src
code_alpha.py
__init__.py
/tests
code_beta.py
with `from users.src import code_alpha`
I tried to solve the problem with this
[guide](https://docs.python.org/2/tutorial/modules.html) and some similar
topics here, but could not figure it out. Adding the directory to my path did not
solve the problem.
edit: updated layout.
Answer: Try adding `__init__.py` to the folder that contains `src` (that is, to
`users` in your updated layout).
Then in `code_beta.py` you will be able to write
`from users.src import code_alpha`
|
ImportError: matplotlib requires dateutil; import matplotlib.pyplot as plt
Question: I am new to programming and Python; I keep getting the error below when I run
my program. Someone advised I should use pip to solve it, but I can't get pip
installed using cmd. I succeeded using PowerShell, but still can't make it
work. How do I solve this? Any tips will go a long way. Thanks.
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
from satmc import satmc
File "C:\Python27\starb_models_grid1\satmc.py", line 3, in <module>
import matplotlib.pyplot as plt
File "C:\Python27\lib\site-packages\matplotlib\__init__.py", line 110, in <module>
raise ImportError("matplotlib requires dateutil")
ImportError: matplotlib requires dateutil
I am using version 2.7.3.
Answer: You need to install several dependency packages to get `matplotlib` working correctly:
1. libsvm-3.17.win32-py2.7
2. pyparsing-2.0.1.win32-py2.7
3. python-dateutil-2.2.win32-py2.7
4. pytz-2013.9.win32-py2.7
5. six-1.5.2.win32-py2.7
6. scipy-0.13.3.win32-py2.7
7. numpy-MKL-1.8.0.win32-py2.7
8. Matplotlib
Download all the binaries from this
[link](http://www.lfd.uci.edu/~gohlke/pythonlibs/), install them, and then you
will have a working matplotlib installation.
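Alternatively, if your pip setup is working (for example from PowerShell), the
pure-Python dependencies can usually be installed in one go; this is only a
suggestion, not part of the original answer:
pip install python-dateutil pyparsing pytz six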
|
Trouble Installing Pillow: Why Am I Receiving Errors?
Question: I'm attempting to install pillow via pip (i.e. `sudo pip install pillow`) but
without much success.
Can anyone tell me what the problem is?
Here are the errors I receive:
clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
clang: note: this will be a hard error (cannot be downgraded to a warning) in the future
error: command 'clang' failed with exit status 1
----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/private/tmp/pip_build_root/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-l7hSHg-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /private/tmp/pip_build_root/pillow
Answer: try with
sudo ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install pillow
you will see a lot of warnings but it worked for me.
The unused-argument warning was being treated as a hard error, which prevented
the Pillow install from completing. The environment variable above tells clang
not to treat that warning as an error.
Source: [a/22774631/1142814](http://stackoverflow.com/a/22774631/1142814)
EDIT: I link here the official issue you linked in your comment, maybe it can
be helpful for someone else: <https://github.com/python-
imaging/Pillow/issues/527>
|
Convert black and white array into an image in python?
Question: I have an array of 50x50 elements of which each is either True or False - this
represents a 50x50 black and white image.
I can't convert this into an image. I've tried countless different functions
and none of them work.
import numpy as np
from PIL import Image
my_array = np.array([[True,False,False,False THE DATA IS IN THIS ARRAY OF 2500 elements]])
im = Image.fromarray(my_array)
im.save("results.jpg")
^ This one gives me: "Cannot handle this data type".
I've seen that PIL has some functions but they only convert a list of RGB
pixels and I have a simple black and white array without the other channels.
Answer: First you should make your array 50x50 instead of a 1d array:
my_array = my_array.reshape((50, 50))
Then, to get a standard 8bit image, you should use an unsigned 8-bit integer
dtype:
my_array = my_array.reshape((50, 50)).astype('uint8')
But you don't want the `True`s to be `1`, you want them to be `255`:
my_array = my_array.reshape((50, 50)).astype('uint8')*255
Finally, you can convert to a PIL image:
im = Image.fromarray(my_array)
I'd do it all at once like this:
im = Image.fromarray(my_array.reshape((50,50)).astype('uint8')*255)
|
Determining a functions output as print or write
Question: I need to determine whether a function's output is printed to the console or
written out to a .txt file. Is there a way to do this in Python? I thought
perhaps hasattr('myfunction', '__str__') might work, but it did not.
Examples of functions:
def x():
    print('dog')

def y():
    open("file.txt", "w").write("dog")
Answer: The print function can take more arguments than just what's going to be
written:
print(*objects, sep=' ', end='\n', file=sys.stdout)
So you could write a wrapper function for print that also appends a boolean to
a list that is true if the file was sys.stdout and false otherwise.
import sys
write_history = []
def write(*objects, sep=' ', end='\n', file=sys.stdout):
    print(*objects, sep=sep, end=end, file=file)
    write_history.append(file == sys.stdout)
Then just always use the write function, and when you want to test the output
of a function, simply clear the write_history and call that function.
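A short usage sketch, rewriting the question's two functions on top of the
wrapper (the expected history values assume you call them in this order):
def x():
    write('dog')                      # goes to stdout

def y():
    with open("file.txt", "w") as f:
        write('dog', file=f)          # goes to the file

del write_history[:]                  # reset the history
x()
y()
print(write_history)                  # [True, False]: x printed, y wrote to a file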
|
How can I make itemgetter to take input from list variable?
Question: How can I use itemgetter with list variable instead of integers? For example:
from operator import itemgetter
z = ['foo', 'bar','qux','zoo']
id = [1,3]
I have no problem doing this:
In [5]: itemgetter(1,3)(z)
Out[5]: ('bar', 'zoo')
But it gave error when I do this:
In [7]: itemgetter(id)(z)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-7ba47b19f282> in <module>()
----> 1 itemgetter(id)(z)
TypeError: list indices must be integers, not list
How can I make itemgetter to take input from list variable correctly, i.e.
using `id`?
Answer: When you do:
print itemgetter(id)(z)
you are passing a `list` to `itemgetter`, while it expects indices (integers).
**What can you do?** You can unpack the `list` using `*`:
print itemgetter(*id)(z)
to visualize this better, both following calls are equivalent:
print itemgetter(1, 2, 3)(z)
print itemgetter(*[1, 2, 3])(z)
|
Get value from detect colour python opencv
Question: For my project, I want to control an RC car so that it avoids red obstacles,
using a webcam attached to a `Raspberry Pi`.
I can control my RC car, but I don't know much about `openCV`. I can also
detect selected colours using the webcam, but I do not know how to get the
colour value from the image to my RC car.
The issue that I am having is that my car does not know that a blue object is
in front of the camera.
Here is my code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)

while(1):
    # take each frame
    _, frame = cap.read()
    # convert BGR to HSV
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # define range of blue color in HSV
    lower_blue = np.array([160,50,50], dtype=np.uint8)
    upper_blue = np.array([179,255,255], dtype=np.uint8)
    # Threshold the HSV image to get only blue colors
    mask = cv2.inRange(hsv, lower_blue, upper_blue)
    cv2.imshow('mask', mask)
    k = cv2.waitKey(5) & 0xFF
    if k == 27:
        break

cv2.destroyAllWindows()
Answer: You can get the color values using:
values = img[100,100]
You have to check whether the mask is 255 (the pixel is in range) and then read
the colour from your image (`hsv`).
To iterate over the image you can use:
img.shape
which returns rows, columns, and channels.
In total you get something like:
row, col, chan = img.shape
for r in range(row):
    for c in range(col):
        if mask[r, c] == 255:
            print img[r, c]
You can find more information in the [OpenCV Python
Tutorial](http://docs.opencv.org/trunk/doc/py_tutorials/py_core/py_basic_ops/py_basic_ops.html).
|
Why is module math called "built-in"?
Question: AFAIK, Python `builtins` refers to those exceptions and functions contained in
`__builtins__`:
>>> import builtins # import __builtin__ in Python 2
>>> dir(builtins) # dir(__builtin__) for Python 2
['ArithmeticError', 'AssertionError', 'AttributeError', 'BaseException',
'BlockingIOError', 'BrokenPipeError', 'BufferError', 'BytesWarning',
...many more...
'ord', 'pow', 'print', 'property', 'quit', 'range', 'repr', 'reversed',
'round', 'set', 'setattr', 'slice', 'sorted', 'staticmethod', 'str', 'sum',
'super', 'tuple', 'type', 'vars', 'zip']
But look at the code below (both Python2 and 3 gave the same results):
>>> globals()
{'__name__': '__main__', '__builtins__': <module 'builtins' (built-in)>,
'__loader__': <class '_frozen_importlib.BuiltinImporter'>,
'__doc__': None, '__package__': None}
>>> import threading
>>> import math
>>> globals()
{'__name__': '__main__', '__builtins__': <module 'builtins' (built-in)>,
'__loader__': <class '_frozen_importlib.BuiltinImporter'>,
'__doc__': None, '__package__': None,
'threading': <module 'threading' from C:\\ProgramFiles\\Python3.3\\lib\\threading.py'>,
'math': <module 'math' (built-in)>}
On the last line, module `math` is called `built-in`. Why? What's the
difference between module `math` and other modules like `threading`?
Answer: In your case the `math` module was included into the main Python library
itself (`libpython2.7.{so,dll,dylib}`) when the library was built. This is
possible since the module is written in C rather than pure Python. Other
modules that are like this include `sys` and `posix`.
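You can check which modules were compiled into your interpreter via
`sys.builtin_module_names`; a quick sketch (the exact contents depend on how
your Python was built):
>>> import sys
>>> 'math' in sys.builtin_module_names
True
>>> 'threading' in sys.builtin_module_names
False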
|
how to take variable from different `def()` function to another - python
Question: I have two different functions like :
def messageWindow():
    # all the necessary operations
    feature_matrix_db = zip( B_mean , G_mean , R_mean, cont_list , ene_list , homo_list , cor_list, dis_list)
    return feature_matrix_db

def open():
    #all the necessary operations
    feature_matrix_ip = zip( B_mean1 , G_mean1 , R_mean1, cont_list1 , ene_list1 , homo_list1 , cor_list1 , dis_list1)
    return feature_matrix_ip

def result():
    COLUMNS = 12
    image_count = 0
    resultlist_key = []
    result_list = list()
    i = 0
    a_list = list()
    b_list = list()
    a_list.append(feature_matrix_ip)
    while i < 70:
        b_list.append(feature_matrix_db[i])
        dist = distance.euclidean(a_list,b_list[i])
        result_list.append(dist)
        resultlist_key = OrderedDict(sorted(enumerate(result_list),key=lambda x:x[0])).keys()
        i = i + 1
    res_lst_srt = {'values': result_list,'keys':resultlist_key}
    res_lst_srt['values'], res_lst_srt['keys'] = zip(*sorted(zip(res_lst_srt['values'], res_lst_srt['keys'])))
    key = res_lst_srt['keys']
    for i1,val in enumerate(key):
        if i1 < 4:
            image_count += 1
            r, c = divmod(image_count, COLUMNS)
            im = Image.open(resizelist[val])
            tkimage = ImageTk.PhotoImage(resized)
            myvar = Label(win, image=tkimage)
            myvar.image = tkimage
            myvar.grid(row=r, column=c)
First two `def()` function will return `feature_matrix_db` and
`feature_matrix_ip`, and I want these results to be imported into the next
`def()` function **result**. And it is giving an error like :
im = Image.open(resizelist[val])
File "E:\Canopy\System\lib\site-packages\PIL\Image.py", line 1956, in open
prefix = fp.read(16)
AttributeError: 'numpy.ndarray' object has no attribute 'read'
Any suggestions are welcome. Thanks in advance!
Answer: You probably want to read up about namespacing and scoping in python. There's
some info here: <https://docs.python.org/2/tutorial/classes.html>
For example your `messageWindow()` function:
def messageWindow():
    # all the necessary operations
    feature_matrix_db = zip( B_mean , G_mean , R_mean, cont_list , ene_list , homo_list , cor_list, dis_list)
    return feature_matrix_db
But where are `B_mean`, `G_mean`, `R_mean`, `cont_list`, `ene_list`,
`homo_list`, `cor_list` and `dis_list` defined? You can pass them to
`messageWindow()` as arguments:
def messageWindow(B_mean , G_mean , R_mean, cont_list , ene_list , homo_list , cor_list, dis_list):
    # all the necessary operations
    feature_matrix_db = zip( B_mean , G_mean , R_mean, cont_list , ene_list , homo_list , cor_list, dis_list)
    return feature_matrix_db
or assign some value to them within the function, or assign some value to them
outside of the function using the `global` keyword. But you can't just call
them without having ever said what values they take.
Let's look at the first few lines of `result()`:
def result():
    COLUMNS = 12
    image_count = 0
    resultlist_key = []
    result_list = list()
    i = 0
    a_list = list()
    b_list = list()
    a_list.append(feature_matrix_ip)
Again, where is `feature_matrix_ip` defined? What you could do instead is
this:
feature_matrix_ip = open() #super bad idea to call your function open()
a_list.append(feature_matrix_ip)
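Putting it together, a minimal sketch of passing both return values into
`result()` as parameters (all names are taken from the question; the body of
`result()` stays as you wrote it):
def result(feature_matrix_db, feature_matrix_ip):
    a_list = [feature_matrix_ip]
    # ... rest of the original result() body, unchanged ...

feature_matrix_db = messageWindow()
feature_matrix_ip = open()    # again, consider renaming this function
result(feature_matrix_db, feature_matrix_ip)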
|
TemplateDoesNotExist at /
Question: Hi! Many threads here have the same title, but none of them solved my
problem. I have a Django site and can access /admin (but it looks ugly). But on
/ there appears the following error page (`DEBUG = True` in `settings.py`):
TemplateDoesNotExist at /
index.html
Request Method: GET
Request URL: http://iecl.uni-potsdam.de/
Django Version: 1.4.5
Exception Type: TemplateDoesNotExist
Exception Value:
index.html
Exception Location: /usr/lib/python2.7/dist-packages/django/template/loader.py in find_template, line 138
Python Executable: /usr/bin/python
Python Version: 2.7.3
Python Path:
['/home/python/iecl/lib/python2.7/site-packages/distribute-0.6.28-py2.7.egg',
'/home/python/iecl/lib/python2.7/site-packages/pip-1.2.1-py2.7.egg',
'/usr/lib/python2.7',
'/usr/lib/python2.7/plat-linux2',
'/usr/lib/python2.7/lib-tk',
'/usr/lib/python2.7/lib-old',
'/usr/lib/python2.7/lib-dynload',
'/usr/local/lib/python2.7/dist-packages',
'/usr/lib/python2.7/dist-packages',
'/usr/lib/pymodules/python2.7',
'/home/python/iecl/lib/python2.7/site-packages',
'/home/django/django']
Server time: Mon, 7 Apr 2014 11:28:46 +0200
Template-loader postmortem
Django tried loading these templates, in this order:
Using loader django.template.loaders.filesystem.Loader:
/home/django/django/templates/iecl_dev/index.html (File does not exist)
Using loader django.template.loaders.app_directories.Loader:
/usr/lib/python2.7/dist-packages/django/contrib/auth/templates/index.html (File does not exist)
/usr/lib/python2.7/dist-packages/django/contrib/admin/templates/index.html (File does not exist)
In fact, the file /home/django/django/templates/iecl_dev/index.html does exist
and I also tried `chmod o+r index.html` without success.
The output of `python iecl_dev/manage.py runserver 0.0.0.0:0` is
Validating models...
0 errors found
Django version 1.4.5, using settings 'iecl_dev.settings'
Development server is running at http://0.0.0.0:0/
Quit the server with CONTROL-C.
so everything seems fine here.
What perplexes me: The *.pyc files are created automatically when *.py files
are run, right? After `python iecl_dev/manage.py runserver 0.0.0.0:0` there is
a file `/home/django/django/iecl_dev/settings.pyc` created. But it is not
created when I load the page in my web browser. Does that mean the
settings.py is never loaded? And how can Django claim that a file which exists
does not exist?
Edit¹:
My settings.py looks as follows:
import django.conf.global_settings as DEFAULT_SETTINGS
import os
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ADMINS = (
)
SETTINGS_PATH = os.path.realpath(os.path.dirname(__file__))
MANAGERS = ADMINS
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'oracle'.
        'NAME': 'iecl', # Or path to database file if using sqlite3.
        'USER': 'iecl', # Not used with sqlite3.
        'PASSWORD': '<xxx>', # Not used with sqlite3.
        'HOST': '', # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '', # Set to empty string for default. Not used with sqlite3.
    }
}
TIME_ZONE = 'Europe/Berlin'
LANGUAGE_CODE = 'en-us'
SITE_ID = 1
USE_I18N = True
MEDIA_ROOT = '/var/www/django_media/iecl_dev/media/'
MEDIA_URL = ''
ADMIN_MEDIA_PREFIX = '/media/admin_media/'
SECRET_KEY = '<xxx>'
TEMPLATE_LOADERS = (
'django.template.loaders.filesystem.Loader',
'django.template.loaders.app_directories.Loader',
)
MIDDLEWARE_CLASSES = (
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
)
ROOT_URLCONF = 'iecl_dev.urls'
TEMPLATE_DIRS = (
os.path.join(SETTINGS_PATH, 'templates'),
)
INSTALLED_APPS = (
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sites',
'iecl_dev.showStudents',
'django.contrib.admin',
)
TEMPLATE_CONTEXT_PROCESSORS = DEFAULT_SETTINGS.TEMPLATE_CONTEXT_PROCESSORS + (
)
Edit²:
The contents of `/home/django/django/` are as follows:
/home/django/django/:
iecl2
iecl_dev
templates
/home/django/django/iecl2:
__init__.py
__init__.pyc
manage.py
settings.py
settings.pyc
showStudents
urls.py
urls.pyc
/home/django/django/iecl2/showStudents:
__init__.py
__init__.pyc
admin.py
context_processors.py
models.py
models.pyc
views.py
views.pyc
/home/django/django/iecl_dev:
__init__.py
__init__.pyc
manage.py
piwik
settings.py
settings.pyc
showStudents
urls.py
urls.pyc
/home/django/django/iecl_dev/piwik:
__init__.py
app_settings.py
context_processors.py
models.py
tests.py
urls.py
views.py
/home/django/django/iecl_dev/showStudents:
__init__.py
__init__.pyc
admin.py
context_processors.py
models.py
models.pyc
views.py
/home/django/django/templates:
iecl
iecl_dev
/home/django/django/templates/iecl:
500.html
admin
changePW.html
editStudent.html
feedback.html
feedback_thanks.html
index.html
location.html
login.html
page.html
password_changed.html
showStudent.html
studentsOverview.html
/home/django/django/templates/iecl/admin:
404.html
500.html
actions.html
app_index.html
auth
base.html
base_site.html
change_form.html
change_list.html
change_list_results.html
date_hierarchy.html
delete_confirmation.html
delete_selected_confirmation.html
edit_inline
filter.html
includes
index.html
invalid_setup.html
login.html
object_history.html
pagination.html
prepopulated_fields_js.html
search_form.html
showStudents
submit_line.html
/home/django/django/templates/iecl/admin/auth:
user
/home/django/django/templates/iecl/admin/auth/user:
add_form.html
change_password.html
/home/django/django/templates/iecl/admin/edit_inline:
stacked.html
tabular.html
/home/django/django/templates/iecl/admin/includes:
fieldset.html
/home/django/django/templates/iecl/admin/showStudents:
pagecontent
userpagecontent
/home/django/django/templates/iecl/admin/showStudents/pagecontent:
change_form.html
/home/django/django/templates/iecl/admin/showStudents/userpagecontent:
change_form.html
/home/django/django/templates/iecl_dev:
500.html
__init__.py
admin
changePW.html
editStudent.html
feedback.html
feedback_thanks.html
index.html
location.html
login.html
page.html
password_changed.html
piwik
showStudent.html
studentsOverview.html
/home/django/django/templates/iecl_dev/admin:
404.html
500.html
actions.html
app_index.html
auth
base.html
base_site.html
change_form.html
change_list.html
change_list_results.html
date_hierarchy.html
delete_confirmation.html
delete_selected_confirmation.html
edit_inline
filter.html
includes
index.html
invalid_setup.html
login.html
object_history.html
pagination.html
prepopulated_fields_js.html
search_form.html
showStudents
submit_line.html
/home/django/django/templates/iecl_dev/admin/auth:
user
/home/django/django/templates/iecl_dev/admin/auth/user:
add_form.html
change_password.html
/home/django/django/templates/iecl_dev/admin/edit_inline:
stacked.html
tabular.html
/home/django/django/templates/iecl_dev/admin/includes:
fieldset.html
/home/django/django/templates/iecl_dev/admin/showStudents:
pagecontent
userpagecontent
/home/django/django/templates/iecl_dev/admin/showStudents/pagecontent:
change_form.html
/home/django/django/templates/iecl_dev/admin/showStudents/userpagecontent:
change_form.html
/home/django/django/templates/iecl_dev/piwik:
tracking.html
Edit³:
Ok. This is solved for me now. The solution was a conglomerate of different
things. One of the problems was the missing rights. The user that executes
Django, could not list the contents of the templates/ directory. `chmod o+x
templates/` did the job. Then there were some settings in the settings.py,
that changed the place to look for the templates from templates/iecl_dev/ to
iecl_dev/templates/. I saw this wrong path in the error message in my web
browser. But simply reverting the settings.py to the old version, was not
enough. Some service(s) needed to be restarted. I simply rebooted the machine
and everything was fine. Miraculously, the /admin/ page now looks nice as
well.
Many thanks for all your tips.
Answer: From what you've shown, I think you may have a problem with your project
layout.
Usually, it is set as follows :
├── manage.py
├── yourproject
│ ├── __init__.py
│ ├── settings.py
│ ├── templates
│ ├── urls.py
│ ├── wsgi.py
│ └── wsgi.pyc
├── yourfirstapp
│ ├── __init__.py
│ ├── admin.py
│ ├── models.py
│ ├── templates
│ ├── tests.py
│ ├── urls.py
│ └── views.py
├── yoursecondapp
├── __init__.py
├── admin.py
├── models.py
├── templates
├── tests.py
├── urls.py
└── views.py
As you can see, each app inside your project can have its own `templates`
directory. `yourproject` is a bit particular, because it also stores unique
files, such as `settings.py` and `wsgi.py`. However, you can consider and use
it as an app.
Now, if you want to use a template stored in `yourproject/templates`, you'll
have to add `yourproject` to your `INSTALLED_APPS` setting.
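For example (the package name here is just a placeholder for your actual
project package):
INSTALLED_APPS = (
    # ... existing apps ...
    'yourproject',
)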
Does that help? If not, can you edit your original post with your project
layout?
|
How to fit parser in my python script
Question: Normally, if I have to perform an overlap between multiple files, I just
execute:
python -c 'import sys;print "".join(sorted(set.intersection(*[set(open(a).readlines()) for a in sys.argv[1:]])))' File1 File2 File3 File4 ....
But if I have to get this through a parser, how can I do it?
A short example around this script will be enough, and after that I can play
around. Essentially I want the user to give two inputs: the files on which the
operation is to be performed, and the operation itself (here it is
intersection).
Kindly help, instead of voting to close the post.
I think I have to show what I am doing, and here is an example of that:
usage = "Usage: %prog -i <file1>[,<file2>,...] -o <out> -ops "
version = "%prog " + str(VERSION)
parser = OptionParser(version=version, usage=usage)
group1 = OptionGroup(parser, 'Optional')
parser.add_option("-i",
dest="featurefile",
help="Gene list",
metavar="FILE")
parser.add_option("-o",
dest="outfile",
help="output file (type determined by extension)",
metavar="FILE")
# Optional arguments
group1.add_option("--op",
dest="operation",
help="Intersection, Union & Uniques (default Intersection)",
default=None,
type="string")
parser.add_option_group(group1)
(options, args) = parser.parse_args()
Now this set is parser thing (done), now the second step is to get the
operation (intersection) and the files and perform them. At this step I am
lost.
Thank you
Answer: Since you are using Python 2.7, use the
[argparse](https://docs.python.org/2.7/library/argparse.html) module instead
of the older `optparse`:
import argparse
import itertools as IT

VERSION = 0.1
version = "%(prog)s " + str(VERSION)

operation = {
    'intersection': set.intersection,
    'union': set.union,
    'uniques': set.difference}

parser = argparse.ArgumentParser()
parser.add_argument('--version', action='version', version=version)
parser.add_argument('-i', '--featurefiles', nargs='+', help='Gene list',
                    metavar='FILE')
parser.add_argument('-o', '--outfile', help="output file (type determined by extension)",
                    metavar='FILE')
parser.add_argument('--op', dest="operation",
                    help="Intersection, Union & Uniques (default Intersection)",
                    default='intersection',
                    # can't use type=operation.get. See http://bugs.python.org/issue16516
                    type=lambda x: operation[x])
args = parser.parse_args()
print(args)

if args.outfile:
    with open(args.outfile, 'w') as f:
        for group_size in (len(args.featurefiles), 2):
            for group in IT.combinations(args.featurefiles, group_size):
                result = ''.join(sorted(reduce(
                    args.operation,
                    (set(open(filename))
                     for filename in group))))
                f.write('{g}:\n{r}\n'.format(g=str(group), r=result))
You could call the script from the CLI like this:
script.py -i file1 file2 file3 --op uniques -o /tmp/out
References to tools used above:
* [argparse](https://docs.python.org/2.7/library/argparse.html)
* [itertools.combinations](https://docs.python.org/2/library/itertools.html#itertools.combinations)
* [reduce](https://docs.python.org/2/library/functions.html#reduce)
* [with-statement](http://effbot.org/zone/python-with-statement.htm)
* [str format method](http://docs.python.org/library/string.html#format-string-syntax)
|
Strip HTML tags to get strings in python
Question: I tried to get some strings from an HTML file with BeautifulSoup and every time
I work with it I get partial results.
I want to get the strings in every li element/tag. So far I've been able to
get everything in ul like this.
#!/usr/bin/python
from bs4 import BeautifulSoup
page = open("page.html")
soup = BeautifulSoup(page)
source = soup.select(".sidebar li")
And what I get is this:
[<li class="first">
Def Leppard - Make Love Like A Man<span>Live</span> </li>, <li>
Inxs - Never Tear Us Apart </li>, <li>
Gary Moore - Over The Hills And Far Away </li>, <li>
Linkin Park - Numb </li>, <li>
Vita De Vie - Basul Si Cu Toba Mare </li>, <li>
Nazareth - Love Hurts </li>, <li>
U2 - I Still Haven't Found What I'm L </li>, <li>
Blink 182 - All The Small Things </li>, <li>
Scorpions - Wind Of Change </li>, <li>
Iggy Pop - The Passenger </li>]
I want to get only the strings from this.
Answer: Use Beautiful Soup's `.strings` / `.stripped_strings` generators.
for string in soup.stripped_strings:
    print(repr(string))
from the docs:
> If there’s more than one thing inside a tag, you can still look at just the
> strings. Use the .strings generator:
or
> These strings tend to have a lot of extra whitespace, which you can remove
> by using the .stripped_strings generator instead:
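Applied to the selection from the question, a short sketch using `get_text`
(part of the BeautifulSoup 4 API; the " " separator keeps nested tags such as
the <span> from being glued onto the song title):
source = soup.select(".sidebar li")
titles = [li.get_text(" ", strip=True) for li in source]
print(titles)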
|
Partial fit multivariate SGDRegressor
Question: I am currently trying to use the SGDRegressor from scikits learn to solve a
multivariate target problem over a large dataset, X ~= (10^6,10^4). As such I
am generating the design matrix (X) in parts with the following code, where
each iteration produces a batch of size roughly (10^3,10^4):
design = self.__iterX__(events)
reglins = [linear_model.SGDRegressor(fit_intercept=True) for i in range(nTargets)]
for X,times in design:
    for i in range(nTargets):
        reglins[i].partial_fit(X,y.ix[times].values[:,i])
However I get the following stack trace:
File ".../Enthought/Canopy_64bit/User/lib/python2.7/site- packages/sklearn/linear_model/stochastic_gradient.py", line 841, in partial_fit
coef_init=None, intercept_init=None)
File ".../Enthought/Canopy_64bit/User/lib/python2.7/site-packages/sklearn/linear_model/stochastic_gradient.py", line 812, in _partial_fit
sample_weight, n_iter)
File ".../Enthought/Canopy_64bit/User/lib/python2.7/site-packages/sklearn/linear_model/stochastic_gradient.py", line 948, in _fit_regressor
intercept_decay)
File "sgd_fast.pyx", line 508, in sklearn.linear_model.sgd_fast.plain_sgd (sklearn/linear_model/sgd_fast.c:8651)
ValueError: floating-point under-/overflow occurred.
Looking around it seems that this can be cause by not normalizing X properly.
I understand scikits learn has a variety of functions for this, however given
that I generate X in blocks, is it enough to simply normalize each block or
would I need to figure out a way to normalize whole columns at a time?
Incidentally, is there a particular reason that the partial_fit function does
not allow multivariate targets?
Answer: You can fit one block and apply to others:
from sklearn import preprocessing

scaler = preprocessing.StandardScaler()
x1 = scaler.fit_transform(X_block_1)   # fit on the first block ...
xn = scaler.transform(X_block_n)       # ... then apply the same scaling to the other blocks
You can choose other normalization methods [from this page](http://scikit-
learn.org/stable/modules/preprocessing.html).
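If no single block is representative of the whole data set, `StandardScaler`
also has a `partial_fit` method in recent scikit-learn versions (check that
your version provides it before relying on this sketch; `blocks` below stands
for your iterable of design-matrix batches):
scaler = preprocessing.StandardScaler()
for X_block in blocks:            # first pass: accumulate running mean/variance
    scaler.partial_fit(X_block)
for X_block in blocks:            # second pass: scale each block consistently
    X_scaled = scaler.transform(X_block)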
|
Python: How to (not) keep a COM-object persistent in memory after closing the parent application?
Question: I am working on a (pretty big) Python/Tkinter application (Python 2.7 under
Windows 7) which (among many other things) calls Matlab via the COM interface.
The basic structure of the Matlab/COM part is like this:
import Tkinter
import pythoncom
import win32com.client
class App( object ):
    def __init__( self, parent ):
        Tkinter.Button( root, text="Start Matlab", command=self.start_matlab ).grid()

    def start_matlab( self ):
        self.matlab = win32com.client.Dispatch( "Matlab.Application" )
root = Tkinter.Tk()
App( root )
root.mainloop()
The behaviour I observe with this simplified code is: running the application
and clicking the button creates a Matlab instance (opens a Matlab command
window), and when closing the Tkinter application that Matlab window and
the corresponding entry in the Task Manager also disappear. Repeating the
procedure, Matlab is started afresh. However, when I do "the same" with my
"real" application, the Matlab instance persists after closing my application
and, moreover, when I restart the application and run the part which "starts"
Matlab, it just retrieves and uses the instance which remained in memory after
quitting the first session of my app. Unfortunately, I am not able to isolate
a reasonably small code example showing the latter behavior :(
Does anybody have an idea what the reason for this is/could be?
How can one control whether a COM-object is killed or persists in memory when
the parent Python application which created it is closed?
Answer: Here's how to explicitly remove the COM object, using Tkinter's protocol
handler:
import Tkinter
import pythoncom
import win32com.client
class App( object ):
    def __init__( self, parent ):
        self.parent = parent #reference to root
        Tkinter.Button( root, text="Start Matlab", command=self.start_matlab ).grid()
        self.parent.protocol('WM_DELETE_WINDOW', self.closeAll) #protocol method

    def start_matlab( self ):
        self.matlab = win32com.client.Dispatch( "Matlab.Application" )

    def closeAll(self):
        del self.matlab #delete the COM object
        self.parent.destroy() #close the window
root = Tkinter.Tk()
App( root )
root.mainloop()
Reference: [Removing COM object from
memory](https://mail.python.org/pipermail/python-
win32/2008-February/007018.html)
[More on protocols](http://effbot.org/tkinterbook/tkinter-events-and-
bindings.htm)
|
python: how to use 'import' on character string variable?
Question: I want to create a function which uses import, like the following:
def test_module(module_name):
    try:
        import module_name
    except ImportError:
        print 'import '+module_name+' error'

test_module('os')
however, the function doesn't work because import seems not to recognize the
string variable. are there any workarounds?
thanks!
Answer: `imported = __import__(module_name)` will work.
Or you can use
[`importlib.import_module`](https://docs.python.org/2/library/importlib.html#importlib.import_module).
The difference is in convenience: in case of import like `module_name =
'top.lower'`
`__import__` will return `top`, whereas `importlib.import_module` will return
`lower`:
>>> __import__('os.path')
<module 'os' from nowhere>
>>> importlib.import_module('os.path')
<module 'ntpath' from nowhere>
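For the function from the question, a minimal sketch using `importlib` (same
names as in the question):
import importlib

def test_module(module_name):
    try:
        return importlib.import_module(module_name)
    except ImportError:
        print 'import ' + module_name + ' error'

os_module = test_module('os')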
|
Python 3.3 no module named MySQLdb, cymysql installed
Question: Working through the introductory Django project tutorial on the official site.
I have successfully installed the CyMySQL 0.5.5 connector through pip. The
server status is "Running". For the command `python manage.py syncdb` this is
what powershell returns. Could this have something to do with the DATABASES
section of `settings.py` file having wrong parameters? :
C:\Python33\lib\site-packages\win32\lib\pywintypes.py:39: DeprecationWarning: imp.get_suffixes() is deprecated; use
constants defined on importlib.machinery instead
for suffix_item in imp.get_suffixes():
Traceback (most recent call last):
File "C:\Python33\lib\site-packages\django\db\backends\mysql\base.py", line 14, in <module>
import MySQLdb as Database
ImportError: No module named 'MySQLdb'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "C:\Python33\lib\site-packages\django\core\management\__init__.py", line 399, in execute_from_command_line
utility.execute()
File "C:\Python33\lib\site-packages\django\core\management\__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "C:\Python33\lib\site-packages\django\core\management\base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "C:\Python33\lib\site-packages\django\core\management\base.py", line 280, in execute
translation.activate('en-us')
File "C:\Python33\lib\site-packages\django\utils\translation\__init__.py", line 130, in activate
return _trans.activate(language)
File "C:\Python33\lib\site-packages\django\utils\translation\trans_real.py", line 188, in activate
_active.value = translation(language)
File "C:\Python33\lib\site-packages\django\utils\translation\trans_real.py", line 177, in translation
default_translation = _fetch(settings.LANGUAGE_CODE)
File "C:\Python33\lib\site-packages\django\utils\translation\trans_real.py", line 159, in _fetch
app = import_module(appname)
File "C:\Python33\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1584, in _gcd_import
File "<frozen importlib._bootstrap>", line 1565, in _find_and_load
File "<frozen importlib._bootstrap>", line 1532, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 584, in _check_name_wrapper
File "<frozen importlib._bootstrap>", line 1022, in load_module
File "<frozen importlib._bootstrap>", line 1003, in load_module
File "<frozen importlib._bootstrap>", line 560, in module_for_loader_wrapper
File "<frozen importlib._bootstrap>", line 868, in _load_module
File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed
File "C:\Python33\lib\site-packages\django\contrib\admin\__init__.py", line 6, in <module>
from django.contrib.admin.sites import AdminSite, site
File "C:\Python33\lib\site-packages\django\contrib\admin\sites.py", line 4, in <module>
from django.contrib.admin.forms import AdminAuthenticationForm
File "C:\Python33\lib\site-packages\django\contrib\admin\forms.py", line 6, in <module>
from django.contrib.auth.forms import AuthenticationForm
File "C:\Python33\lib\site-packages\django\contrib\auth\forms.py", line 17, in <module>
from django.contrib.auth.models import User
File "C:\Python33\lib\site-packages\django\contrib\auth\models.py", line 48, in <module>
class Permission(models.Model):
File "C:\Python33\lib\site-packages\django\db\models\base.py", line 96, in __new__
new_class.add_to_class('_meta', Options(meta, **kwargs))
File "C:\Python33\lib\site-packages\django\db\models\base.py", line 264, in add_to_class
value.contribute_to_class(cls, name)
File "C:\Python33\lib\site-packages\django\db\models\options.py", line 124, in contribute_to_class
self.db_table = truncate_name(self.db_table, connection.ops.max_name_length())
File "C:\Python33\lib\site-packages\django\db\__init__.py", line 34, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
File "C:\Python33\lib\site-packages\django\db\utils.py", line 198, in __getitem__
backend = load_backend(db['ENGINE'])
File "C:\Python33\lib\site-packages\django\db\utils.py", line 113, in load_backend
return import_module('%s.base' % backend_name)
File "C:\Python33\Lib\importlib\__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "C:\Python33\lib\site-packages\django\db\backends\mysql\base.py", line 17, in <module>
raise ImproperlyConfigured("Error loading MySQLdb module: %s" % e)
django.core.exceptions.ImproperlyConfigured: Error loading MySQLdb module: No module named 'MySQLdb'
Answer: Please use django-cymysql together with the latest cymysql:
django-cymysql: <https://github.com/nakagami/django-cymysql/>
cymysql 0.7.2: <https://pypi.python.org/pypi/cymysql/0.7.2>
and set the `ENGINE` in your `DATABASES` setting to `'mysql_cymysql'`.
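A sketch of what the `DATABASES` section in `settings.py` would then look like
(the engine name comes from the answer above; the database name and credentials
are placeholders):
DATABASES = {
    'default': {
        'ENGINE': 'mysql_cymysql',
        'NAME': 'mydatabase',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '',
    }
}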
|
Oauth using Birdy Python on Flask: Not able to figure out what to do with the Access Token
Question: This is my code to fetch tweets based on a keyword. Here I am trying to
authenticate using the Birdy OAuth API. The problem is that I don't understand
why it says my token is expired when I enter my application's token.
**Please give specific feedback on what seems to be the issue here**
import datetime
from config import CONSUMER_KEY, CONSUMER_SECRET, SAVED_ACCESS_TOKEN
#Saved access token is a list of containing my apps Access token and
#Access token secret as its elements
from birdy.twitter import AppClient
from app import app
from app import db
from models import TweetInfo
#Create an instance of appclient for the application
client = AppClient(CONSUMER_KEY, CONSUMER_SECRET, SAVED_ACCESS_TOKEN)
#Add the keywords here which need to be searched over Twitter
QUERIES = ['#Vespa', '@Vespa']
#A method to fetch all the new entries for statuses
#regarding the keywords queried that do not have an entry in the DataBase
def get_new_tweets():
    """Return number of tweets if any and save them in Database"""
    statuses = []
    for query in QUERIES:
        response = client.api.search.tweets.get(q = query, count = 100)
        statuses += response.data.statuses
    latest_tweets = 0
    for status in statuses:
        if not db.session.query(TweetInfo).filter(TweetInfo.domain_id == status.id_str).count(): #check whether a tweet is already present in the DataBase
            created_at = datetime.datetime.strptime(status.created_at, r"%a %b %d %H:%M:%S +0000 %Y")
            o = TweetInfo(tweet = status.text,
                          posted_by = '{} ({})'.format(status.user.screen_name,
                                                       status.user.followers_count),
                          recorded_at = datetime.datetime.now(),
                          occured_at = created_at,
                          domain_id = status.id_str)
            latest_tweets += 1
            db.session.add(o)
    db.session.commit()
    return latest_tweets
Answer: The access token is only valid for some time, so you have to get a new one
with each connection. Birdy will save it automatically. Remember that with this
kind of auth you can only read.
Replace this
#Create an instance of appclient for the application
client = AppClient(CONSUMER_KEY, CONSUMER_SECRET, SAVED_ACCESS_TOKEN)
With this
client = AppClient(CONSUMER_KEY, CONSUMER_SECRET)
access_token = client.get_access_token()
|
using pygame to stream over sockets in python error
Question: I am working with a webcam script I got off the internet, written in Python
using the pygame module. The code is:
import socket
import pygame
import sys
port=5014
#create pygame screen
screen = pygame.display.set_mode((800,600),0)
while True:
    s=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("",port)) # server is available on the whole network by setting host to ""
    s.listen(1)
    connection, addr = s.accept()
    received = []
    # loop .recv, it returns empty string when done, then transmitted data is completely received
    while True:
        recvd_data = connection.recv(1440021)
        if not recvd_data:
            break
        else:
            received.append(recvd_data)
    dataset = ''.join(received)
    image = pygame.image.fromstring(dataset,(800,600),"RGB") # convert received image from string
    #image = pygame.transform.scale(image,(800,600)) # scale image to 800*600
    screen.blit(image,(0,0)) # "show image" on the screen
    pygame.display.update()
    # check for quit events
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
and the client code is
import socket
import pygame
import pygame.camera
import sys
import time
host = "localhost"
port = 5014
pygame.init()
pygame.camera.init()
cam_list = pygame.camera.list_cameras() # list available cameras
webcam = pygame.camera.Camera(cam_list[0],(800,600)) # use first camera in list and set resolution
webcam.start() # start camera
while True:
    image = webcam.get_image() # capture image
    data = pygame.image.tostring(image,"RGB") # convert captured image to string, use RGB color scheme
    #print sys.getsizeof(data) # in case somebody wants to know the size of the captured image
    # prepare for connection to server
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) # TCP is used
    s.connect((host, port))
    s.sendall(data)
    s.close()
    time.sleep(0.1)
The error I get on the server is:
Traceback (most recent call last):
File "/root/Desktop/serv.py", line 29, in <module>
image = pygame.image.fromstring(dataset,(800,600),"RGB") # convert received image from string
ValueError: String length does not equal format and resolution size
and the error I get on the client is:
Traceback (most recent call last):
File "/root/Desktop/cli.py", line 28, in <module>
s.sendall(data)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 104] Connection reset by peer
Does anyone know what could be wrong?
Answer: I was using the same code and getting a similar error; the solution was to
lower the resolution of the webcam, because mine could not handle 800x600.
I also swapped the "server" and "client" roles so that the webcam machine acts
as the socket server.
Try the following code, and make sure that your video device is correct; in my
example it is "/dev/video0", yours could be different. Start the webcam server
first.
Webcam server:
import socket
import pygame
import pygame.camera
import sys
import time
port = 5000
pygame.init()
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind(("",port))
serversocket.listen(1)
pygame.camera.init()
webcam = pygame.camera.Camera("/dev/video0",(320,240))
webcam.start()
while True:
    connection, address = serversocket.accept()
    image = webcam.get_image() # capture image
    data = pygame.image.tostring(image,"RGB") # convert captured image to string, use RGB color scheme
    connection.sendall(data)
    time.sleep(0.1)
    connection.close()
Client server:
import socket
import pygame
import sys
host = "10.0.0.13"
port=5000
screen = pygame.display.set_mode((320,240),0)
while True:
    clientsocket=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    clientsocket.connect((host, port))
    received = []
    # loop .recv, it returns empty string when done, then transmitted data is completely received
    while True:
        #print("waiting to receive data")
        recvd_data = clientsocket.recv(230400)
        if not recvd_data:
            break
        else:
            received.append(recvd_data)
    dataset = ''.join(received)
    image = pygame.image.fromstring(dataset,(320,240),"RGB") # convert received image from string
    screen.blit(image,(0,0)) # "show image" on the screen
    pygame.display.update()
    # check for quit events
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
|
python csv replace listitem
Question: I have the following output from a CSV file:
word1|word2|word3|word4|word5|word6|01:12|word8
word1|word2|word3|word4|word5|word6|03:12|word8
word1|word2|word3|word4|word5|word6|01:12|word8
What I need to do is change the time string to look like 00:01:12. My idea is
to extract list item [7] and prepend the string "00:" to it.
import csv
with open('temp', 'r') as f:
    reader = csv.reader(f, delimiter="|")
    for row in reader:
        fixed_time = (str("00:") + row[7])
        begin = row[:6]
        end = row[:8]
        print begin + fixed_time +end
I get this error message:
TypeError: can only concatenate list (not "str") to list.
I also had a look at this post:
[how to change [1,2,3,4] to '1234' using
python](http://stackoverflow.com/questions/2597932/how-to-
change-1-2-3-4-to-1234-using-python/2597937#2597937)
I need to know whether my approach to the solution is the right way; maybe I
need to use split or something else for this.
Thanks for any help.
Answer: The line that's throwing the exception is
print begin + fixed_time +end
because `begin` and `end` are **both lists** and `fixed_time` is a string.
Whenever you take a **slice** of a list (that's the `row[:6]` and `row[:8]`
parts), a list is returned. If you just want to print it out, you can do
print begin, fixed_time, end
and you won't get an error.
### Corrected code:
I'm opening a new file for writing (I'm calling it 'final', but you can call
it whatever you want), and I'm just writing everything to it with the one
modification. It's easiest to just change the one element of the list that has
the line (`row[6] here`), and use `'|'.join` to write a pipe character between
each column.
import csv
with open('temp', 'r') as f, open('final', 'w') as fw:
    reader = csv.reader(f, delimiter="|")
    for row in reader:
        # just change the element in the row to have the extra zeros
        row[6] = '00:' + row[6]
        # write the row back out, separated by | characters, and a new line
        fw.write('|'.join(row) + '\n')
|
Nginx and Flask-socketio Websockets: Alive but not Messaging?
Question: I've been having a bit of trouble getting Nginx to play nicely with the Python
Flask-socketio library (which is based on gevent). Currently, since we're
actively developing, I'm trying to get Nginx to just work as a proxy. For
sending pages, I can get this to work, either by directly running the flask-
socketio app, or by running through gunicorn. One hitch: the websocket
messaging does not seem to work. The pages are successfully hosted and
displayed. However, when I try to use the websockets, they do not work. They
are alive enough that the websocket thinks it is connected, but they will not
send a message. If I remove the Nginx proxy, they do work. Firefox gives me
this error when I try to send a message:
> Firefox can't establish a connection to the server at
> ws://<web address>/socket.io/1/websocket/<unique id>.
Where <web address> is where the server is located and <unique id> is just a
bunch of randomish digits. It seems to be doing enough to keep the connection
live (e.g., the client thinks it is connected), but can't send a message over
the websocket. I have to think that the issue has to do with some part of the
proxy, but am having mighty trouble debugging what the issue might be (in part
because this is my first go-round with both Flask-socketIO and nginx). The
configuration file I am using for nginx is:
user <user name>; ## This is set to the user name for the remote SSH session
worker_processes 5;
events {
worker_connections 1024; ## Default: 1024
}
http {
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] $status '
'"$request" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
sendfile on;
server_names_hash_bucket_size 128; # this seems to be required for some vhosts
server {
listen 80;
server_name _;
location / {
proxy_pass http://localhost:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
}
}
I made the config file as an amalgam of a general example and a websocket
specific one, but trying to fiddle with it has not solved the issue. Also, I
am using the werkzeug Proxy_Fix call on my Flask app.wsgi_app when I use it in
wsgi mode. I've tried it with and without that, to no avail, however. If
anyone has some insight, I will be all ears/eyes.
Answer: I managed to fix this. The issues were not specific to flask-socketio, but
they were specific to Ubuntu, NginX, and gevent-socketio. Two significant
issues were present:
1. Ubuntu 12.04 has a truly ancient version of nginx (1.1.19 vs 1.6.x for stable versions). Why? Who knows. What we do know is that this version does not support websockets in any useful way, as 1.3.13 is [about the earliest](https://github.com/LearnBoost/socket.io/wiki/Nginx-and-Socket.io) you should be using.
2. By default, gevent-socketio expects your sockets to be at the location /socket.io . You can upgrade the whole HTTP connection, but I had some trouble getting that to work properly (especially after I threw SSL into the mix).
3. I fixed #1, but in fiddling with it I purged by nginx and apt-get installed... the default version of nginx on Ubuntu. Then, I was mysteriously confused as to why things worked even worse than before. Many .conf files valiantly lost their lives in this battle.
If trying to debug websockets in this configuration, I would recommend the
following steps:
1. Check your nginx version via 'nginx -v'. If it is anything less than 1.4, upgrade it.
2. Check your nginx.conf settings. You need to make sure the connection upgrades.
3. Check that your server IP and port match your nginx.conf reverse proxy.
4. Check that your client (e.g., socketio.js) connects to the right location and port, with the right protocol.
5. Check your blocked ports. I was on EC2, so you have to manually open 80 (HTTP) and 443 (SSL/HTTPS).
Having just checked all of these things, there are takeaways.
1. Upgrading to the latest stable nginx version on Ubuntu ([full ref](https://www.digitalocean.com/community/articles/how-to-install-the-latest-version-of-nginx-on-ubuntu-12-10)) can be done by:
sudo apt-get install python-software-properties
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx
In systems like Windows, you can use an
[installer](http://nginx.org/en/download.html) and will be less likely to get
a bad version.
2. Many config files for this can be confusing, since nginx officially added sockets in about 2013, making earlier workaround configs obsolete. Existing config files don't tend to cover all the bases for nginx, gevent-socketio, and SSL together, but have them all separately ([Nginx Tutorial](https://chrislea.com/2013/02/23/proxying-websockets-with-nginx/), [Gevent-socketio](http://gevent-socketio.readthedocs.org/en/latest/server_integration.html), [Node.js with SSL](https://www.exratione.com/2013/06/websockets-over-ssl-with-nodejs-and-nginx/)). A config file for nginx 1.6 with flask-socketio (which wraps gevent-socketio) and SSL is:
user <user account, probably optional>;
worker_processes 2;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
access_log /var/log/nginx/access.log;
sendfile on;
# tcp_nopush on;
keepalive_timeout 3;
# tcp_nodelay on;
# gzip on;
client_max_body_size 20m;
index index.html;
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
# Listen on 80 and 443
listen 80 default;
listen 443 ssl; (only needed if you want SSL/HTTPS)
server_name <your server name here, optional unless you use SSL>;
# SSL Certificate (only needed if you want SSL/HTTPS)
ssl_certificate <file location for your unified .crt file>;
ssl_certificate_key <file location for your .key file>;
# Optional: Redirect all non-SSL traffic to SSL. (if you want ONLY SSL/HTTPS)
# if ($ssl_protocol = "") {
# rewrite ^ https://$host$request_uri? permanent;
# }
# Split off basic traffic to backends
location / {
proxy_pass http://localhost:8081; # 127.0.0.1 is preferred, actually.
proxy_redirect off;
}
location /socket.io {
proxy_pass http://127.0.0.1:8081/socket.io; # 127.0.0.1 is preferred, actually.
proxy_redirect off;
proxy_buffering off; # Optional
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
}
}
}
3. Checking that your Flask-socketio is using the right port is easy. This is sufficient to work with the above:
from flask import Flask, render_template, session, request, abort
import flask.ext.socketio

FLASK_CORE_APP = Flask(__name__)
FLASK_CORE_APP.config['SECRET_KEY'] = '12345' # Luggage combination
SOCKET_IO_CORE = flask.ext.socketio.SocketIO(FLASK_CORE_APP)

@FLASK_CORE_APP.route('/')
def index():
    return render_template('index.html')

@SOCKET_IO_CORE.on('message')
def receive_message(message):
    return "Echo: %s"%(message,)

SOCKET_IO_CORE.run(FLASK_CORE_APP, host='127.0.0.1', port=8081)   # host must be a string
4. For a client such as socketio.js, connecting should be easy. For example:
<script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/0.9.16/socket.io.min.js"></script>
<script type="text/javascript">
    var url = window.location.protocol + '//' + document.domain + ':' + location.port,
        socket = io.connect(url);
    socket.on('message', alert);
    socket.emit("message", "Test");
</script>
5. Opening ports is really more of a [server-fault](http://serverfault.com/search?q=unblock%20port) or a [superuser](http://superuser.com/search?q=unblock%20port) issue, since it will depend a lot on your firewall. For Amazon EC2, see [here](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html).
6. If trying all of this does not work, cry. Then return to the top of the list. Because you might just have accidentally reinstalled an older version of nginx.
|
Docusign: Python code not able to upload PDF formats
Question: I have implemented embedded signing using Python (I followed the code given in
the `Docusign` samples). Everything works fine with `.txt` files, but any other
format gives me an encoding error.
UnicodeDecodeError: 'ascii' codec can't decode byte 0x9c in position 351: ordinal not in range(128)
Code:
filepath = os.path.join(settings.MEDIA_ROOT, documentName)
fileContents = open(filepath, "r").read()
requestBody = "\r\n\r\n--BOUNDARY\r\n" + \
"Content-Type: application/json\r\n" + \
"Content-Disposition: form-data\r\n" + \
"\r\n" + \
envelopeDef + "\r\n\r\n--BOUNDARY\r\n" + \
"Content-Type: text/plain\r\n" + \
"Content-Disposition: file; filename=\"test_doc.txt\"; documentId=1\r\n" + \
"\r\n" + \
fileContents + "\r\n" + \
"--BOUNDARY--\r\n\r\n"
# append "/envelopes" to the baseUrl and use in the request
url = baseUrl + "/envelopes";
headers = {'X-DocuSign-Authentication': AUTHENTICATION_STR, 'Content-Type': 'multipart/form-data; boundary=BOUNDARY',
'Accept': 'application/json'};
http = httplib2.Http()
response, content = http.request(url, 'POST', headers=headers, body=requestBody)
I tried encoding in UTF-8
import codecs
fileContents = codecs.open(filepath,mode='r', encoding='utf-8').read()
Still it doesn't help.
I also tried changing the `Content-Type` to : `application/pdf`
Can anyone suggest a way of doing it?
# Docusign
Answer: What is your terminal encoding setting? It should be UTF-8. Also, take a look at
this [article](http://www.joelonsoftware.com/articles/Unicode.html) to get a
clearer picture of Unicode.
Try this:
import sys
reload(sys)
sys.setdefaultencoding("utf-8")
**EDIT:**
Defining source code encoding also helps me a lot like :
# -*- coding: utf-8 -*-
Take a look at [PEP 0263](http://legacy.python.org/dev/peps/pep-0263/) docs
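Separately from the terminal and source encodings, here is a sketch of keeping the file data as raw bytes end to end, which avoids any implicit ASCII decoding. This is not part of the original answer; the file name and `application/pdf` content type are illustrative:
fileContents = open(filepath, "rb").read()  # raw bytes, never decoded
envelopePart = envelopeDef.encode("utf-8") if isinstance(envelopeDef, unicode) else envelopeDef
requestBody = ("\r\n\r\n--BOUNDARY\r\n"
               "Content-Type: application/json\r\n"
               "Content-Disposition: form-data\r\n"
               "\r\n" +
               envelopePart + "\r\n\r\n--BOUNDARY\r\n"
               "Content-Type: application/pdf\r\n"
               "Content-Disposition: file; filename=\"test_doc.pdf\"; documentId=1\r\n"
               "\r\n" +
               fileContents + "\r\n"
               "--BOUNDARY--\r\n\r\n")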
|
No Such Resource 404 error
Question: I want to run index.html. so when I type localhost:8080 then index.html should
be executed in browser. but its giving no such resource. I am specifying the
entire path of index.html. please help me out.??
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File
resource = File('/home/venky/python/twistedPython/index.html')
factory = Site(resource)
reactor.listenTCP(8000, factory)
reactor.run()
Answer: This is related to the difference between a URL ending with a slash and one
without. It appears that Twisted considers a URL at the top level (like your
`http://localhost:8000`) to include an implicit trailing slash
(`http://localhost:8000/`). That means that the URL path includes an empty
segment. Twisted then looks in the `resource` for a child named `''` (empty
string). To make it work, add that name as a child:
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File
resource = File('/home/venky/python/twistedPython/index.html')
resource.putChild('', resource)
factory = Site(resource)
reactor.listenTCP(8000, factory)
reactor.run()
Also, your question has port `8080` but your code has `8000`.
|
How can i print columns in same rows with different 'codes' using csv python
Question: Here is my code.csv file:
one,two,three
1,2,3
2,3,4
3,4,5
and code2.csv file:
,two,three
,2,3
,3,4
,4,5
this is my code.py file:
import csv
with open('code.csv', 'rb') as input, open('result.csv', 'wb') as output:
reader = csv.DictReader(input)
rows = [row for row in reader]
writer = csv.writer(output, delimiter = ',')
writer.writerow(["new_one", "new_two", "new_three"])
for row in rows:
if 'two' in row:
writer.writerow(['',row["two"]])
for row in rows:
if 'one' in row:
writer.writerow([row["one"]])
The problem is that output is printed in different rows:
new_one,new_two,new_three
,2
,3
,4
1
2
3
How can I make the
for row in rows:
if 'one' in row:
writer.writerow([row["one"]])
to be printed like this
new_one,new_two,new_three
1,2
2,3
3,4
Without having to type it like this
for row in rows:
if 'one' in row and 'two' in row:
writer.writerow([row["one"],row["two"]])
Now if I had code2.csv instead of code.csv, the latter (`if 'one' in
row and 'two' in row:`) would give me the error "'one' not found". So I need to
have something like maybe "`for row in rows[1]:`" or something. Obviously that
doesn't work, but something similar?
Answer: You **must** write row by row, not column by column. I think this should do
it:
for row in rows:
writer.writerow([row.get("one", ""), row.get("two", "")])
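For reference, with code.csv that loop should write something like the following to result.csv; with code2.csv, where the "one" column is missing, `row.get("one", "")` simply yields empty first cells (",2", ",3", ",4"):
new_one,new_two,new_three
1,2
2,3
3,4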
|
Sorting a list of dates in Python
Question: I have a list in Python of directories whose names are the dates they were
created:
import os
ConfigDir = "C:/Config-Archive/"
for root, dirs, files in os.walk(ConfigDir):
if len(dirs) == 0: # This directory has no subfolder
ConfigListDir = root + "/../" # Step back up one directory
ConfigList = os.listdir(ConfigListDir)
print(ConfigList)
['01-02-2014', '01-03-2014', '01-08-2013', '01-09-2013', '01-10-2013']
I want the most recent directory, which in that example is `01-03-2014`, the
second in the list. The dates are DD-MM-YYYY.
Can this be sorted using a lambda sort key, or should I just take the plunge
and write a simple sort function?
Answer: You'd parse the date in a sorting key:
from datetime import datetime
sorted(ConfigList, key=lambda d: datetime.strptime(d, '%d-%m-%Y'))
Demo:
>>> from datetime import datetime
>>> ConfigList = ['01-02-2014', '01-03-2014', '01-08-2013', '01-09-2013', '01-10-2013']
>>> sorted(ConfigList, key=lambda d: datetime.strptime(d, '%d-%m-%Y'))
['01-08-2013', '01-09-2013', '01-10-2013', '01-02-2014', '01-03-2014']
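If you only need the most recent directory, `max()` with the same key avoids sorting the whole list:
>>> max(ConfigList, key=lambda d: datetime.strptime(d, '%d-%m-%Y'))
'01-03-2014'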
|
convert python sqlite db to hdf5
Question: A Pandas DataFrame can be converted to a hdf5 file like this;
`df.to_hdf('test_store.hdf','test',mode='w')`
I have an sqlite db file which has to be converted to a hdf5 file and then I
would read the hdf5 file through pandas using `pd.read_hdf`.
But first how do I convert a python sqlite db to a hdf5 file ?
EDIT:
I am aware of using the `.read_sql` method in pandas. But I would like to
convert the db to hdf5 first.
Answer: This is surprisingly simple: Use pandas!
pandas supports [reading data directly from a SQL
database](http://pandas.pydata.org/pandas-docs/stable/io.html#io-sql) into a
DataFrame. Once you've got the DataFrame, you can do with it as you wish.
Short example, taken [from the docs](http://pandas.pydata.org/pandas-
docs/stable/io.html#io-sql):
import sqlite3
from pandas.io import sql
# Create your connection.
cnx = sqlite3.connect('mydbfile.sqlite')
# read the result of the SQL query into a DataFrame
data = sql.read_sql("SELECT * FROM data;", cnx)
# now you can write it into a HDF5 file
data.to_hdf('test_store.hdf','test',mode='w')
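Reading the HDF5 file back through pandas, as the question intends, is then straightforward:
import pandas as pd
df = pd.read_hdf('test_store.hdf', 'test')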
|
3D plot with an 2D array python matplotlib
Question: I have 2 1D arrays with the values of x and y, and also a 2D array with the
values of z for each point where the columns correspond to the x values and
the rows to the y values. Is there any way to get a plot_surface with this
data? when I try to do it it returns me no plot. Here is the code:
(calculate_R is a function I made for the program)
x=np.arange(0,10,1)
y=np.arange(0,1,0.2)
lx= len(x)
ly=len(y)
z=np.zeros((lx,ly))
for i in range(lx):
for j in range(ly):
z[i,j]=calculate_R(y[j],x[i])
fig = plt.figure()
ax = Axes3D(fig)
x, y = np.meshgrid(x, y)
ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap='hot')
Answer: You forgot to call
[`plt.show()`](http://matplotlib.org/1.3.0/api/pyplot_api.html#matplotlib.pyplot.show)
to display your plot.
Note that you might be able to exploit numpy vectorization to speed up the
calculation of `z`:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.axes3d import Axes3D
x = np.arange(0,10,1)
y = np.arange(0,1,0.2)
xs, ys = np.meshgrid(x, y)
# z = calculate_R(xs, ys)
zs = xs**2 + ys**2
fig = plt.figure()
ax = Axes3D(fig)
ax.plot_surface(xs, ys, zs, rstride=1, cstride=1, cmap='hot')
plt.show()
Here, I used a simple function, since you didn't supply a fully working
example.
|
How can I get a list like this [30,31,32,33,34,35] when i give "in between 30 and 35" to a python function
Question: As I am not good with regular expressions, please help me define the
method. I have a method which takes a string as argument and returns a list
1. get_values("IN BETWEEN 30 AND 35") => [30,31,32,33,34,35]
2. get_values("(in between 35 and 40) and (in [56,57,58])") => [35,36,37,38,39,40,56,57,58]
3. get_values("(in between 30 and 35) and (IN BETWEEN 40 AND 45)") =>[30,31,32,33,34,35,40,41,42,43,44,45]
These and their combinations are the possible cases
Answer: This is a potential solution, but this should really be handled by properly
parsing the SQL syntax using some kind of parser:
import re
def get_values(sql):
sql = sql.upper()
between_regex = '(\d+)\s+AND\s+(\d+)'
    ranges = [range(int(a), int(b) + 1) for a, b in re.findall(between_regex, sql)]  # +1 so the upper bound is included
in_regex = '\[(.*)]'
ranges += [[int(y) for y in x.split(',')] for x in re.findall(in_regex, sql)]
return [x for r in ranges for x in r]
print get_values("IN BETWEEN 30 AND 35")
print get_values("(in between 35 and 40) and (in [56,57,58])")
print get_values("(in between 30 and 35) and (IN BETWEEN 40 AND 45)")
#[30, 31, 32, 33, 34, 35]
#[35, 36, 37, 38, 39, 40, 56, 57, 58]
#[30, 31, 32, 33, 34, 35, 40, 41, 42, 43, 44, 45]
|
Fast way to convert rgb to lab in python
Question: Is there a quick way to convert RGB to LAB in Python3 using D50 sRGB?
[Python-Colormath](https://github.com/gtaylor/python-colormath) too slow
[skimage](http://scikit-image.org/docs/0.9.x/api/skimage.color.html#xyz2lab)
use D65
Answer: For now, the white reference in skimage cannot be passed as a parameter (pull
request welcome), but here is a workaround:
import numpy as np
from skimage import color
color.colorconv.lab_ref_white = np.array([0.96422, 1.0, 0.82521])
lab = color.rgb2lab(image)
|
relative import from __init__.py file throws error
Question: So I'm using a template set by a co-worker and as a newbie to python I may be
missing something very obvious.
The main directory has the init file which has a module I need for the main
python file that generates the error.
Co-worker used:
from . import X
Where X was the module, but when copying this (and a populated init file) to
my own directory it generates this error:
> ValueError: Attempted relative import in non-package
From google and SO I gather that this is perfectly fine and logical so I'm
wondering, as a newbie, what I've missed.
Answer: You have to understand how Python handles modules.
When you start the interpreter with a script, this script becomes the main
module, with the matching name `__main__`.
When using `import`, other modules are searched in the search path, that you
can also access (and change) using `sys.path`. The first entry of `sys.path`
is usually empty and stands for the current directory.
A directory within the search path is a package if it contains a `__init__.py`
file.
Now, when you execute a script within a directory that contains an
`__init__.py` file, this script will become `__main__` and the directory is
not regarded as a package, as it is not in `sys.path`!
For example, consider the following directory layout:
root/
pkg/
__init__.py
b.py
c.py
a.py
When you run `python a.py` from the `root/` directory, you can import `pkg/`
and use relative imports within this package (e.g., `from . import c` in
`b.py` or `__init__.py`).
When you run `python b.py` from the `pkg` directory, you cannot use relative
imports since Python does not recognize `pkg` as a package: it is not in
`sys.path`. For Python, `pkg` is an ordinary directory, no matter if it
contains a `__init__.py`. The fully qualified name of `c.py` is just `c`, not
`pkg.c`, so a relative `from . import c` won't work.
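If you do need to run a module that lives inside a package, a common workaround (not part of the answer above) is to run it with `-m` from the directory that contains the package, so Python treats `pkg` as a package and the relative imports resolve:
cd root/
python -m pkg.b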
|
AttributeError: 'float' object has no attribute 'getLow' using Pyalgotrade in Python
Question: I am trying to write a Stochastic Oscillator in Python using the
Pyalgotrade library.
Pyalgotrade library is a Python library for backtesting stock trading
strategies. Let’s say you have an idea for a trading strategy and you’d like
to evaluate it with historical data and see how it behaves. PyAlgoTrade allows
you to do so with minimal effort.
The python code is like this:
from pyalgotrade.tools import yahoofinance
from pyalgotrade import strategy
from pyalgotrade.barfeed import yahoofeed
from pyalgotrade.technical import stoch
from pyalgotrade import dataseries
from pyalgotrade.technical import ma
from pyalgotrade import technical
from pyalgotrade.technical import highlow
class MyStrategy(strategy.BacktestingStrategy):
def __init__(self, feed, instrument):
strategy.BacktestingStrategy.__init__(self, feed)
self.__stoch = stoch.StochasticOscillator(feed[instrument].getCloseDataSeries(),20, dSMAPeriod=3, maxLen=3)
self.__instrument = instrument
def onBars(self, bars):
bar = bars[self.__instrument]
self.info("%s %s" % (bar.getClose(), self.__stoch[-1]))
# Downdload then Load the yahoo feed from the CSV file
yahoofinance.download_daily_bars('AAPL', 2013, 'aapl.csv')
feed = yahoofeed.Feed()
feed.addBarsFromCSV("AAPL", "aapl.csv")
# Evaluate the strategy with the feed's bars.
myStrategy = MyStrategy(feed, "AAPL")
myStrategy.run()
The error is like this, including the full traceback.
Traceback (most recent call last):
File "/Users/johnhenry/Desktop/simple_strategy.py", line 47, in <module>
myStrategy.run()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/strategy/__init__.py", line 519, in run
self.__dispatcher.run()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/dispatcher.py", line 102, in run
eof, eventsDispatched = self.__dispatch()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/dispatcher.py", line 90, in __dispatch
if self.__dispatchSubject(subject, smallestDateTime):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/dispatcher.py", line 68, in __dispatchSubject
ret = subject.dispatch() is True
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/feed/__init__.py", line 101, in dispatch
dateTime, values = self.getNextValuesAndUpdateDS()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/feed/__init__.py", line 85, in getNextValuesAndUpdateDS
ds.appendWithDateTime(dateTime, value)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/dataseries/bards.py", line 49, in appendWithDateTime
self.__closeDS.appendWithDateTime(dateTime, value.getClose())
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/dataseries/__init__.py", line 134, in appendWithDateTime
self.getNewValueEvent().emit(self, dateTime, value)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/observer.py", line 59, in emit
handler(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/technical/__init__.py", line 89, in __onNewValue
newValue = self.__eventWindow.getValue()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/technical/stoch.py", line 60, in getValue
lowestLow, highestHigh = get_low_high_values(self.__barWrapper, self.getValues())
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/technical/stoch.py", line 42, in get_low_high_values
lowestLow = barWrapper.getLow(currBar)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyalgotrade/technical/stoch.py", line 31, in getLow
return bar_.getLow(self.__useAdjusted)
AttributeError: 'float' object has no attribute 'getLow'
Answer: You're not building the StochasticOscillator properly. You're passing the close
dataseries, and as explained in the docs, you need to pass in the bar
dataseries:
self.__stoch = stoch.StochasticOscillator(feed[instrument], 20, dSMAPeriod=3, maxLen=3)
|
Implementing Facebook FQL in Python
Question: It will probably be obvious soon enough, but I am new to coding and
StackOverflow -- apologies if I come off as a bit oblivious.
I have been trying to query Facebook with FQL out of a function on my views.py
page in a Django app. I did have a working solution utilizing the Facebook
Graph API, but it was ultimately a poor solution since I am aiming to retrieve
the user’s newsfeed.
While there are a number of answers relating to Facebook FQL in PHP, there are
no explicit examples in Python. Instead, most of the questions are just
related to the syntax of the query, and not how to actually integrate it into
an actual function.
The query I am looking to make is:
SELECT post_id, app_id, source_id, updated_time, filter_key, attribution, message, action_links, likes, permalink FROM stream WHERE filter_key IN (SELECT filter_key FROM stream_filter WHERE uid = me() AND type = 'newsfeed')
As I understand, the best way to make this request is using Requests but
frankly, I am just looking for a working solution.
I considered posting some of my non-working code up here, but it has pretty
much spiraled into a mess.
Any help or direction is greatly appreciated!
**_UPDATE_** Just wanted to document where I am at, with Sohan Jain's help,
I've gotten to this point:
def get_fb_post(request):
user_id = UserSocialAuth.objects.get(user=request.user)
api = facebook.GraphAPI(user_id.tokens)
# response gets deserialized into a python object
response = api.fql('''SELECT post_id, app_id, source_id, updated_time, filter_key, attribution, message,
action_links, likes, permalink FROM stream WHERE filter_key IN (SELECT filter_key FROM stream_filter
WHERE uid = me() AND type = 'newsfeed')''')
print(response)
data = {'profile': '...something...' } # what should be going in here?
return HttpResponse(json.dumps(data), mimetype='application/json')
In PyCharm I am getting the following (link to image if you prefer:
<http://i.stack.imgur.com/8sZDF.png>):
/Library/Python/2.7/site-packages/facebook.py:52: DeprecationWarning: django.utils.simplejson is deprecated; use json instead.
from django.utils import simplejson as json
In my browser I am getting this(image link again:
<http://i.stack.imgur.com/njXo8.png>):
GraphAPIError at /fbpost/
I am not sure how to fix the DeprecationWarning. I tried at the top of the
file putting "from django.utils import simplejson as json" but that didn't
work (I thought it was probably a bit too simplistic, but worth a shot).
If I replace the query with this:
response = api.fql("SELECT name, type, value, filter_key FROM stream_filter WHERE uid = me()")
I do get a very long response that looks like this:
[{u'filter_key': u'nf', u'type': u'newsfeed', u'name': u'News Feed', u'value': None}, {u'filter_key'... ...}]
It goes on and on and does appear to be returning valid information.
Answer: I have figured out how to get a response, the code looks something like this:
import json
from django.http import HttpResponse
from facepy import GraphAPI
def get_fb_post(request):
user_id = UserSocialAuth.objects.get(user=request.user)
access_token = user_id.tokens
graph = GraphAPI(access_token)
data = graph.fql('''SELECT post_id, app_id, source_id, updated_time, filter_key, attribution, message,
action_links, likes, permalink FROM stream WHERE filter_key IN (SELECT filter_key FROM stream_filter
WHERE uid = me() AND type = 'newsfeed')''')
return HttpResponse(json.dumps(data), mimetype='application/json')
This is returning the information that I was looking for, which looks
something like this:
{u'data': [{u'filter_key': u'nf', u'permalink': u'facebook-link',
u'attribution': None, u'app_id': u'123456789', u'updated_time': 1398182407,
u'post_id': u'1482173147_104536679665087666', u'likes': ...etc, etc, etc....
Now, I am just going to try and randomize the displayed post by randomly
iterating through the JSON -- granted, this is outside of the scope of my
question here. Thanks to those who helped.
|
I made a copy of a working Django poll, however i did something wrong, because my poll does not work
Question: I am trying to post a value to my database. I have been looking for a long
time now at a working poll app, but I cannot seem to get my poll working.
Below are the parts used by the poll app; the error must be somewhere in them.
Thanks in advance for any help.
The error I get when I don't make a choice for the poll is:
**NoReverseMatch at /wedding/2/vote/ Reverse for 'vote' with arguments '('',)'
and keyword arguments '{}' not found. 1 pattern(s) tried:
[u'wedding/(?P<invitee_id>\d+)/vote/$']**
The error I get when I do make a choice for the poll is:
**ValueError at /wedding/2/vote/ invalid literal for int() with base 10: ''**
urls.py
from django.conf.urls import patterns, url
from wedding import views
urlpatterns = patterns('',
url(r'^$', views.IndexView.as_view(), name='index'),
url(r'^(?P<pk>\d+)/$', views.DetailView.as_view(), name='detail'),
url(r'^(?P<pk>\d+)/results/$', views.ResultsView.as_view(), name='results'),
url(r'^(?P<invitee_id>\d+)/vote/$', views.vote, name='vote'),
)
views.py
from django.shortcuts import render, get_object_or_404
from django.http import HttpResponseRedirect, HttpResponse
from django.core.urlresolvers import reverse
from django.views import generic
from django.template import RequestContext, loader
# Create your views here.
from wedding.models import Invitee, Invitee_extra
class IndexView(generic.ListView):
template_name = 'wedding/index.html'
context_object_name = 'party_list'
def get_queryset(self):
"""Return the party names"""
return Invitee.objects.all().order_by('-party_name')[:5]
class DetailView(generic.DetailView):
model = Invitee
template_name = 'wedding/detail.html'
class ResultsView(generic.DetailView):
model = Invitee
template_name = 'wedding/results.html'
def vote(request, invitee_id):
p = get_object_or_404(Invitee, pk=invitee_id)
try:
selected_choice = p.invitee_extra_set.get(pk=request.POST['choice'])
except (KeyError, Invitee_extra.DoesNotExist):
# Redisplay the poll voting form.
return render(request, 'wedding/detail.html', {
'wedding': p,
'error_message': "You didn't select a attendance.",
})
else:
selected_choice.attend += 1
selected_choice.save()
# Always return an HttpResponseRedirect after successfully dealing
# with POST data. This prevents data from being posted twice if a
# user hits the Back button.
return HttpResponseRedirect(reverse('details:results', args=(p.id,)))
details.html
<h1>name: {{ invitee.party_name }} id: {{ invitee.id }} </h1>
{% if error_message %}<p><strong>{{ error_message }}</strong></p>{% endif %}
<form action="{% url 'wedding:vote' invitee.id %}" method="post">
{% csrf_token %}
{% for guests in invitee.invitee_extra_set.all %}
<input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" />
<label for="choice{{ forloop.counter }}">{{ guests.guest }}</label><br />
{% endfor %}
<input type="submit" value="Update gegevens" />
</form>
models.py
from django.db import models
# Create your models here.
class Invitee(models.Model):
party_name = models.CharField(max_length=50)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __unicode__(self): # Python 3: def __str__(self):
return self.party_name
class Invitee_extra(models.Model):
invitee = models.ForeignKey(Invitee)
guest = models.CharField(max_length=50)
attend = models.IntegerField(default=0)
def __unicode__(self): # Python 3: def __str__(self):
return self.invitee
Full traceback when I do select a radio button:
Environment:
Request Method: POST
Request URL: http://127.0.0.1:8000/wedding/2/vote/
Django Version: 1.6.2
Python Version: 2.7.5
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'wedding',
'food',
'polls')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Traceback:
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response
114. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/patrick/python/mysite/wedding/views.py" in vote
28. selected_choice = p.invitee_extra_set.get(pk=request.POST['choice'])
File "/Library/Python/2.7/site-packages/django/db/models/manager.py" in get
151. return self.get_queryset().get(*args, **kwargs)
File "/Library/Python/2.7/site-packages/django/db/models/query.py" in get
298. clone = self.filter(*args, **kwargs)
File "/Library/Python/2.7/site-packages/django/db/models/query.py" in filter
590. return self._filter_or_exclude(False, *args, **kwargs)
File "/Library/Python/2.7/site-packages/django/db/models/query.py" in _filter_or_exclude
608. clone.query.add_q(Q(*args, **kwargs))
File "/Library/Python/2.7/site-packages/django/db/models/sql/query.py" in add_q
1198. clause = self._add_q(where_part, used_aliases)
File "/Library/Python/2.7/site-packages/django/db/models/sql/query.py" in _add_q
1234. current_negated=current_negated)
File "/Library/Python/2.7/site-packages/django/db/models/sql/query.py" in build_filter
1125. clause.add(constraint, AND)
File "/Library/Python/2.7/site-packages/django/utils/tree.py" in add
104. data = self._prepare_data(data)
File "/Library/Python/2.7/site-packages/django/db/models/sql/where.py" in _prepare_data
79. value = obj.prepare(lookup_type, value)
File "/Library/Python/2.7/site-packages/django/db/models/sql/where.py" in prepare
352. return self.field.get_prep_lookup(lookup_type, value)
File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py" in get_prep_lookup
369. return self.get_prep_value(value)
File "/Library/Python/2.7/site-packages/django/db/models/fields/__init__.py" in get_prep_value
613. return int(value)
Exception Type: ValueError at /wedding/2/vote/
Exception Value: invalid literal for int() with base 10: ''
and the full traceback if I don't make a choice:
Environment:
Request Method: POST
Request URL: http://127.0.0.1:8000/wedding/1/vote/
Django Version: 1.6.2
Python Version: 2.7.5
Installed Applications:
('django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'wedding',
'food',
'polls')
Installed Middleware:
('django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware')
Template error:
In template /Users/patrick/python/mysite/wedding/templates/wedding/detail.html, error at line 5
Reverse for 'vote' with arguments '('',)' and keyword arguments '{}' not found. 1 pattern(s) tried: [u'wedding/(?P<invitee_id>\\d+)/vote/$']
1 : <h1>name: {{ invitee.party_name }} id: {{ invitee.id }} </h1>
2 :
3 : {% if error_message %}<p><strong>{{ error_message }}</strong></p>{% endif %}
4 :
5 : <form action=" {% url 'wedding:vote' invitee.id %} " method="post">
6 : {% csrf_token %}
7 : {% for guests in invitee.invitee_extra_set.all %}
8 : <input type="radio" name="choice" id="choice{{ forloop.counter }}" value="{{ choice.id }}" />
9 : <label for="choice{{ forloop.counter }}">{{ guests.guest }}</label><br />
10 : {% endfor %}
11 : <input type="submit" value="Update gegevens" />
12 : </form>
Traceback:
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py" in get_response
114. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/patrick/python/mysite/wedding/views.py" in vote
33. 'error_message': "You didn't select a attendance.",
File "/Library/Python/2.7/site-packages/django/shortcuts/__init__.py" in render
53. return HttpResponse(loader.render_to_string(*args, **kwargs),
File "/Library/Python/2.7/site-packages/django/template/loader.py" in render_to_string
169. return t.render(context_instance)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
140. return self._render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in _render
134. return self.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py" in render
840. bit = self.render_node(node, context)
File "/Library/Python/2.7/site-packages/django/template/debug.py" in render_node
78. return node.render(context)
File "/Library/Python/2.7/site-packages/django/template/defaulttags.py" in render
447. six.reraise(*exc_info)
File "/Library/Python/2.7/site-packages/django/template/defaulttags.py" in render
433. url = reverse(view_name, args=args, kwargs=kwargs, current_app=context.current_app)
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py" in reverse
509. return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py" in _reverse_with_prefix
429. (lookup_view_s, args, kwargs, len(patterns), patterns))
Exception Type: NoReverseMatch at /wedding/1/vote/
Exception Value: Reverse for 'vote' with arguments '('',)' and keyword arguments '{}' not found. 1 pattern(s) tried: [u'wedding/(?P<invitee_id>\\d+)/vote/$']
Answer: In your vote view, if the submission is not valid, you re-render the template
but you don't pass the Invitee to it - or rather you do, but you call it
`wedding` rather than `invitee`. So when the template tries to calculate the
URL of the vote submission for the invitee, it can't find the id value to use.
You need to be sure to pass the data with the correct names.
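A minimal sketch of the corrected error branch in the vote view, assuming the template keeps referring to the object as `invitee`:
    except (KeyError, Invitee_extra.DoesNotExist):
        # Redisplay the poll voting form with the context name the template expects.
        return render(request, 'wedding/detail.html', {
            'invitee': p,
            'error_message': "You didn't select an attendance.",
        })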
Note this would have been easier to debug if you'd provided more information
about the error, such as the full traceback and what you did leading up to it.
|
How to obtain random access of a gzip compressed file
Question: According to [this FAQ on zlib.net](http://www.zlib.net/zlib_faq.html#faq28)
it is possible to:
> access data randomly in a compressed stream
I know about the module Bio.bgzf of [Biopyton 1.60](http://news.open-
bio.org/news/2012/06/biopython-1-60-released/), which:
> supports reading and writing BGZF files (Blocked GNU Zip Format), a variant
> of GZIP with efficient random access, most commonly used as part of the BAM
> file format and in tabix. This uses Python’s zlib library internally, and
> provides a simple interface like Python’s gzip library.
But for my use case I don't want to use that format. Basically I want
something, which emulates the code below:
import gzip
large_integer_new_line_start = 10**9
with gzip.open('large_file.gz','rt') as f:
f.seek(large_integer_new_line_start)
but with the efficiency offered by the native zlib.net to provide random
access to the compressed stream. How do I leverage that random access
capability in Python?
Answer: I gave up on doing random access on a gzipped file using Python. Instead I
converted my gzipped file to a block gzipped file with a [block
compression/decompression
utility](http://samtools.sourceforge.net/tabix.shtml) on the command line:
zcat large_file.gz | bgzip > large_file.bgz
Then I used [BioPython](http://biopython.org/) and `tell()` to get the
virtual_offset of line number 1 million of the bgzipped file. I was then able
to rapidly seek to that virtual_offset afterwards:
from Bio import bgzf
file='large_file.bgz'
handle = bgzf.BgzfReader(file)
for i in range(10**6):
handle.readline()
virtual_offset = handle.tell()
line1 = handle.readline()
handle.close()
handle = bgzf.BgzfReader(file)
handle.seek(virtual_offset)
line2 = handle.readline()
handle.close()
assert line1==line2
I would like to also point to the [SO answer by Mark
Adler](http://stackoverflow.com/a/14242594/778533) here on
[examples/zran.c](https://github.com/madler/zlib/blob/master/examples/zran.c)
in the [zlib](http://zlib.net/) distribution.
|
Python module equivalent to the unix command "fmt"?
Question: Is there some module with similar functionality to the unix command
["fmt"](http://www.openbsd.org/cgi-
bin/man.cgi?query=fmt&apropos=0&sektion=0&manpath=OpenBSD%20Current&arch=i386&format=html)?
`fmt` is pretty intelligent at reflowing text, and I would like to use something
similar with some big strings.
Answer:
import textwrap
with open('long_text.txt', 'r') as f:
long_text = f.read()
# Get a list of wrapped lines (default width 70)
long_text_lines = textwrap.wrap(long_text)
# Get a single string containing the wrapped text
# Same as: "\n".join(textwrap.wrap(long_text, ...))
long_text_string = textwrap.fill(long_text)
|
Python Tkinter: obtain tree node information
Question: Python 2.7 Linux
I am only using idlelib.TreeWidget to create a tree in Tkinter.Canvas, nothing
else.
How do I go about getting the information of the selected tree node (eg.
name)? I need to access this information later.
I am able to call a function when the Canvas is selected / double-clicked, but
not sure how with the tree nodes:
self.canvas.bind('<Double-Button-1>', self.onSave)
Please run the following code (Note that there are 2 trees, 1 tree in each
Canvas):
from Tkinter import Tk, Frame, BOTH, Canvas
from xml.dom.minidom import parseString
from idlelib.TreeWidget import TreeItem, TreeNode
class DomTreeItem(TreeItem):
def __init__(self, node):
self.node = node
def GetText(self):
node = self.node
if node.nodeType == node.ELEMENT_NODE:
return node.nodeName
elif node.nodeType == node.TEXT_NODE:
return node.nodeValue
def IsExpandable(self):
node = self.node
return node.hasChildNodes()
def GetSubList(self):
parent = self.node
children = parent.childNodes
prelist = [DomTreeItem(node) for node in children]
itemlist = [item for item in prelist if item.GetText().strip()]
return itemlist
data = '''
<angel>
<digital_core>
<radio_top>d</radio_top>
<uart_top>e</uart_top>
</digital_core>
<digital_core>
<c>f</c>
</digital_core>
</angel>
'''
class Application(Frame):
def __init__(self, parent):
Frame.__init__(self, parent, background = "white")
self.parent = parent
# Maximize window
self.screenWidth = self.parent.winfo_screenwidth() - 5
self.screenHeight = self.parent.winfo_screenheight() - 110
self.parent.geometry('%dx%d+%d+%d' % (self.screenWidth, self.screenHeight, 0, 0))
self.parent.resizable(0, 0)
dom = parseString(data)
item = DomTreeItem(dom.documentElement)
self.canvas = Canvas(self, bg = "cyan")
self.canvas.grid(column = 0, row = 0, rowspan = 2, sticky = 'NSWE', padx = 5, pady = 5)
node = TreeNode(self.canvas, None, item)
node.update()
self.canvas2 = Canvas(self, bg = "yellow")
self.canvas2.grid(column = 1, row = 0, rowspan = 2, sticky = 'NSWE', padx = 5, pady = 5)
node2 = TreeNode(self.canvas2, None, item)
node2.update()
parent.configure(bg = "black")
self.pack(fill = BOTH, expand = True, padx = 20, pady = 20)
def main():
root = Tk()
Application(root)
root.mainloop()
if __name__ == '__main__':
main()
Answer: The most direct approach I have seen is accessing the fields of the `TreeNode`
to retrieve content passed into the `TreeNode` constructor.
For instance,
node.parent
node.canvas
node.item
To get the text from the `DomTreeItem`, you can do
node.item.GetText()
You can also get additional information by looking at the methods in
dir(node.item)
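To react when a particular tree node is double-clicked (rather than the Canvas), one option is to override the `OnDoubleClick` hook that idlelib's `TreeItem` defines for this purpose and stash the selected name somewhere you can read later. A sketch, assuming your idlelib version provides that hook (stock CPython 2.7 does); the class name and callback are illustrative:
class SelectableDomTreeItem(DomTreeItem):
    """DomTreeItem that reports its name when double-clicked."""
    def __init__(self, node, on_select):
        DomTreeItem.__init__(self, node)
        self.on_select = on_select
    def OnDoubleClick(self):
        # Called by TreeNode when this item is double-clicked.
        self.on_select(self.GetText())
    def GetSubList(self):
        children = self.node.childNodes
        prelist = [SelectableDomTreeItem(n, self.on_select) for n in children]
        return [item for item in prelist if item.GetText().strip()]

# in Application.__init__, for example:
# item = SelectableDomTreeItem(dom.documentElement,
#                              on_select=lambda name: setattr(self, 'selected_name', name))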
|
Pythonic way to extract first line of a tabular file to a list
Question: This is a function that takes the tabular file as input and return the first
row as list:
def firstline_to_list(fvar):
"""
Input tab separated file.
Output first row as list.
"""
import csv
lineno = 0
with open(fvar, 'r') as tsvfile:
tabreader = csv.reader(tsvfile, delimiter='\t')
for row in tabreader:
lineno += 1
if lineno == 1:
return row
break
Is there a better way to do it than this clunky code of mine?
Answer: Just replace your for loop with a single call of `next` on the iterator
`tabreader`. In python 2.7, this should be `tabreader.next()`, and in python
3, I think it's `next(tabreader)`. You might also want to wrap the call in
`try except` block for `StopIteration` exception, just in case the file is
empty.
So putting everything together, here's version that's compatible with python 2
and 3:
def firstline_to_list(fvar):
"""
Input tab separated file.
Output first row as list.
"""
import csv, sys
with open(fvar, 'r') as tsvfile:
tabreader = csv.reader(tsvfile, delimiter='\t')
try:
if sys.version > '3':
result = next(tabreader)
else:
result = tabreader.next()
except StopIteration:
result = None
return result
|
python types in list during comprehension
Question: I have a sql query string such as the following:
intro text,
id int,
description varchar(50)
I am trying to create a string of types, with the goal of finding pieces of
text that do not match the types defined in the sql schema. The way I am
extracting the types from the sql text is as follows:
types = [re.sub('[^a-zA-Z]','',x.split()[1]) for x in schema]
types = [re.sub('varchar',types.StringType,x) for x in types]
types = [re.sub('text',types.StringType,x) for x in types]
types = [re.sub('bigint',types.IntType,x) for x in types]
types = [re.sub('decimal',types.IntType,x) for x in types]
However the interpreter complains that
types = [re.sub('varchar',types.StringTypes,x) for x in types]
AttributeError: 'list' object has no attribute 'StringTypes'
A SSCCE
With the following schema file
intro text,
id int,
description varchar(50)
and code (note, with fix as suggested by oscar below, but now with other
error)
import csv
import sys
import re
import types
sch = open(sys.argv[1], "rb")
#---------------------------------
# read schema
#---------------------------------
with sch as f:
schema = f.read().splitlines()
#---------------------------------
# extract schema types
#---------------------------------
foundtypes = [re.sub('[^a-zA-Z]','',x.split()[1]) for x in schema]
foundtypes = [re.sub('varchar',str,x) for x in foundtypes]
foundtypes = [re.sub('text',str,x) for x in foundtypes]
foundtypes = [re.sub('int',int,x) for x in foundtypes]
foundtypes = [re.sub('bigint',int,x) for x in foundtypes]
foundtypes = [re.sub('decimal',int,x) for x in foundtypes]
print foundtypes
I am using Python 2.7.5
thank you
Answer: You overwrote the binding (see: [variable
shadowing](http://en.wikipedia.org/wiki/Variable_shadowing)) to the `types`
[module](https://docs.python.org/2/library/types.html), in this line:
types = [re.sub('[^a-zA-Z]','',x.split()[1]) for x in schema]
After that, `types` is no longer pointing to the module, but to a list. Just
use another name in all the assignments:
my_types = [re.sub('[^a-zA-Z]','',x.split()[1]) for x in schema]
my_types = [re.sub('varchar',types.StringType,x) for x in my_types]
my_types = [re.sub('text',types.StringType,x) for x in my_types]
my_types = [re.sub('bigint',types.IntType,x) for x in my_types]
my_types = [re.sub('decimal',types.IntType,x) for x in my_types]
**UPDATE**
I think you overdesigned the solution, except for the first line this is not a
good fit for using regular expressions. A simple `if-elif-else` will work just
fine:
def transform(typestr):
if typestr in ('varchar', 'text'):
return types.StringType
elif typestr in ('int', 'bigint', 'decimal'):
return types.IntType
else:
return None
my_types = [re.sub(r'[^a-zA-Z]', '', x.split()[1]) for x in schema]
[transform(x) for x in my_types]
=> [<type 'str'>, <type 'int'>, <type 'str'>]
|
python SIP log file processing
Question: I have a sniff/log file for VoIP/SIP generated by Python scapy in the format `time | src | srcport | dst | dstport | payload`. The sniff Python script looks like this:
## Import Scapy module
from scapy.all import *
import sys
sys.stdout = open('data.txt', 'w')
pkts = sniff(filter="udp and port 5060 and not port 22", count=0,prn=lambda x:x.sprintf("%sent.time% | %IP.src% | %IP.sport% | %IP.dst% | %IP.dport% | Payload {Raw:%Raw.load%\n}"))
Each packet is on one line, and each line can have a different size depending on the SIP
message type (Register, 200 OK, Invite, Notify and so on...).
What I would like to get from the file are the fields `time, src, srcport, dst,
dstport`, and from the `Payload` the type of SIP message (right after Payload) plus
`From, To, Call-ID, Contact` and the whole payload, and then prepare these to
insert into a MySQL database.
1st msg:
07:57:01.894990 | 192.168.1.10 | 5060 | 192.168.1.1 | 5060 | Payload 'INVITE sip:[email protected] SIP/2.0\r\nVia:
SIP/2.0/UDP 192.168.1.10:5060;rport;branch=z9hG4bK-9cbb0ba8\r\nRoute: <sip:192.168.1.1:5060;lr>\r\nFrom: "test-311" <sip:[email protected]>;tag=3d13bd6f\r\n
To: <sip:[email protected]>\r\nCall-ID: [email protected]\r\nCSeq: 1 INVITE\r\n
Contact: "test-311" <sip:[email protected]:5060;transport=UDP>\r\nMax-Forwards: 70\r\n
Supported: 100rel,replaces\r\nAllow: ACK, BYE, CANCEL, INFO, INVITE, OPTIONS, NOTIFY, PRACK, REFER, UPDATE, MESSAGE\r\nContent-Type: application/sdp\r\nContent-Length: 276\r\n\r\nv=0\r\no=- 3506863524 285638052 IN IP4 192.168.1.10\r\ns=-\r\nc=IN IP4 192.168.1.10\r\nt=0 0\r\nm=audio 8000 RTP/AVP 8 0 18 101\r\nc=IN IP4 192.168.1.10\r\na=rtpmap:8 PCMA/8000\r\na=rtpmap:0 PCMU/8000\r\na=rtpmap:18 G729/8000\r\na=rtpmap:101 telephone-event/8000\r\na=fmtp:101 0-15\r\na=ptime:20\r\n'
2nd msg:
07:57:01.902618 | 192.168.1.1 | 5060 | 192.168.1.10 | 5060 | Payload 'SIP/2.0 100 Trying\r\nVia: SIP/2.0/UDP 192.168.1.10:5060;received=192.168.1.10;branch=z9hG4bK-9cbb0ba8;rport=5060\r\nFrom: "test-311" <sip:[email protected]>;tag=3d13bd6f\r\nTo: <sip:[email protected]>\r\nCall-ID: [email protected]\r\nCSeq: 1 INVITE\r\n\r\n'
I have tried to read line by line and split, but I do not know how to split and
take the data from the payload part. Any help is more than welcome.
Answer: Well, you can enter the data into mysql straight from this program too; it
might very well be the easiest approach.
from scapy.all import *
import sys
# connect to mysql
connection = ...
def insert_into_mysql(packet):
# now you can use packet.src, packet.sport, packet.dst, packet.dport, and
# I believe packet['Raw'].load
connection.execute(...)
# to not print the packet
return None
# to print the packet
    return packet.sprintf("%sent.time% | %IP.src% | %IP.sport% | %IP.dst% | %IP.dport% | Payload {Raw:%Raw.load%\n}")
pkts = sniff(filter="udp and port 5060", count=0, store=0, prn=insert_into_mysql)
But if you need to use the existing log, I think you need to use:
for line in open('log.txt'):
    sent_time, src, sport, dst, dport, payload = line.split(' | ', 5)  # maxsplit=5 keeps any ' | ' inside the payload intact
payload = payload.replace('Payload ', '')
# to get the unquoted payload, I'd guess (can't test SIP though)
from ast import literal_eval
payload = literal_eval(payload)
|
unable to post tweet on python 3.4 and twitter-1.14.2 api
Question:
from twitter.api import Api
Api = twitter.api(consumer_key='[gdgfdfhgfuff] ',
consumer_secret='[jhhjf] ',
access_token_key=' [jhvhvvhjvhvhvh]',
access_token_secret='[hvghgvvh] ')
friends=Api.PostUpdate("First Tweet from PYTHON APP ")
error given
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
from twitter.api import Api
ImportError: cannot import name 'Api'
I am using python 3.4 and twitter-1.14.2 api
Answer: Replace
`from twitter.api import Api`
with
`from twitter.api import Twitter`
There is no class or method in
[`twitter.api`](https://github.com/sixohsix/twitter/blob/master/twitter/api.py)
called `Api`
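For completeness, a sketch of how a status is typically posted with the sixohsix `twitter` package (the one installed as `twitter-1.14.2`); the credential strings are placeholders:
from twitter import Twitter, OAuth

t = Twitter(auth=OAuth('access_token_key', 'access_token_secret',
                       'consumer_key', 'consumer_secret'))
t.statuses.update(status="First Tweet from PYTHON APP")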
|
Strange poco reactor's notifications
Question: I have code that implements a Poco SocketReactor, as described in the Echo
example.
class TSPConnectionHandler
{
public:
TSPConnectionHandler(const StreamSocket& socket, SocketReactor& reactor) : _socket(socket),
_reactor(reactor)
{
_reactor.addEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, ReadableNotification>(*this, &TSPConnectionHandler::onSocketReadable));
_reactor.addEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, WritableNotification>(*this, &TSPConnectionHandler::onSocketWritable));
_reactor.addEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, ShutdownNotification>(*this, &TSPConnectionHandler::onSocketShutdown));
_reactor.addEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, ErrorNotification>(*this, &TSPConnectionHandler::onSocketError));
_reactor.addEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, TimeoutNotification>(*this, &TSPConnectionHandler::onSocketTimeout));
}
virtual ~TSPConnectionHandler()
{
_reactor.removeEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, ReadableNotification>(*this, &TSPConnectionHandler::onSocketReadable));
_reactor.removeEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, WritableNotification>(*this, &TSPConnectionHandler::onSocketWritable));
_reactor.removeEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, ShutdownNotification>(*this, &TSPConnectionHandler::onSocketShutdown));
_reactor.removeEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, ErrorNotification>(*this, &TSPConnectionHandler::onSocketError));
_reactor.removeEventHandler(_socket, Poco::NObserver<TSPConnectionHandler, TimeoutNotification>(*this, &TSPConnectionHandler::onSocketTimeout));
}
void onSocketReadable(const Poco::AutoPtr<ReadableNotification>& pNf)
{
cout << "READable !!" << endl;
try
{
vector<char> m_buffer;
m_buffer.resize(1024, '\0');
LONG32 m_buflen = _socket.receiveBytes(&m_buffer[0], 1024);
if (m_buflen == 0)
{
cout << "Connection reset by peer normally" << endl;
delete this;
}
_socket.sendBytes(&m_buffer[0], m_buflen);
}
catch(Poco::Net::NetException& e)
{
cout << "socket read exception: " << e.displayText() << endl;
delete this;
}
}
void onSocketWritable(const Poco::AutoPtr<WritableNotification>& pNf)
{
cout << "WRITEable !!" << endl;
}
void onSocketShutdown(const Poco::AutoPtr<ShutdownNotification>& pNf)
{
cout << "SHUTDOWN!!!!!!!!!!!!" << endl;
delete(this);
}
void onSocketError(const Poco::AutoPtr<ErrorNotification>& pNf)
{
cout << "Error!!" << endl;
}
void onSocketTimeout(const Poco::AutoPtr<TimeoutNotification>& pNf)
{
cout << "Timeout!!" << endl;
}
private:
StreamSocket _socket;
SocketReactor& _reactor;
};
It starts normally somewhere else in program using this code:
Poco::Net::ServerSocket tcpsock("9495");
Poco::Net::SocketReactor reactor;
Poco::Net::SocketAcceptor<TSPConnectionHandler> acceptor(tcpsock, reactor);
Poco::Thread thread;
thread.start(reactor);
waitForTerminationRequest();
reactor.stop();
thread.join();
return Application::EXIT_OK;
Also I have python client to test it:
import socket
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(('localhost', 9495))
data = raw_input ( "Something:" )
client_socket.send(data)
data = client_socket.recv(1024)
print "RECIEVED:" , data
client_socket.close()
And I have a list of questions that I don't understand how to resolve:
* after the python line "client_socket.connect(('localhost', 9495))" is done, the server starts to spam "onSocketWritable"; how do I stop it? I understand I have to be notified when the socket becomes writable, but why does it keep doing that for the socket's whole life? If that is the normal situation, what was WritableNotification designed for?
* if I start python in debug mode and close it before "client_socket.close()", the server gets an exception and the handler is deleted through "socket read exception: ". But ErrorNotification and ShutdownNotification are not dispatched. Why? I have never seen them dispatched at all.
* If I let "client_socket.close()" complete, on the server side I receive a ReadableNotification and leave through "Connection reset by peer normally". But there is still no ShutdownNotification. Why?
Answer: 1. As soon as you connect to the server, a socket is instantiated, and if you registered a writable notification you will be notified about this state. This means you should only add this event handler when you have data to send. The socket will send the data and, when done, this handler is called again (asynchronous socket...). If you don't have more blocks to send, call removeEventHandler for this event.
2. The socket reactor uses the select command (Socket::select(readable, writable, except, _timeout)). The name ErrorNotification is a little bit misleading (you will get some error conditions but also out-of-band data). If a socket closes gracefully, onReadable will be called with no available chars; otherwise Poco::Net::ConnectionResetException will be thrown.
3. The ShutdownNotification is sent when the SocketReactor is stopped.
You should have a look at SocketReactor.cpp; then everything is much clearer.
|
Why do I get "ImportError: Twisted requires zope.interface 3.6.0 or later." when running stratum mining proxy?
Question: The entire return when running "sudo python ./mining_proxy.py" is:
Traceback (most recent call last):
File "./mining_proxy.py", line 67, in <module>
from twisted.internet import reactor, defer
File "/Library/Python/2.7/site-packages/Twisted-13.2.0-py2.7-macosx-10.8-intel.egg/twisted/__init__.py", line 53, in <module>
_checkRequirements()
File "/Library/Python/2.7/site-packages/Twisted-13.2.0-py2.7-macosx-10.8-intel.egg/twisted/__init__.py", line 51, in _checkRequirements
raise ImportError(required + ".")
ImportError: Twisted requires zope.interface 3.6.0 or later.
This leads me to believe that zope.interface is not installed. So I try to
install it:
sudo easy_install zope.interface
Searching for zope.interface
Best match: zope.interface 4.1.1
Processing zope.interface-4.1.1-py2.7-macosx-10.8-intel.egg
zope.interface 4.1.1 is already the active version in easy-install.pth
Using /Library/Python/2.7/site-packages/zope.interface-4.1.1-py2.7-macosx-10.8-intel.egg
Processing dependencies for zope.interface
Finished processing dependencies for zope.interface
I also find [this post](https://github.com/kpdyer/fteproxy/issues/66) which
says you basically need to put an `__init__.py` into the folder. So I do:
I try to run mining proxy again, same error. Please help.
Answer: I had the same error.
After googling around I found that the touch should be in a different place:
sudo touch /usr/local/lib/python2.7/site-packages/zope/__init__.py
however it didn't work.
I just fixed it using virtualenv
steps here:
pip install virtualenv virtualenvwrapper
mkvirtualenv stratum-proxy
pip install git+https://github.com/slush0/stratum-mining-proxy.git
pip install zope2
After those steps, stratum-proxy was working successfully inside the virtualenv.
Hope it helps.
|
Adding a text control window to a wxMessageDialog box/AboutDialogInfo in wxPython
Question: I am working with python v2.7 and wxPython v3.0 on Windows 7 OS.
In my app I have a about menu. Upon clicking the about menu I want to display
some information about my app. I am trying to create a dialog box/AboutBox
exactly as shown in the image below.(This is the about dialog of
[notepad++](http://notepad-plus-plus.org/). Click on `?` in the menu bar of
notepad++.)
The special thing about the dialog box of notepad++ is that I need a text
control window too. One can copy the info.

I tried to do the same in wxPython, but unfortunately I failed. I tried two
different trial-and-error approaches.
**1.** I tried to add the text control window to the ~~dialog box~~
wxMessageDialog but it doesn't shows up at all.
**2.** I tried to use the [AboutBox](http://www.wxpython.org/docs/api/wx-
module.html#AboutBox) in wxPython, and tried to add the text control to it but
it failed because the AboutDialogInfo is not a window and the parent of the
text control should be of a window type.
Error:
aboutPanel = wx.TextCtrl(info, -1, style = wx.TE_MULTILINE|wx.TE_READONLY|wx.HSCROLL)
File "C:\Python27\lib\site-packages\wx-3.0-msw\wx\_controls.py", line 2019, in __init__
_controls_.TextCtrl_swiginit(self,_controls_.new_TextCtrl(*args, **kwargs))
TypeError: in method 'new_TextCtrl', expected argument 1 of type 'wxWindow *'
It would be great if someone could provide some idea on how to add a text
control window to a dialog box/AboutBox?
**Code:** Here is my code sample for playing around:
import wx
from wx.lib.wordwrap import wordwrap
class gui(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self,None, id, title, style=wx.DEFAULT_FRAME_STYLE)
panel1 = wx.Panel(self, -1)
panel1.SetBackgroundColour('#fffaaa')
menuBar = wx.MenuBar()
file = wx.Menu()
file.Append(101, '&About1', 'About1')
file.Append(102, '&About2', 'About2')
menuBar.Append(file, '&File')
self.SetMenuBar(menuBar)
wx.EVT_MENU(self, 101, self.onAbout)# Event for the About1 menu
wx.EVT_MENU(self, 102, self.onAboutDlg)# Event for the About2 menu
def onAbout(self, event):
message = 'This fantastic app was developed using wxPython.\nwxPython is c00l :)'
dlg = wx.MessageDialog(self, message, 'My APP', wx.OK|wx.ICON_INFORMATION)
aboutPanel = wx.TextCtrl(dlg, -1, style = wx.TE_MULTILINE|wx.TE_READONLY|wx.HSCROLL)
aboutPanel.WriteText('Experimentation is the part of our life.\n')
dlg.ShowModal()
dlg.Destroy()
def onAboutDlg(self, event):
self.panel = wx.Panel(self, -1)
info = wx.AboutDialogInfo()
info.Name = "My About Box"
info.Version = "0.1"
info.Copyright = "(C) 2014 xxx"
info.Description = wordwrap(
"This is an example application that shows the problem "
"that I am facing :)",
350, wx.ClientDC(self.panel))
info.WebSite = ("http://stackoverflow.com/users/2382792/pss", "My Home Page")
info.Developers = ["PSS"]
info.License = wordwrap("Driving license and a AK-47 too :P ", 500,wx.ClientDC(self.panel))
# Uncomment the following line to get the error!
#aboutPanel = wx.TextCtrl(info, -1, style = wx.TE_MULTILINE|wx.TE_READONLY|wx.HSCROLL)
#aboutPanel.WriteText('Experimentation is the part of our life.\n')
wx.AboutBox(info)
if __name__ == '__main__':
app = wx.App()
frame = gui(parent=None, id=-1, title="My-App")
frame.Show()
app.MainLoop()
Thank you for your time!
Answer: `wxAboutBox()` uses the standard about dialog for the current platform so it
won't help you to achieve your goal. Similarly, `wxMessageDialog` is the
native message box dialog which, again, can't have your custom text box. OTOH
there is absolutely no problem with building any dialog you want using
`wxDialog` and adding elements to it (and using sizers to lay them out).
IOW your mistake is to think that the class you need to use is
`wxMessageDialog`: it isn't, you need `wxDialog`.
|
Python CSV file reader : bad cells format for high numbers
Question: I'm using the csv file reader of python to read a csv file. My code is the
following :
with open(self.csv_file, 'rb') as csvfile:
data = list(csv.reader(csvfile, delimiter=';'))
Everything works fine except for one column of my csv file. I have columns
with float numbers that are between 1 and -1 (ex : 0,34430203959) and it works
perfectly, but one of my columns is dealing with very high numbers (ex :
142266963436). Because the cells of the file are formatted as "standards", my
CSV file is printing the number 142266963436 as 142.266.963.436 (as shown in
the figure below):

There is the problem. When I parse my CSV file, he has no problems dealing
with little numbers because there is only one "." but when I parse high
numbers with a lot of "." python interpret them as "float" and put the "."
after the fist number.
So, instead of having the number :
142266963436
I have the number :
1.42266963436
Does anyone have an idea of how I could fix this?
Thanks for your help!
Answer: Your CSV file contains that oddity; it looks as if all your `z` columns are
afflicted (`ankle_pos.z` and `toe_pos.z` in your screenshot). If you are
certain these values are meant to be floats then you can use Python to
'repair' these:
import csv
def repaired_float(c):
try:
return float(c)
except ValueError:
# interpret ddd.ddd.ddd.ddd as 0.ddddddddd instead
        return float('.{}'.format(c.replace('.', '')))
with open(filename, 'rb') as infh:
reader = csv.reader(infh)
next(reader, None) # skip header row
for row in reader:
row = map(repaired_float, row)
# do something with row
If you wanted to interpret those values as (large) integers, replace the last
2 lines in `repaired_float()` with:
# interpret ddd.ddd.ddd.ddd as ddddddddd instead
return int(c.replace('.', ''))
|
Split Integer List in Python
Question: I have a list of integers. I want to split the main list into multiple sub
lists of non-zero integers.
Example:
main_lists = [0,0,0,0,0,0,123,432,5,54,0,654,645,34,23,12,0,0,0,0,0,0,123,1,312,312,132,3,123,0,0,0,0,0,0,0]
output =
[123,432,5,54,0,654,645,34,23,12]
[123,1,312,312,132,3,123]
The condition is: if fewer than 5 consecutive zeros are found between two non-zero
values, those zeros should be included in the output lists. If 5 or more
consecutive zeros are found, the list should be split at that
position.
main_lists = [0,0,0,0,0,0,123,432,5,54,0,654,645,34,23,12,0,0,0,0,0,0,123,1,312,312,132,3,123,0,0,0,0,0,0,0]
zero_count = 0
non_zero_temp = []
for i in main_lists:
if i == 0:
zero_count = zero_count + 1
else:
if zero_count < 5:
non_zero_temp.append(i)
zero_count = 0
Thanks in Advance,
Answer:
from itertools import groupby
main_lists = [
0,0,0,0,0,0,123,432,5,54,0,654,645,34,23,12,0,
0,0,0,0,0,123,1,312,312,132,3,123,0,0,0,0,0,0,0
]
# group numbers into contiguous lists (by 0 or not-0)
is_zero = lambda n: not n
groups = (list(nums) for zero,nums in groupby(main_lists, key=is_zero))
# group lists into contiguous chunks (by to-keep or to-discard)
is_keeper = lambda lst: bool(lst[0]) or len(lst) < 5
chunks = (chunk for keep, chunk in groupby(groups, key=is_keeper) if keep)
# reassemble chunks
final = [[i for lst in chunk for i in lst] for chunk in chunks]
results in
[[123, 432, 5, 54, 0, 654, 645, 34, 23, 12], [123, 1, 312, 312, 132, 3, 123]]
|
BeautifulSoup Specify table column by number?
Question: Using Python 2.7 and BeautifulSoup 4, I'm scraping song names from a table.
Right now the script finds links in the row of a table; how can I specify I
want the first column?
Ideally I'd be able to switch numbers around to change which ones got
selected.
Right now the code looks like this:
from bs4 import BeautifulSoup
import requests
r = requests.get("http://evamsharma.finosus.com/beatles/index.html")
data = r.text
soup = BeautifulSoup(data)
for table in soup.find_all('table'):
for row in soup.find_all('tr'):
for link in soup.find_all('a'):
print(link.contents)
How do I, in effect, index the `<td>` tags within each `<tr>` tag?
The URL in there right now is a page on my site where I basically copied the
table source from Wikipedia to make the scraping a little simpler.
Thanks!
evamvid
Answer: Find all `td` tags inside `tr` and get the one you need by index:
index = 2
for table in soup.find_all('table'):
for row in soup.find_all('tr'):
try:
td = row.find_all('td')[index]
except IndexError:
continue
for link in td.find_all('a'):
print(link.contents)
|
How can I efficiently create a user graph based on transaction data using Python?
Question: I'm attempting to create a graph of users in Python using the networkx
package. My raw data is individual payment transactions, where the payment
data includes a user, a payment instrument, an IP address, etc. My nodes are
users, and I am creating edges if any two users have shared an IP address.
From that transaction data, I've created a Pandas dataframe of unique [user,
IP] pairs. To create edges, I need to find [user_a, user_b] pairs where both
users share an IP. Let's call this DataFrame 'df' with columns 'user' and
'ip'.
I keep running into memory problems, and have tried a few different solutions
outlined below. For reference, the raw transaction list has ~500,000 entries,
includes ~130,000 users, ~30,000 IPs, and likely ~30,000,000 links.
1. Join df to itself, sort pairs and remove duplicates (so that [X, Y] and [Y, X] don't both show up as unique pairs).
df_pairs = df.join(df, how='inner', lsuffix='l', rsuffix='r')
df_sorted_pairs = [np.sort([df_pairs['userl'][i], df_pairs['userr'][i]]) for i in range(len(df_pairs))]
edges = np.asarray(pd.DataFrame(df_sorted_pairs).drop_duplicates())
This works pretty well, but gives me a Memory Error fairly quickly, as joining
a table to itself grows very quickly.
2. Create a matrix, where users are the rows, IPs are the columns, and matrix elements are 1 if that user transacted on the IP and 0 otherwise. Then X.dot(X.transpose()) is a square matrix whose elements (i,j) represent how many IPs were shared by user i and user j.
user_list = df['user'].unique()
ip_list = df['ip'].unique()
df_x = pd.DataFrame(index=user_list, columns=ip_list)
df_x.fillna(0, inplace=True)
for row in range(len(df)):
df_x[df['ip'][row]][df['user'][row]] = 1
df_links = df_x.dot(df_x.transpose())
This works extremely well unless len(ip_list) > 5000. Just creating the empty
dataframe of say, 500,000 rows x 200,000 columns gives a Memory Error.
3. Brute force. Iterate across the users one by one. For each user, find the distinct IPs. For each IP, find the distinct users. Those resulting users are therefore linked to the user in the current iteration. Add that [User1, User2] list to master list of links.
user_list = df['user'].unique()
ip_list = df['ip'].unique()
links=[]
for user in user_list:
related_ip_list = df[df['user'] == user]['ip'].unique()
for ip in related_ip_list:
related_user_list = df[df['ip'] == ip]['user'].unique()
for related_user in related_user_list:
if related_user != user:
links.append([user, related_user])
This works, but extremely slow. It ran for 3 hours and finally gave me a
Memory Error. Because links was being saved along the way, I could check how
big it got - about 23,000,000 links.
Any advice would be appreciated. Have I simply gone too far into "Big Data"
where traditional methods like the above aren't going to cut it? I didn't
think having 500,000 transactions qualified as "Big Data" but I guess storing
a 130,000 x 30,000 matrix or creating a list with 30,000,000 elements is
pretty large?
Answer: I think your problem is that a matrix representation is not going to cut it:
Note that memory wise, you do very inefficient stuff. For example, you create
a matrix with a lot of zeros that need to be allocated in RAM. It would be a
lot more efficient to not have any object in RAM for a connection that does
not exist instead of a zero float. You "abuse" linear algebra math to solve
your problem, which makes you use a lot of RAM. (The number of elements in
your matrix is 130k * 30k ≈ 4 billion, but you "only" have ~30m links that you
actually care about.)
I truly feel for you, because pandas was the first library I learned and I was
trying to solve almost every problem with pandas. I noticed over time though
that the matrix approach is not optimal for a lot of problems.
There is a "spare matrix" somewhere in numpy, but let's not go there.
let me suggest another approach:
use a simple default dict:
from collections import defaultdict
# a dict that makes an empty set if you add a key that doesnt exist
shared_ips = defaultdict(set)
# for each ip, you generate a set of users
for k, row in unique_user_ip_pairs.iterrows():
shared_ips[row['ip']].add(row['user'])
# filter the dict for ips that have more than 1 user
shared_ips = {k: v for k, v in shared_ips.items() if len(v) > 1}
I'm not sure if this is 100% going to solve your usercase, but note the
efficiency:
This will at most duplicate the RAM usage from your initial unique user-ip
pairs object. But you will get the information which ip was shared amongst
which users.
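If the end goal is still the networkx user graph from the question, a minimal
sketch (my addition, assuming `shared_ips` was built as above and networkx is
installed) turns every shared IP into edges between each pair of its users:
import networkx as nx
from itertools import combinations
G = nx.Graph()
for ip, users in shared_ips.items():
    # every pair of users that transacted on this ip gets an edge
    for user_a, user_b in combinations(users, 2):
        G.add_edge(user_a, user_b)
print(G.number_of_nodes(), G.number_of_edges())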
# The big lesson is this:
_If most cells in a matrix represent the same type of information, don't use a
matrix approach if you run into memory problems_
I've seen so many pandas solutions for problems that could have been solved with
the simple usage of Python's builtin types like _dict_ , _set_ , _frozenset_
and _Counter_. People coming to Python from statistical toolboxes
like MATLAB, R or Excel are especially prone to it (they sure like their
tables). I suggest not making pandas the personal built-in
library you resort to first...
|
Is there some equivalent function in python for pack from php?
Question: I'm not familiar with binary operations in general, so for PHP i did it by
examples, but can't find anything for python... I need something in python
like in php `pack('H*'` and `pack('a*'`.
Answer: The python [`struct`](https://docs.python.org/2/library/struct.html) module
is what you're looking for.
>>> import struct
>>> struct.pack("<H", 5) # Little endian short, 2 bytes.
'\x05\x00'
Do note that [`php.pack()`](http://www.php.net/manual/en/function.pack.php)
`H` is not the same as struct `H`.
If you want to convert an integer to a hexadecimal:
>>> import random
>>> hex(random.randint(0, 100))
'0x4c'
but in python hexadecimals are strings, so I'm not sure if that's what you're
asking. If you want to actually _pack_ the output of `random.randint()` as
binary data, pack the value as some integer. You can use the format characters
`B, H, I, L, Q` at least.
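For the two PHP calls from the question specifically, hedged equivalents (to
the best of my understanding of PHP's `pack()`) are `binascii.unhexlify` for
`pack('H*', ...)` and the plain byte string itself for `pack('a*', ...)`:
import binascii
# PHP: pack('H*', '48656c6c6f') -> binary data from a hex string
raw = binascii.unhexlify('48656c6c6f')   # b'Hello' (a plain str in Python 2)
# PHP: pack('a*', 'Hello') -> the string itself, as raw bytes
raw2 = b'Hello'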
|
Multiplying array in python
Question: From [this](http://stackoverflow.com/questions/8194959/in-python-how-will-you-
multiply-individual-elements-of-an-array-with-a-floating) question I see how
to multiply a whole numpy array with the same number (second answer, by
JoshAdel). But when I change P into the maximum of a (long) array, is it
better to store the maximum on beforehand, or does it calculate the maximum of
H just once in the second example?
import numpy as np
H = [12,12,5,32,6,0.5]
P=H.max()
S=[22, 33, 45.6, 21.6, 51.8]
SP = P*np.array(S)
or
import numpy as np
H = [12,12,5,32,6,0.5]
S=[22, 33, 45.6, 21.6, 51.8]
SP = H.max()*np.array(S)
So does it calculate `H.max()` for every item it has to multiply, or is it
smart enough to it just once? In my code `S` and `H` are longer arrays then in
the example.
Answer: There is little difference between the 2 methods:
In [74]:
import numpy as np
H = np.random.random(100000)
%timeit P=H.max()
S=np.random.random(100000)
%timeit SP = P*np.array(S)
%timeit SP = H.max()*np.array(S)
10000 loops, best of 3: 51.2 µs per loop
10000 loops, best of 3: 165 µs per loop
1000 loops, best of 3: 217 µs per loop
Here you can see that pre-calculating `H.max()` and storing it is essentially no
different from calculating it inline: in both versions `H.max()` is evaluated
exactly once per statement, not once per element of `S`.
|
How to use a Matplotlib proxy artist with stackplot?
Question: Example (runs in `ipython --pylab`):
x = arange(25)
Y = maximum(0,2+randn(7,25))
stackplot(x,Y)
legend(('A','B','C','D','E','F','G'))
**The problem:** instead of a proper legend, I get an empty rectangle. This is
a [known issue](https://github.com/matplotlib/matplotlib/issues/1943) and one
work-around is using another plot element that supports legend, also called a
_[proxy artist](http://matplotlib.org/users/legend_guide.html#using-proxy-
artist)_. Now I wonder how this idiom is translated to my case with seven data
series. Here is what I tried:
proxy = [Rectangle((0,0), 0,0) for _ in Y]
legend(proxy, ('A','B','C','D','E','F','G'))
Now I have a legend with 7 elements, but they are all blue. How can I have the
proxy artists match the stackplot colors?
Answer: You can read the color from the `PolyCollections` which form the stackplot.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
x = np.arange(25)
Y = np.maximum(0,2+np.random.randn(7,25))
fig, ax = plt.subplots()
sp = ax.stackplot(x,Y)
proxy = [mpl.patches.Rectangle((0,0), 0,0, facecolor=pol.get_facecolor()[0]) for pol in sp]
ax.legend(proxy, ('A','B','C','D','E','F','G'))

|
Save utf-8 strings into sqlite table with python (sqlite3 module)
Question: I am using python 2.7.6 and sqlite3. Here the part of code which must create
the table named goods:
c.execute("""create table goods(
Art varchar(12) primary key,
Name_G varchar(30),
Meas varchar(3),
Price_G double unsigned,
check (Meas in ('кг.','л.','шт.')))""")
Don't blame me for such a method, it's the exercise requirement. The field Meas
can contain only one of three strings. They are listed in the check constraint and they
are in Russian. This query executes successfully but if I check the data in the
database I see it stores the following SQL:
CREATE TABLE goods(
Art varchar(12) primary key,
Name_G varchar(30),
Meas varchar(3),
Price_G double unsigned,
check (Meas in ('кг.','л.','шт.')))
As you can see, it's not Russian. Is there any method to store correct data in
the database? encode() method, u'' didn't work for me.
Answer: The following instructions work correctly for me:
alex@rhyme ~/tmp $ echo $LANG
ru_RU.UTF-8
alex@rhyme ~/tmp $ python
Python 2.7.6 (default, Feb 5 2014, 11:50:47)
[GCC 4.7.2 20121109 (ALT Linux 4.7.2-alt8)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
>>> db = sqlite3.connect('sample1.db')
>>> c = db.cursor()
>>> c1 = c.execute('PRAGMA encoding="UTF-8";')
>>> c1 = c.execute('CREATE TABLE sample (id INTEGER PRIMARY KEY AUTOINCREMENT, t VARCHAR(3) NOT NULL DEFAULT "руб");')
>>> c1 = c.execute('INSERT INTO sample DEFAULT VALUES;')
>>> db.commit()
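A hedged follow-up sketch (the values and database filename are just
illustrative, reusing the `goods` table from the question): pass the Russian
strings as `u''` literals through parameterized queries, and the sqlite3 module
stores them as UTF-8 for you:
# -*- coding: utf-8 -*-
import sqlite3
db = sqlite3.connect('goods.db')   # hypothetical database file
c = db.cursor()
# the Russian strings are passed as unicode parameters, not pasted into the SQL
c.execute(u"INSERT INTO goods (Art, Name_G, Meas, Price_G) VALUES (?, ?, ?, ?)",
          (u'A001', u'Молоко', u'л.', 42.5))
db.commit()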
|
Analyse frequency of mp3 files with python
Question: I am trying to write a Python script to read an MP3 file and perform some
analysis on the frequencies in it. In particular, I want a spectrogram
(frequency vs time) as output.
However, when I read the file using open() and piped the contents to a file, I
got something like this:
3763 1e65 0311 1814 b094 d3e3 25b3 641b
15a1 f146 62d6 ade6 7708 c5ec 1a0d 7395
201c 46e6 65a9 5276 688a 47eb 80e8 617e
4d66 2d82 2677 f74e e664 6220 69fa 1b46
On further research, I figured that these were somehow related to the MP3
headers and data discussed in this wiki:
<http://en.wikipedia.org/wiki/MP3#File_structure>
How can I use this information to extract frequency data of the file?
PS: I specifically want to analyse MP3 files, NOT WAV files. A workaround
would be to convert the MP3 to WAV format and then work on that, as there is a
Python module to handle WAV files. But is there a solution to this problem
without this conversion?
Thanks in advance.
Answer: If you went with .wav files, there is a python standard library that can
handle them (<https://docs.python.org/2/library/wave.html>). I have played
with this in the past and found it quite easy to use.
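As a rough sketch of that .wav route (my addition; it assumes a converted file
named `alice.wav`, 16-bit PCM samples, and that numpy/matplotlib are available),
you can pull the frames out with `wave` and hand them to
`matplotlib.pyplot.specgram` for a frequency-vs-time plot:
import wave
import numpy as np
import matplotlib.pyplot as plt
wav = wave.open('alice.wav', 'rb')               # hypothetical converted file
rate = wav.getframerate()
nchannels = wav.getnchannels()
frames = wav.readframes(wav.getnframes())
wav.close()
samples = np.frombuffer(frames, dtype=np.int16)  # assumes 16-bit samples
if nchannels == 2:
    samples = samples[::2]                       # keep just one channel
plt.specgram(samples, Fs=rate)                   # spectrogram: frequency vs time
plt.xlabel('time [s]')
plt.ylabel('frequency [Hz]')
plt.show()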
For mp3 the mutagen package is an option <https://code.google.com/p/mutagen/>
Also, this SO question my help: [Importing sound files into Python as NumPy
arrays (alternatives to
audiolab)](http://stackoverflow.com/questions/2356779/importing-sound-files-
into-python-as-numpy-arrays-alternatives-to-audiolab)
|
Flask - Setting root directory
Question: I have a folder structure that looks like so:
myproject/
- flask_app/
- app.py
- templates/
- module1.py
- module2/
- db.py
- models/
I'm running into ImportErrors in my Flask application when trying to do the
following:
from module2 import db
# or something else like
import module1
from flask import Flask
app = Flask(__name__)
if __name__ == "__main__":
app.run(debug=True)
This is obviously because the Flask app is running from its own subdirectory
"flask_app". I normally would try and get around this by running it like so:
python -m flask_app.app
But this doesn't seem to solve the problem.
Is there a way I run my flask app from the root folder without resorting to
relative imports (using older version of Python)
Answer: When you run `python -m flask_app.app`, most likely `sys.path` won't include
the `myproject` path.
Easiest would be to put a top-level script in the `myproject` directory; let's
call it `runsite.py`:
from flask_app.app import app
app.run(debug=True)
and run that with Python:
$ python runsite.py
|
Crossplatform sound with Python standard library
Question: I have searched plenty to find an answer, I have seen winsound repeatedly but
that is just for windows and other modules seem to all be outside of the
standard library.
Is there a simple way to play a `.wav` or any sound file using something from
the standard library that will work on most platforms?
Answer: You can find all the information here: <https://wiki.python.org/moin/Audio/>
You can see that there are some built in modules like `wave` and others which
are probably way better and are also platform independent.
If you choose to go with external libraries, you can plenty options like
`PyMedia`, `PySDL`, `wxPython` and many others.
For example, using `PyMedia`:
import pymedia.audio.sound as sound
import wave
f = wave.open('myFile.wav', 'rb')
snd = sound.Output( f.getframerate(), f.getnchannels(), sound.AFMT_S16_LE )
s = f.readframes( 300000 )
snd.play(s)
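If staying inside the standard library is a hard requirement, there is no
single portable playback module; a hedged workaround (assuming `afplay` exists
on OS X and ALSA's `aplay` on Linux) is to branch on the platform:
import sys
import subprocess
def play_wav(path):
    if sys.platform.startswith('win'):
        import winsound
        winsound.PlaySound(path, winsound.SND_FILENAME)
    elif sys.platform == 'darwin':
        subprocess.call(['afplay', path])   # OS X's bundled player
    else:
        subprocess.call(['aplay', path])    # assumes ALSA's aplay is installed
play_wav('myFile.wav')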
|
Python openCV: I get an unchanged image when I use cvtColor
Question: I have an image as a numpy array with the shape (480,640) in grayscale.
I want to lay a colored mask over the image and need to get the image in the
same shape to do it, which is (480,640,3).
Here is what I tried:
print str(img.shape) +' '+ str(type(img)) +' '+ str(img.dtype)
# prints: (480, 640) <type 'numpy.ndarray'> uint8
cv2.cvtColor(img, cv2.COLOR_GRAY2BGR, img, 3)
# this line seems to have no effect although I set it to 3 channels
print str(img.shape) +' '+ str(type(img)) +' '+ str(img.dtype)
# prints: (480, 640) <type 'numpy.ndarray'> uint8
rowCounter = 0
while rowCounter < img.shape[0]:
columnCounter = 0
while columnCounter < img.shape[1]:
if img[rowCounter, columnCounter] == 0:
img[rowCounter, columnCounter] = [0, 0, 0]
else:
img[rowCounter, columnCounter] = [255, 255, 255]
columnCounter += 1
rowCounter += 1
Ok, the code stops on the line where I want to assign the three values ([0, 0,
0]) instead of the single value (0). The error message reads as follows:
ValueError: setting an array element with a sequence.
How can I change from single value to three values? Is there a function I am
not finding?
Thank you!
Answer: The main thing is that you need to assign the converted image to a new name.
I'm not sure if using the c++ format of providing the target image as an
argument works. I would just do it the usual python (cv2) way of assigning to
a name (same name is fine).
Also, you don't need to assign the number of channels. The conversion type
takes care of that.
# cv2.cvtColor(img, cv2.COLOR_GRAY2BGR, img, 3)
color_mask = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
Does that get you the image you want?
By the way, as long as you are using numpy/opencv you probably want to look
into ways of making it more efficient. If you do individual pixel access
across a whole image/numpy array, that's a red flag (for opencv in python).
Below is code that shows the conversion but then ignores that and shows (as I
understand it) how to apply a more efficient mask.
# Copy-Paste Working (and More Efficient) Example
import cv2
import numpy as np
# setup an original image (this will work for anyone without needing to load one)
shape = (480, 640)
img_gray = np.ndarray(shape, dtype=np.uint8)
img_gray.fill(127)
img_gray[0:40, 100:140] = 0 # some "off" values
cv2.imshow('original grayscale image', img_gray)
cv2.waitKey(0) # press any key to continue
# convert the gray image to color (not used. just to demonstrate)
img_color = cv2.cvtColor(img_gray, cv2.COLOR_GRAY2BGR)
cv2.imshow('color converted grayscale image (not used. just to show how to use cvtColor)', img_color)
cv2.waitKey(0) # press any key to continue
# a simplified version of what your code did to apply a mask
# make a white image.
# and then set it to black wherever the original grayscale image is 0
img_color = np.ndarray(img_gray.shape + (3,), dtype=np.uint8)
img_color.fill(255)
cv2.imshow('base color image', img_color)
cv2.waitKey(0) # press any key to continue
# this is not the fastest way, but I think it's more logical until you need more speed
# the fastest way specifically to black out parts of the image would
# be np.bitwise_and(...)
black_points = np.where(img_gray == 0)
print('found {} points to mask'.format(len(black_points[0])))
img_color[black_points] = (0, 0, 0)
# a more efficient and still straightforward method you could use:
img_color[img_gray==0] = (0, 0, 0)
cv2.imshow('masked color image', img_color)
cv2.waitKey(0) # press any key to continue
# clean up explicitly
cv2.destroyAllWindows()
|
PyGame: draw.rect() has invalid parameters
Question: I'm trying to learn mouse events with PyGame, and I'm trying to draw a box
wherever the user clicks. I'm setting a variable equal to
pygame.mouse.get_pos(), and calling individual tuple members according to the
mouse's x, y position. Here's the code:
import pygame, sys
pygame.init()
screen = pygame.display.set_mode((800, 600))
mousepos = pygame.mouse.get_pos()
while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
if event.type == pygame.MOUSEBUTTONDOWN:
pygame.draw.rect(mousepos[0], mousepos[1], 20, 20)
The game starts up, but when I click, it crashes, giving this error:
Traceback (most recent call last):
File "C:\Users\User\Documents\proj\Python\mouse.py", line 13, in <module>
pygame.draw.rect(mousepos[0], mousepos[1], 20, 20)
TypeError: must be pygame.Surface, not int
I know what I'm doing wrong: my parameters for `draw.rect()` are of invalid
types, but I don't know how to change the parameters so they're appropriate.
So how do I fix this?
Answer: Lets take a look at the function definition:
pygame.draw.rect(Surface, Color, Rect, Thickness)
* Surface is a surface where you want to draw
* Color is a tuple with RGB values defining the color to be used
* Rect is a tuple in the format: (x,y,width,height)
* x,y are the coordinates of the upper left hand corner
* width, height are the width and height of the rectangle
* Thickness is the thickness of the line. If it is zero, the rectangle is filled.
Based on this, you should do something like:
redColor = (255,0,0)
pygame.draw.rect(screen, redColor, (mousepos[0], mousepos[1], 20, 20), 1)
## Sources:
The official documentation for `pygame.draw` can be found here:
<http://www.pygame.org/docs/ref/draw.html>
Mind the usefull `Search examples for <function>` button under every function
description, which can lead you to multiple real world examples of usage.
Useful tutorials can also be found on the official pages:
<http://www.pygame.org/wiki/tutorials>
Other unofficial tutorials, like [this
one](http://inventwithpython.com/pygame/chapter2.html), can be found with a
bit of Googling effort.
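Putting that together with the original loop, a minimal working sketch (the
20x20 size and red color are arbitrary choices) could look like this; note the
extra `pygame.display.update()`, without which nothing becomes visible:
import pygame, sys
pygame.init()
screen = pygame.display.set_mode((800, 600))
RED = (255, 0, 0)
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            sys.exit()
        if event.type == pygame.MOUSEBUTTONDOWN:
            mousepos = pygame.mouse.get_pos()   # read the position at click time
            pygame.draw.rect(screen, RED, (mousepos[0], mousepos[1], 20, 20))
    pygame.display.update()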
|
How to find URL containing one word AND another using re in Python?
Question: Say I have two types of links in a html file. I want to filter out all links
with type 1. How would I do it in Python using the `re` module?
Type 1:
http://www.domain.com/firstlevel/02-02-13/secondlevel-slug.html
Type 2:
http://www.domain.com/levelone/02-02-13/secondlevel-slug.html
I want to find all links that contain both `firstlevel` AND `secondlevel`.
This is how I tried it:
import re
text = "here goes the code with various links of type 1 and type 2…"
findURL = re.findall('.*firstlevel.*secondlevel.*',text)
Here is what I thought the regex means:
find all strings that has ONE OR MORE occurances of ANY CHARACTER
followed by the word firstlevel
followed by ONE OR MORE occurances of ANY CHARACTER
followed by the word secondlevel
followed by ONE OR MORE occurances of ANY CHARACTER
However, I get an empty list as a result.
What am I doing wrong?
Answer: You need to anchor the pattern to the start and end of the link, e.g.:
findURL = re.findall('http:.*firstlevel.*secondlevel.*\.html', text)
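One caveat worth adding (my own note, not part of the original answer): `.*` is
greedy and can swallow several links that share a line, so a slightly safer
sketch restricts the match to non-whitespace characters:
import re
text = "... http://www.domain.com/firstlevel/02-02-13/secondlevel-slug.html ..."
findURL = re.findall(r'http://\S*firstlevel\S*secondlevel\S*?\.html', text)
print(findURL)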
|
Why will my unit tests only run on a my apps, not my entire project?
Question: I've just started writing my first unit tests for a django project, but have
come across an issue when attempting to run all the projects tests.
If I run the tests for a specific module/app it works fine:
./manage.py test my_project.api --settings=my_project.settings.test
But when I attempt to run all tests in the project:
./manage.py test my_project --settings=my_project.settings.test
I get an error for every tests.py that's found:
======================================================================
ERROR: my_project.api.tests.tests (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
ImportError: Failed to import test module: my_project.api.tests.tests
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/loader.py", line 252, in _find_tests
module = self._get_module_from_name(name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/loader.py", line 230, in _get_module_from_name
__import__(name)
ImportError: No module named my_project.api.tests.tests
I am using virtualenvwrapper with the following,
python 2.7.1
django 1.6.1
Anyone know why this might be happening?
**EDIT**
This is an abbreviated view of my project structure:
my_project_root
|
|
|--- my_project
| |
| |--- api
| | |
| | ---- tests
| | |
| | --- tests.py
| |--- core
| | |
| | ---- tests
| | |
| | --- tests.py
| --- settings
| |
| ---- test.py
|
|
--- manage.py
Reading the django docs,
"Test discovery is based on the unittest module's built-in test discovery. By
default, this will discover tests in any file named “test*.py” under the
current working directory."
Which is /my_project_root/
Answer: This problem was (partially) solved within the comments, it was a combination
of an environment configuration error and how test discovery works.
Put simply, `__init__.py` should not be present within `my_project_root`.
To make heroku be able to discover `wsgi.py`, the `$PYTHONPATH` environment
variable should be adjusted to include `my_project_root`.
Additionally, you can specify
heroku config:set WSGI_PATH=path/to/my/file/wsgi.py
|
wxPython thread that does not block the GUI?
Question: I am new to wxPython and threading. I am trying to move some code into a
thread so that it no longer blocks my GUI while executing, and I can push
another button to kill the thread. Having done this, however, the code still
blocks my GUI. How can I write this code so that my thread does not block the
GUI?
import wx, sys
import threading
import time
class mywxframe(wx.Frame):
global sizer2, WorkerThread
def __init__(self):
wx.Frame.__init__(self, None)
pnl = wx.Panel(self)
szr = wx.BoxSizer(wx.VERTICAL)
pnl.SetSizer(szr)
szr2 = sizer2(self, pnl)
szr.Add(szr2, 1, wx.ALL | wx.EXPAND, 10)
log = wx.TextCtrl(pnl, -1,style=wx.TE_MULTILINE, size=(300,-1))
szr.Add(log, 1, wx.ALL, 10)
btn3 = wx.Button(pnl, -1, "Stop")
btn3.Bind(wx.EVT_BUTTON, self.OnStop)
szr.Add(btn3, 0, wx.ALL, 10)
redir = RedirectText(log)
sys.stdout=redir
szr.Fit(self)
self.Show()
def sizer2(self, panel):
sizer = wx.BoxSizer(wx.HORIZONTAL)
btn2 = wx.Button(panel, -1, "OK",)
self.Bind(wx.EVT_BUTTON, self.OnStart, btn2)
sizer.Add(btn2, 0, wx.ALL, 10)
return sizer
def WorkerThread(self):
self.dead = False
while (not self.dead):
for i in range(0,10):
print "printing", i
time.sleep(3)
def OnStart(self, event):
our_thread = threading.Thread(target=WorkerThread(self))
our_thread.start()
def OnStop(self, event):
self.dead = True
class RedirectText(object):
def __init__(self, aWxTextCtrl):
self.out=aWxTextCtrl
def write(self, string):
self.out.WriteText(string)
app = wx.PySimpleApp()
frm = mywxframe()
app.MainLoop()
Answer: You've got a couple problems there.
First of all, if you're going to define `sizer2` and `WorkerThread` as
methods, you should use them as methods, not globals.
Next, when you were creating the thread you were calling `WorkerThread` and
passing its return value (None) to the thread. This is where you were blocking
the GUI.
def OnStart(self, event):
our_thread = threading.Thread(target=WorkerThread(self))
our_thread.start()
Instead you should be passing a callable object (`WorkerThread` without the
`()`) to the thread so it will then be able to call it in the context of the
new thread.
Finally, since `self.out.WriteText` manipulates a UI object it must be called
in the context of the GUI thread. Using `wx.CallAfter` is an easy way to do
that.
Here is your example updated with these changes:
import wx, sys
import threading
import time
print wx.version()
class mywxframe(wx.Frame):
def __init__(self):
wx.Frame.__init__(self, None)
pnl = wx.Panel(self)
szr = wx.BoxSizer(wx.VERTICAL)
pnl.SetSizer(szr)
szr2 = self.sizer2(pnl)
szr.Add(szr2, 1, wx.ALL | wx.EXPAND, 10)
log = wx.TextCtrl(pnl, -1,style=wx.TE_MULTILINE, size=(300,-1))
szr.Add(log, 1, wx.ALL, 10)
btn3 = wx.Button(pnl, -1, "Stop")
btn3.Bind(wx.EVT_BUTTON, self.OnStop)
szr.Add(btn3, 0, wx.ALL, 10)
redir = RedirectText(log)
sys.stdout=redir
szr.Fit(self)
self.Show()
def sizer2(self, panel):
sizer = wx.BoxSizer(wx.HORIZONTAL)
btn2 = wx.Button(panel, -1, "OK",)
self.Bind(wx.EVT_BUTTON, self.OnStart, btn2)
sizer.Add(btn2, 0, wx.ALL, 10)
return sizer
def WorkerThread(self):
self.dead = False
while (not self.dead):
for i in range(0,10):
print "printing", i
if self.dead:
break
time.sleep(3)
print 'thread exiting'
def OnStart(self, event):
our_thread = threading.Thread(target=self.WorkerThread)
our_thread.start()
def OnStop(self, event):
self.dead = True
class RedirectText(object):
def __init__(self, aWxTextCtrl):
self.out=aWxTextCtrl
def write(self, string):
wx.CallAfter(self.out.WriteText, string)
app = wx.App()
frm = mywxframe()
app.MainLoop()
|
Push rejected, failed to compile Python app
Question: I'm getting a rejected error when trying to deploy a Django app to Heroku. I
looked at possible solutions here:
1. [Heroku push rejected, failed to compile Python/django app (Python 2.7)](http://stackoverflow.com/questions/13896736/heroku-push-rejected-failed-to-compile-python-django-app-python-2-7)
2. [Error pushing Django project to Heroku](http://stackoverflow.com/questions/14492538/error-pushing-django-project-to-heroku)
But neither worked for me.
This is my flow from initiating the push to heroku:
git push heroku master
Initializing repository, done.
Counting objects: 7024, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (5915/5915), done.
Writing objects: 100% (7024/7024), 8.77 MiB | 104 KiB/s, done.
Total 7024 (delta 2183), reused 0 (delta 0)
-----> Python app detected
-----> No runtime.txt provided; assuming python-2.7.6.
-----> Preparing Python runtime (python-2.7.6)
-----> Installing Setuptools (2.1)
-----> Installing Pip (1.5.4)
-----> Installing dependencies using Pip (1.5.4)
Downloading/unpacking Django==1.6.2 (from -r requirements.txt (line 1))
Downloading/unpacking argparse==1.2.1 (from -r requirements.txt (line 2))
argparse an externally hosted file and may be unreliable
Running setup.py (path:/tmp/pip_build_u16439/argparse/setup.py) egg_info for package argparse
no previously-included directories found matching 'doc/_build'
no previously-included directories found matching 'env24'
no previously-included directories found matching 'env25'
no previously-included directories found matching 'env26'
no previously-included directories found matching 'env27'
Downloading/unpacking distribute==0.6.24 (from -r requirements.txt (line 3))
Running setup.py (path:/tmp/pip_build_u16439/distribute/setup.py) egg_info for package distribute
warning: no files found matching 'Makefile' under directory 'docs'
warning: no files found matching 'indexsidebar.html' under directory 'docs'
Downloading/unpacking dj-database-url==0.3.0 (from -r requirements.txt (line 4))
Downloading dj_database_url-0.3.0-py2.py3-none-any.whl
Downloading/unpacking dj-static==0.0.5 (from -r requirements.txt (line 5))
Downloading dj-static-0.0.5.tar.gz
Running setup.py (path:/tmp/pip_build_u16439/dj-static/setup.py) egg_info for package dj-static
Downloading/unpacking django-toolbelt==0.0.1 (from -r requirements.txt (line 6))
Downloading django-toolbelt-0.0.1.tar.gz
Running setup.py (path:/tmp/pip_build_u16439/django-toolbelt/setup.py) egg_info for package django-toolbelt
Downloading/unpacking gunicorn==18.0 (from -r requirements.txt (line 7))
Running setup.py (path:/tmp/pip_build_u16439/gunicorn/setup.py) egg_info for package gunicorn
Downloading/unpacking psycopg2==2.5.2 (from -r requirements.txt (line 8))
Running setup.py (path:/tmp/pip_build_u16439/psycopg2/setup.py) egg_info for package psycopg2
Downloading/unpacking pystache==0.5.3 (from -r requirements.txt (line 9))
Running setup.py (path:/tmp/pip_build_u16439/pystache/setup.py) egg_info for package pystache
pystache: using: version '2.1' of <module 'setuptools' from '/app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg/setuptools/__init__.pyc'>
Downloading/unpacking static==1.0.2 (from -r requirements.txt (line 10))
Downloading static-1.0.2.tar.gz
Running setup.py (path:/tmp/pip_build_u16439/static/setup.py) egg_info for package static
Installing collected packages: Django, argparse, distribute, dj-database-url, dj-static, django-toolbelt, gunicorn, psycopg2, pystache, static
Running setup.py install for argparse
no previously-included directories found matching 'doc/_build'
no previously-included directories found matching 'env24'
no previously-included directories found matching 'env25'
no previously-included directories found matching 'env26'
no previously-included directories found matching 'env27'
Running setup.py install for distribute
Before install bootstrap.
Scanning installed packages
Setuptools installation detected at /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg
Egg installation
Patching...
Renaming /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg into /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg.OLD.1397160440.32
Patched done.
Relaunching...
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'install' is not defined
Complete output from command /app/.heroku/python/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_u16439/distribute/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-7JPdSe-record/install-record.txt --single-version-externally-managed --compile:
Before install bootstrap.
Scanning installed packages
Setuptools installation detected at /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg
Egg installation
Patching...
Renaming /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg into /app/.heroku/python/lib/python2.7/site-packages/setuptools-2.1-py2.7.egg.OLD.1397160440.32
Patched done.
Relaunching...
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'install' is not defined
----------------------------------------
Cleaning up...
Command /app/.heroku/python/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_u16439/distribute/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-7JPdSe-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_u16439/distribute
Storing debug log for failure in /app/.pip/pip.log
! Push rejected, failed to compile Python app
My `requirements.txt`
Django==1.6.2
argparse==1.2.1
distribute==0.6.24
dj-database-url==0.3.0
dj-static==0.0.5
django-toolbelt==0.0.1
gunicorn==18.0
psycopg2==2.5.2
pystache==0.5.3
static==1.0.2
wsgiref==0.1.2
`Procfile`
web: gunicorn app.wsgi
I've followed the instructions from the Heroku website. Any idea what I'm
missing?
Answer: This is apparently a [bug in the `distribute`
package](https://bitbucket.org/tarek/distribute/issue/91/install-glitch-when-
using-pip-virtualenv), which doesn't seem likely to get fixed:
> Distribute is now considered deprecated and replaced by setuptools. I
> suggest replacing 'distribute==0.6.28' in requirements.txt with
> 'setuptools==1.0' or similar. The latest versions of pip (>=1.4) and
> setuptools (>=0.7) have better support for the unified code and upgrades and
> seek to obviate issues like the one encountered here.
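Concretely, that means removing (or replacing) the `distribute` pin in
`requirements.txt`; a possible fixed version (the setuptools pin here is only
illustrative) would be:
Django==1.6.2
argparse==1.2.1
setuptools==2.1
dj-database-url==0.3.0
dj-static==0.0.5
django-toolbelt==0.0.1
gunicorn==18.0
psycopg2==2.5.2
pystache==0.5.3
static==1.0.2
wsgiref==0.1.2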
|
How to apply SVG Filter to Group using svgwrite with Python?
Question: Does anybody have an example of how to apply SVG filter to a SVG group using
svgwrite?
Here's what I'm trying to do:
import svgwrite
dwg = svgwrite.Drawing('test.svg', profile='full')
grp = dwg.g()
grp.add(dwg.rect(insert=(5,5),size=(20,20)))
filtr = dwg.defs.add( dwg.filter(id="Ga",filterUnits="userSpaceOnUse") )
feGauss = filtr.feGaussianBlur()
grp.filter = feGauss # This does not work
dwg.add(grp)
dwg.save()
The result does not pass filter onto the group as expected.
>>> dwg.tostring()
u'<svg baseProfile="full" height="100%" version="1.1" width="100%" xmlns="http://www.w3.org/2000/svg" xmlns:ev="http://www.w3.org/2001/xml-events" xmlns:xlink="http://www.w3.org/1999/xlink"><defs /><g><rect height="20" width="20" x="5" y="5" /></g></svg>'
Any help is highly appreciated!
Answer: Below is an example of using a simple blur filter on a group with a single
rectangle. More of my svgwrite examples can be found at
<https://docs.google.com/folder/d/0BwFQiTKfux0qY1Y2d1hRdndtSEk/edit>
#!/usr/bin/python3
# License: MIT
import svgwrite
progname = 'example_filter_on_group'
def create_svg(name):
svg_size_w = 900
svg_size_h = 1500
title_size = 20
y = 0
title = name + ': example of filter on a group'
dwg = svgwrite.Drawing(name, (svg_size_w, svg_size_h), debug=True)
# background will be white.
dwg.add(dwg.rect(insert=(0, 0), size=('100%', '100%'), fill='white'))
# create simple filter to blur rectangle
blur6_filter = dwg.defs.add(dwg.filter())
blur6_filter.feGaussianBlur(in_='SourceGraphic', stdDeviation=6)
# group with filter
g_f = dwg.add(dwg.g(filter=blur6_filter.get_funciri()))
g_f.add( dwg.rect(insert=(50, 50), size=(50, 50), fill='aqua'))
dwg.save()
if __name__ == '__main__':
create_svg(progname + '.svg')
* Lawrence
|
Python code not reading file, says file has no lines
Question: I had an assignment due that asked to read a file and display the top 10 lines
of the file, or all of the lines if the file is less than 10 lines long. When
I tried to run my code on my computer, it registered the file as having 0
lines (no matter what file I used) and displayed only a blank line as output.
I would like to understand where I went wrong so I can avoid my mistake for
the next assignment. Any style or other tips are also welcome.
Here is my code:
#Displays the top 10 lines in a file
import sys
# Make sure the input is correct, take file name
if len(sys.argv) == 2:
filename = sys.argv[1]
else:
print("You must start the program with 1 command line parameter.")
quit()
# open file
fle = open(filename, "r")
#Count number of lines
linecount = 0
for line in fle:
linecount = linecount + 1
# If the file is less than 10 lines, print the entire file
# If the file has more than 10 lines, print only first 10.
lines = fle.readlines()
if linecount < 10:
for line in fle:
print(line,)
else:
for i in range(10):
print(lines[i])
Answer: Could it be that
for line in fle:
linecount = linecount + 1
reads every line from the file so that, after the loop is done, there are no
more lines to read from that file in `lines = fle.readlines()`?
Try inserting a `fle.seek(0)` before `lines = fle.readlines()` to "rewind" the
file to the beginning before re-reading it.
(See also [here](http://stackoverflow.com/questions/10255273/iterating-on-a-
file-using-python) for example.)
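Alternatively, a small sketch of the same assignment that sidesteps the
exhausted-iterator problem entirely: read the lines into a list once and slice
it:
import sys
if len(sys.argv) != 2:
    print("You must start the program with 1 command line parameter.")
    sys.exit(1)
with open(sys.argv[1], "r") as fle:
    lines = fle.readlines()   # the file is read exactly once
for line in lines[:10]:       # all lines if there are fewer than 10
    print(line, end="")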
|
RegEx pattern returning all words except those in parenthesis
Question: I have a text of the form:
> können {konnte, gekonnt} Verb
And I want to get a match for all words in it that are not in parenthesis.
That means:
> können = 1st match, Verb = 2nd match
Unfortunately I still haven't got the knack of regular expressions. There are
plenty of places to test them but not much help for creating them unless you
want to read a book.
I will use them in Java or Python.
Answer: In Python you could do this:
import re
regex = re.compile(r'(?:\{.*?\})?([^{}]+)', re.UNICODE)
print 'Matches: %r' % regex.findall(u'können {konnte, gekonnt} Verb')
Result:
Matches: [u'können ', u' Verb']
Although I would recommend simply replacing everything between `{` and `}`
like so:
import re
regex = re.compile(r'\{.*?\}', re.UNICODE)
print 'Output string: %r' % regex.sub('', u'können {konnte, gekonnt} Verb')
Result:
Output string: u'können Verb'
|
Biopython Large Sequence splitting
Question: I'm a newbie in the field of Python programming. As I was trying to do some
analysis (I've tried to find the answer in other posts, but found nothing), I decided
to post my first and probably very foolish question. Why does this create only
one output file although in this example there were supposed to be at least 8
(the sequence is more than 8000 characters long)? Thank you for your answer upfront.
def batch_iterator(iterator, batch_size) :
entry = True
while entry :
batch = []
while len(batch) < batch_size :
try :
entry = iterator.next()
except StopIteration :
entry = None
if entry is None :
#End of file
break
batch.append(entry)
if batch :
yield batch
from Bio import SeqIO
record_iter = SeqIO.parse(open("some.fasta"),"fasta")
for i, batch in enumerate(batch_iterator(record_iter, 1000)) : #I think sth is wrong here?
filename = "group_%i.fasta" % (i+1)
handle = open(filename, "w")
count = SeqIO.write(batch, handle, "fasta")
handle.close()
print "Wrote %i records to %s" % (count, filename)
Answer: ### Sequence chunks
After a long discussion with the OP, here is my very restructured proposal,
using the generator function defined in [this other SO
thread](http://stackoverflow.com/questions/312443/how-do-you-split-a-list-
into-evenly-sized-chunks-in-python)
# file: main.py
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
def chunks(l, n):
"""Yield successive n-sized chunks from l."""
for i in xrange(0, len(l), n):
yield l[i:i+n]
if __name__ == '__main__':
handle = open('long.fasta', 'r')
records = list(SeqIO.parse(handle, "fasta"))
record = records[0]
for pos, chunk in enumerate(chunks(record.seq.tostring(), 1000)):
chunk_record = SeqRecord(Seq(
chunk, record.seq.alphabet),
id=record.id, name=record.name,
description=record.description)
outfile = "group_%d.fasta" % pos
SeqIO.write(chunk_record, open(outfile, 'w'), "fasta")
Note that your original code does something very different: it takes **new
records** from the generator provided by the `SeqIO.parse` function, and tries
to store them in different files. If you want to split **a single record** in
smaller sub-sequences, you have to access the record's internal data, which is
done by `record.seq.tostring()`. The `chunks` generator function, as described
in the other thread linked above, returns as many chunks as is possible to
build from the passed in sequence. Each of them is stored as a new fasta
record in a different file (if you want to keep just the sequence, write the
`chunk` directly to the opened `outfile`).
### Check that it works
Consider the following code:
# file: generate.py
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Alphabet import IUPAC
from Bio import SeqIO
long_string = "A" * 8000
outfile = open('long.fasta', 'w')
record = SeqRecord(Seq(
long_string,
IUPAC.protein),
id="YP_025292.1", name="HokC",
description="toxic membrane protein, small")
SeqIO.write(record, outfile, "fasta")
It writes a single record to a file named "long.fasta". This single record has
a Sequence inside that is 8000 characters long, as generated in `long_string`.
How to use it:
$ python generate.py
$ wc -c long.fasta
8177 long.fasta
The overhead over 8000 characters is the file header.
How to split that file in chunks of 1000 length each, with the code snippet
above:
$ python main.py
$ ls
generate.py group_1.fasta group_3.fasta group_5.fasta group_7.fasta main.py
group_0.fasta group_2.fasta group_4.fasta group_6.fasta long.fasta
$ wc -c group_*
1060 group_0.fasta
1060 group_1.fasta
1060 group_2.fasta
1060 group_3.fasta
1060 group_4.fasta
1060 group_5.fasta
1060 group_6.fasta
1060 group_7.fasta
8480 total
|
Max retries exceeded with URL
Question: I'm trying to get the content of this url
"<https://itunes.apple.com/in/genre/ios-business/id6000?mt=8>" and its showing
this error
Traceback (most recent call last):
File "/home/preetham/Desktop/eg.py", line 17, in <module>
page1 = requests.get(ap)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 383, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 486, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 378, in send
raise ConnectionError(e)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='itunes.apple.com', port=443): Max retries exceeded with url: /in/app/adobe-reader/id469337564?mt=8 (Caused by <class 'socket.gaierror'>: [Errno -2] Name or service not known)
the code is
url="https://itunes.apple.com/in/genre/ios-business/id6000?mt=8"
page = requests.get(url)
tree = html.fromstring(page.text)
flist=[]
plist=[]
for i in range(0,100):
app = tree.xpath("//div[@class='column first']/ul/li/a/@href")
ap=app[0]
page1 = requests.get(ap)
when I try the range with (0,2) it works but when I put the range in 100's it
shows this error.
Answer: What happened here is that the **itunes** server refused your connection (you're
sending too many requests from the same IP address in a short period of time).
> Max retries exceeded with url: /in/app/adobe-reader/id469337564?mt=8
The error trace is misleading; it should be something like **"No connection could
be made because the target machine actively refused it"**.
There is an issue at about python.requests lib at Github, check it out
[here](https://github.com/kennethreitz/requests/issues/1198)
To overcome this issue (not so much an issue as it is misleading debug trace)
you should catch connection related exceptions like so:
try:
    page1 = requests.get(ap)
except requests.exceptions.ConnectionError:
    page1 = None  # the server refused this request; skip it or retry later
Another way to overcome this problem is to leave a big enough time gap between
requests to the server, which can be achieved with the `sleep(timeinsec)` function in
python (don't forget to import sleep):
from time import sleep
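Combining both ideas, a small sketch (the 5-second back-off and 1-second pause
are arbitrary values) could look like:
from time import sleep
import requests
for ap in app:   # 'app' is the list of hrefs collected in the question's code
    try:
        page1 = requests.get(ap)
    except requests.exceptions.ConnectionError:
        sleep(5)                 # back off, then retry this URL once
        page1 = requests.get(ap)
    sleep(1)                     # pause between requests so the server doesn't throttle you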
All in all requests is awesome python lib, hope that solves your problem.
|
how to read "\n\n" in python module pandas?
Question: There is a data file which has `\n\n` at the end of every line.
<http://pan.baidu.com/s/1o6jq5q6>
My system:win7+python3.3+R-3.0.3
In R
sessionInfo()
[1] LC_COLLATE=Chinese (Simplified)_People's Republic of China.936
[2] LC_CTYPE=Chinese (Simplified)_People's Republic of China.936
[3] LC_MONETARY=Chinese (Simplified)_People's Republic of China.936
[4] LC_NUMERIC=C
[5] LC_TIME=Chinese (Simplified)_People's Republic of China.936
In python: chcp 936
I can read it in R.
read.table("test.pandas",sep=",",header=TRUE)
It is so simple.
and I can read it in python to get almost same output.
fr=open("g:\\test.pandas","r",encoding="gbk").read()
data=[x for x in fr.splitlines() if x.strip() !=""]
for id,char in enumerate(data):
print(str(id)+","+char)
When i read it in python module pandas,
import pandas as pd
pd.read_csv("test.pandas",sep=",",encoding="gbk")
I found two problems in the output:
1) how to get right alignment (the problem I have asked in another post):
[how to set alignment in pandas in
python](http://stackoverflow.com/questions/23008636/how-to-set-alignment-in-
pandas-in-python)
2) there is a NaN line after every row of real data.
Can I improve my pandas code to get a better display in the console?



Answer: Your file when read with `open('test.pandas', 'rb')` seems to contain '\r\r\n'
as its line terminators. Python 3.3 does seem to convert this to '\n\n' while
Python 2.7 converts it to '\r\n' when read with `open('test.pandas', 'r',
encoding='gbk')`.
[pandas.read_csv](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.read_csv.html?highlight=lineterminator) does have
a lineterminator parameter but it only accepts single character terminators,
so that doesn't help us.
What you can do is process the file a bit before passing it to
`pandas.read_csv()`, and you can use
[StringIO](https://docs.python.org/3.3/library/io.html?#io.StringIO) which
will wrap a string buffer in a file interface so that you don't need to write
out a temporary file first.
import pandas as pd
from io import StringIO
contents = open('test.pandas', 'r', encoding='gbk').read()
contents = contents.replace('\n\n', '\n')
df = pd.read_csv(StringIO(contents))
(I don't have the GBK charset for the output below.)
>>> df[0:10]
??????? ??? ????????
0 HuangTianhui ?? 1948/05/28
1 ?????? ? 1952/03/27
2 ??? ? 1994/12/09
3 LuiChing ? 1969/08/02
4 ???? ?? 1982/03/01
5 ???? ?? 1983/08/03
6 YangJiabao ? 1988/08/25
7 ?????????????? ?? 1979/07/10
8 ?????? ? 1949/10/20
9 ???»? ? 1951/10/21
In Python 2.7 `StringIO()` was in module `StringIO` instead of `io`.
|
Check if string contains a certain amount of words of another string
Question: Say we have a string 1 `A B C D E F` and a string 2 `B D E` (The letters are
just for demo, in reality they are words). Now I would like to find out if
there are any `n` conscutive "words" from string 2 in string 1. To convert the
string to "words", I'd use `string.split()`.
For example for `n` equals 2, I would like to check whether `B D` or `D E` is
- **in this order** \- in string 1. `B D` is not in this order in the string,
but `D E` is.
Does anyone see a pythonic way of doing this?
I do have a solution for `n` equals 2 but realized that I need it for
arbitrary n. Also it is not particularly beautiful:
def string_contains_words_of_string(words_str, words_to_check_str):
words = words_str.split()
words_to_check = words_to_check_str.split()
found_word_index = None
for word in words:
start = 0 if found_word_index is None else found_word_index + 1
for i, word_to_check in enumerate(words_to_check[start:]):
if word_to_check == word:
if found_word_index is not None:
return True
found_word_index = i
break
else:
found_word_index = None
return False
Answer: This is easy with a regex:
>>> import re
>>> st1='A B C D E F'
>>> st2='B D E'
>>> n=2
>>> pat=r'(?=({}))'.format(r'\s+'.join(r'\w+' for i in range(n)))
>>> print [(s, s in st1) for s in re.findall(pat, st2)]
[('B D', False), ('D E', True)]
The key is to use a zero width look ahead to find overlapping matches in the
string. So:
>>> re.findall('(?=(\\w+\\s+\\w+))', 'B D E')
['B D', 'D E']
Now build that for `n` repetitions of the word found by `\w+` with:
>>> n=2
>>> r'(?=({}))'.format(r'\s+'.join(r'\w+' for i in range(n)))
'(?=(\\w+\\s+\\w+))'
Now since you have two strings, use Python's `in` operator to produce a tuple
pairing each regex match `s` with whether it occurs in the target string.
* * *
Of course if you want a non-regex way to do this, just produce the substrings n
words at a time:
>>> li=st2.split()
>>> n=2
>>> [(s, s in st1) for s in (' '.join(li[i:i+n]) for i in range(len(li)-n+1))]
[('B D', False), ('D E', True)]
And if you want the index (either method) you can use
[str.find](https://docs.python.org/2/library/stdtypes.html#str.find):
>>> [(s, st1.find(s)) for s in (' '.join(li[i:i+n]) for i in range(len(li)-n+1))
... if s in st1]
[('D E', 6)]
* * *
For regex that goes word by word, make sure you use a word boundary anchor:
>>> st='wordW wordX wordY wordZ'
>>> re.findall(r'(?=(\b\w+\s\b\w+))', st)
['wordW wordX', 'wordX wordY', 'wordY wordZ']
|
Access to ctypes **argv from binary file through Python
Question: I have the following struct output in a binary file from hashcat restore file:
typedef struct
{
uint32_t version_bin;
char cwd[256];
uint32_t argc;
char **argv;
uint32_t pid;
uint32_t devices_cnt;
uint32_t dictpos;
uint32_t maskpos;
uint64_t *pw_off;
uint64_t *pw_num;
uint64_t pw_cur;
uint32_t digests_cnt;
uint32_t digests_done;
uint *digests_shown;
uint32_t salts_cnt;
uint32_t salts_done;
uint *salts_shown;
float ms_running;
} restore_data_t;
I'm trying to import the raw data and parse it with a Python script using the
ctypes data structure as follows:
class RestoreStruct(Structure):
_fields_ = [
("version_bin", c_uint32),
("cwd", c_char*256),
("argc", c_uint32),
("argv", POINTER(POINTER(c_char))),
("pid", c_uint32),
("devices_cnt", c_uint32),
("dictpos", c_uint32),
("maskpos", c_uint32),
("pw_off", POINTER(c_uint64)),
("pw_NUM", POINTER(c_uint64)),
("pw_CUR", c_uint64),
("digests_cnt", c_uint32),
("digests_done", c_uint32),
("digests_shown", POINTER(c_uint32)),
("salts_cnt", c_uint32),
("salts_done", c_uint32),
("salts_shown", POINTER(c_uint*30)),
("ms_running", c_float)
]
with open("cudaHashcat.restore", "rb") as restore_file:
status = []
struct = RestoreStruct()
while restore_file.readinto(struct) == sizeof(struct):
status.append((struct.version_bin, struct.cwd, struct.argc, struct.argv, \
struct.pid, struct.devices_cnt, struct.dictpos, struct.maskpos, struct.pw_off, \
struct.pw_NUM, struct.pw_CUR, struct.digests_cnt, struct.digests_done, struct.digests_shown, \
struct.salts_cnt, struct.salts_done, struct.salts_shown, struct.ms_running))
print struct._fields_[0][0], status[0][0]
print struct._fields_[1][0], status[0][1]
print struct._fields_[2][0], status[0][2]
print struct._fields_[3][0], status[0][3]
print struct._fields_[4][0], status[0][4]
print struct._fields_[5][0], status[0][5]
print struct._fields_[6][0], status[0][6]
print struct._fields_[7][0], status[0][7]
print struct._fields_[8][0], status[0][8]
print struct._fields_[9][0], status[0][9]
print struct._fields_[10][0], status[0][10]
print struct._fields_[11][0], status[0][11]
print struct._fields_[12][0], status[0][12]
print struct._fields_[13][0], status[0][13]
print struct._fields_[14][0], status[0][14]
print struct._fields_[15][0], status[0][15]
print struct._fields_[16][0], status[0][16]
print struct._fields_[17][0], status[0][17]
The issue I'm having is how to access the data behind the ctypes pointers
(argv, pw_off, etc.)? I've tried "contents", but I get a "NULL pointer access"
error. argv should be an array of char arrays, and the others should be a
simple pointer to an int.
How would I access the actual data that the pointers are addressing? Am I
completely off on how I'm going about this?
Here is a base64 encoded version of the restore file:
> ZQAAAEU6XG9jbEhhc2hjYXQtMS4wMQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
> AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
> AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
> AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
> AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAJAAAACBdvAHwiAAABAAAAAAAAAAAAAAAA
> AAAAAAAAAAAAAAABACgBAAAAAA8AAAAFAAAAAAAAAAEAAAAAAAAAAAAAAEDPnkYAAAAAY3VkYUhh
> c2hjYXQzMgotbQoxMDAwCi1hCjAKLXIKcnVsZXNccmljaF9wd19ydWxlcy5ydWxlCi4uXHBsYWlu
> X3RleHRfaGFzaC50eHQKLi5cMTZfV2Fsa19taW4udHh0Cv//JwEAAAAAAAAIAAAAAAAAAAAAAQAA
> AAAAAAABAAAAAQAAAAAAAAAAAAAAAAAAAAEAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
As pointed out in the comments it looks like the argv values are stored in the
file as plain text. My issue is still how to read this information into a
struct properly. All of the other values in the restore struct are read in and
stored correctly in the Python object by the code above, but any pointer
member doesn't read in as easily.
Answer: Taking eryksun's advice here what I did to get the argv values.
class RestoreStruct(Structure):
_fields_ = [
("version_bin", c_uint32),
("cwd", c_char*256),
("argc", c_uint32),
("argv", POINTER(POINTER(c_char))),
("pid", c_uint32),
("devices_cnt", c_uint32),
("dictpos", c_uint32),
("maskpos", c_uint32),
("pw_off", POINTER(c_uint64)),
("pw_NUM", POINTER(c_uint64)),
("pw_CUR", c_uint64),
("digests_cnt", c_uint32),
("digests_done", c_uint32),
("digests_shown", POINTER(c_uint32)),
("salts_cnt", c_uint32),
("salts_done", c_uint32),
("salts_shown", POINTER(c_uint*30)),
("ms_running", c_float)
]
with open("cudaHashcat.restore", "rb") as restore_file:
status = []
struct = RestoreStruct()
restore_file.readinto(struct)
rest = restore_file.read()
print struct.version_bin
# Print rest of variables that are not pointers
print rest.splitlines()[0:struct.argc] # Prints a list structure of argv values
|