Django template not loaded when deploy on openshift
Question: I am totally new to OpenShift and have been following various step-by-step guides.
I was able to get Django 1.6, Python 2.7 and Mezzanine 3.0.9 up with the application
working - partially. For some reason, the template is not loaded, both when the
template is part of an HTML include tag and when it is rendered from views.py.
When tailing access-log, I cannot see errors or anything to go by. The
settings.py has
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
PROJECT_DIRNAME = PROJECT_ROOT.split(os.sep)[-1]
TEMPLATE_DIRS = (os.path.join(PROJECT_ROOT, "templates"),)
It seems unable to find the path of the template files, but I don't know
why, as the value of TEMPLATE_DIRS seems to be correct when I check it.
Everything works OK on my local machine but not on OpenShift. Any
pointers are much appreciated, as I have been googling and searching around for a
few days and still get nowhere.
Thank you.
EDIT: I decided to turn on DEBUG mode, and that made things a lot clearer to
investigate. It turns out that without providing an absolute module name when
importing a method, the application just fails, although this is not the case
on my local machine. E.g. instead of writing from projectname.appname.view
import some_function I was putting from appname.view import some_function.
Silly me. That taught me a good few days' lesson!
Answer: The problem is resolved by providing a full module name in the import statement,
e.g.
from projectname.appname.view import some_function
I had the following, and that caused the issue:
from appname.view import some_function
Once a full module name is provided, it all works perfectly.
Thank you.
|
pandas sparse data frame value_counts not working
Question: I am encountering a TypeError with a pandas sparse data frame when I use the
value_counts method. I have listed the versions of the packages that I am
using.
Any suggestions on how to make this work ?
Thanks in advance. Also, please let me know if any more information is needed.
Python 2.7.6 |Anaconda 1.9.1 (x86_64)| (default, Jan 10 2014, 11:23:15)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas
>>> print pandas.__version__
0.13.1
>>> import numpy
>>> print numpy.__version__
1.8.0
>>> dense_df = pandas.DataFrame(numpy.zeros((10, 10))
,columns=['x%d' % ix for ix in range(10)])
>>> dense_df['x5'] = [1.0, 0.0, 0.0, 1.0, 2.1, 3.0, 0.0, 0.0, 0.0, 0.0]
>>> print dense_df['x5'].value_counts()
0.0 6
1.0 2
3.0 1
2.1 1
dtype: int64
>>> sparse_df = dense_df.to_sparse(fill_value=0) # Tried fill_value=0.0 also
>>> print sparse_df.density
0.04
>>> print sparse_df['x5'].value_counts()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "//anaconda/lib/python2.7/site-packages/pandas/core/series.py", line 1156, in value_counts
normalize=normalize, bins=bins)
File "//anaconda/lib/python2.7/site-packages/pandas/core/algorithms.py", line 231, in value_counts
values = com._ensure_object(values)
File "generated.pyx", line 112, in pandas.algos.ensure_object (pandas/algos.c:38788)
File "generated.pyx", line 117, in pandas.algos.ensure_object (pandas/algos.c:38695)
File "//anaconda/lib/python2.7/site-packages/pandas/sparse/array.py", line 377, in astype
raise TypeError('Can only support floating point data for now')
TypeError: Can only support floating point data for now
Answer: This is not implemented ATM, convert to dense first.
In [12]: sparse_df['x5'].to_dense().value_counts()
Out[12]:
0.0 6
1.0 2
3.0 1
2.1 1
dtype: int64
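If converting a very large column to dense is too costly, a rough workaround
(my sketch, assuming the 0.13-era `SparseSeries` attributes `sp_values` and
`fill_value`) is to count only the stored values and then add the fill count
back in:
    In [13]: s = sparse_df['x5']
    In [14]: counts = pandas.Series(s.sp_values).value_counts()
    In [15]: counts[s.fill_value] = counts.get(s.fill_value, 0) + (len(s) - len(s.sp_values))
`counts` then holds the same totals as the dense version, just not necessarily
in the same order.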
|
python - ryu handling packets using a switch after flow was added to switch
Question: I'm using a Ryu OpenFlow controller switch written in Python to monitor
packets in my virtual Mininet network. I have 3 hosts and I'm blocking traffic
from host2 to host3 and from host3 to host2. Other packets are added to the
switch flow table. My problem is that after a flow is added, if there is a
packet between 2 hosts that already have a rule in the switch's flow table, my
event doesn't trigger. For example, if the switch saw a packet from host1 to
host2, it is legal, so the flow is added to the table; but if another packet
from host1 to host2 is sent, it won't go through the method again. I looked in
the Ryu guides but didn't find anything covering the case where a flow has already
been added to the switch flow table. How can I catch those packets?
Thanks in advance.
Here's my code:
import logging
import struct
from ryu.base import app_manager
from ryu.controller import mac_to_port
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls
from ryu.ofproto import ofproto_v1_0
from ryu.lib.mac import haddr_to_str
class SimpleSwitch(app_manager.RyuApp):
OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]
counterTraffic=0
def __init__(self, *args, **kwargs):
super(SimpleSwitch, self).__init__(*args, **kwargs)
self.mac_to_port = {}
def add_flow(self, datapath, in_port, dst, actions):
ofproto = datapath.ofproto
wildcards = ofproto_v1_0.OFPFW_ALL
wildcards &= ~ofproto_v1_0.OFPFW_IN_PORT
wildcards &= ~ofproto_v1_0.OFPFW_DL_DST
match = datapath.ofproto_parser.OFPMatch(
wildcards, in_port, 0, dst,
0, 0, 0, 0, 0, 0, 0, 0, 0)
mod = datapath.ofproto_parser.OFPFlowMod(
datapath=datapath, match=match, cookie=0,
command=ofproto.OFPFC_ADD, idle_timeout=0, hard_timeout=0,
priority=ofproto.OFP_DEFAULT_PRIORITY,
flags=ofproto.OFPFF_SEND_FLOW_REM, actions=actions)
datapath.send_msg(mod)
@set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
def _packet_in_handler(self, ev):
print("Im in main function")
msg = ev.msg
datapath = msg.datapath
ofproto = datapath.ofproto
dst, src, _eth_type = struct.unpack_from('!6s6sH', buffer(msg.data), 0)
dpid = datapath.id
self.mac_to_port.setdefault(dpid, {})
self.logger.info("packet in %s %s %s %s",
dpid, haddr_to_str(src), haddr_to_str(dst),
msg.in_port)
if (haddr_to_str(dst) == "00:00:00:00:00:01"):
print "dst"
self.counterTraffic +=1
if not ((haddr_to_str(src) == "00:00:00:00:00:02" and haddr_to_str(dst) =="00:00:00:00:00:03")or (haddr_to_str(src) == "00:00:00:00:00:03" and haddr_to_str(dst) =="00:00:00:00:00:02")):
# learn a mac address to avoid FLOOD next time.
print("after condition")
self.mac_to_port[dpid][src] = msg.in_port
if dst in self.mac_to_port[dpid]:
out_port = self.mac_to_port[dpid][dst]
else:
out_port = ofproto.OFPP_FLOOD
actions = [datapath.ofproto_parser.OFPActionOutput(out_port)]
# install a flow to avoid packet_in next time
if out_port != ofproto.OFPP_FLOOD:
self.add_flow(datapath, msg.in_port, dst, actions)
out = datapath.ofproto_parser.OFPPacketOut(
datapath=datapath, buffer_id=msg.buffer_id, in_port=msg.in_port,
actions=actions)
datapath.send_msg(out)
if (haddr_to_str(src) == "00:00:00:00:00:01"):
print "src"
self.counterTraffic +=1
print(self.counterTraffic)
@set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
def _port_status_handler(self, ev):
msg = ev.msg
reason = msg.reason
port_no = msg.desc.port_no
ofproto = msg.datapath.ofproto
if reason == ofproto.OFPPR_ADD:
self.logger.info("port added %s", port_no)
elif reason == ofproto.OFPPR_DELETE:
self.logger.info("port deleted %s", port_no)
elif reason == ofproto.OFPPR_MODIFY:
self.logger.info("port modified %s", port_no)
else:
self.logger.info("Illeagal port state %s %s", port_no, reason)
Answer: > I tried to print haddr_to_str(src), haddr_to_str(dst) and got
> 00:00:00:00:00:03 and ff:ff:ff:ff:ff:ff which is not what I expected. I
> wanted to get 2 as source and 3 as dest.
The short story is that you're decoding the destination mac address
correctly... However, IP must ARP to resolve mac addresses, which is why you
see `ff:ff:ff:ff:ff:ff`... those are just the ARP frames in the [ryu
controller](https://github.com/osrg/ryu).
I built a complete controller, which decodes up to the IPv4 layer, below...
## Updated ryu switch packet decoder
You've been decoding raw `structs`, but it's much easier to use the [ryu
Packet library](http://ryu.readthedocs.org/en/latest/library_packet.html)
instead of unpacking a raw `struct` of the packet. This is my very quick
replacement of `_packet_in_handler()`, which just prints out the source and
destination mac addresses, as well as the upper layer protocols...
from ryu.lib.packet import packet, ethernet, arp, ipv4
import array
@set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
def _packet_in_handler(self, ev):
### Mike Pennington's logging modifications
## Set up to receive the ethernet src / dst addresses
pkt = packet.Packet(array.array('B', ev.msg.data))
eth_pkt = pkt.get_protocol(ethernet.ethernet)
arp_pkt = pkt.get_protocol(arp.arp)
ip4_pkt = pkt.get_protocol(ipv4.ipv4)
if arp_pkt:
pak = arp_pkt
elif ip4_pkt:
pak = ip4_pkt
else:
pak = eth_pkt
self.logger.info(' _packet_in_handler: src_mac -> %s' % eth_pkt.src)
self.logger.info(' _packet_in_handler: dst_mac -> %s' % eth_pkt.dst)
self.logger.info(' _packet_in_handler: %s' % pak)
self.logger.info(' ------')
src = eth_pkt.src # Set up the src and dst variables so you can use them
dst = eth_pkt.dst
## Mike Pennington's modifications end here
msg = ev.msg
datapath = msg.datapath
ofproto = datapath.ofproto
dpid = datapath.id
self.mac_to_port.setdefault(dpid, {})
# learn a mac address to avoid FLOOD next time.
self.mac_to_port[dpid][src] = msg.in_port
if dst in self.mac_to_port[dpid]:
out_port = self.mac_to_port[dpid][dst]
else:
out_port = ofproto.OFPP_FLOOD
actions = [datapath.ofproto_parser.OFPActionOutput(out_port)]
# install a flow to avoid packet_in next time
if out_port != ofproto.OFPP_FLOOD:
self.add_flow(datapath, msg.in_port, dst, actions)
out = datapath.ofproto_parser.OFPPacketOut(
datapath=datapath, buffer_id=msg.buffer_id, in_port=msg.in_port,
actions=actions)
datapath.send_msg(out)
Now, whenever an ethernet packet is sent, you'll see this inside your mininet
session...
_packet_in_handler: src_mac -> 00:00:00:00:00:03
_packet_in_handler: dst_mac -> 33:33:00:00:00:02
_packet_in_handler: ethernet(dst='33:33:00:00:00:02',ethertype=34525,src='00:00:00:00:00:03')
------
ARP packets look like this...
_packet_in_handler: src_mac -> 00:00:00:00:00:01
_packet_in_handler: dst_mac -> ff:ff:ff:ff:ff:ff
_packet_in_handler: arp(dst_ip='10.0.0.2',dst_mac='00:00:00:00:00:00',hlen=6,hwtype=1,opcode=1,plen=4,proto=2048,src_ip='10.0.0.1',src_mac='00:00:00:00:00:01')
------
## Demo
Assume I saved the modified code above (including other parts of your source)
as `ne_question.py`.
* First I set up the [ryu controller](https://github.com/osrg/ryu) in [mininet](http://mininet.org/):
root@mininet-vm:/home/mininet# ryu-manager ne_question.py &
[1] 14073
loading app ne_question.py
loading app ryu.controller.ofp_handler
instantiating app ryu.controller.ofp_handler of OFPHandler
instantiating app ne_question.py of SimpleSwitch
root@mininet-vm:/home/mininet#
* Next I build the switch topology, as you mentioned in your comment
root@mininet-vm:/home/mininet# mn --topo single,3 --mac --switch ovsk --controller remote
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller
*** Starting 1 switches
s1
*** Starting CLI:
mininet> _packet_in_handler: src_mac -> 00:00:00:00:00:02
_packet_in_handler: dst_mac -> 33:33:00:00:00:02
_packet_in_handler: ethernet(dst='33:33:00:00:00:02',ethertype=34525,src='00:00:00:00:00:02')
------
_packet_in_handler: src_mac -> 00:00:00:00:00:01
_packet_in_handler: dst_mac -> 33:33:00:00:00:02
_packet_in_handler: ethernet(dst='33:33:00:00:00:02',ethertype=34525,src='00:00:00:00:00:01')
------
_packet_in_handler: src_mac -> 00:00:00:00:00:03
_packet_in_handler: dst_mac -> 33:33:00:00:00:02
_packet_in_handler: ethernet(dst='33:33:00:00:00:02',ethertype=34525,src='00:00:00:00:00:03')
------
* Finally, I run the web server and try to pull a page... notice that ARPs are sent to resolve the destination address of the HTTP GET. The destination address of the ARPs is `ff:ff:ff:ff:ff:ff`. BTW, if you change the `wget` to `h2 wget h1`, everything works correctly...
mininet> h1 python -m SimpleHTTPServer 80 &
mininet> h2 wget -O - h1
--2014-03-28 04:22:25-- http://10.0.0.1/
Connecting to 10.0.0.1:80... _packet_in_handler: src_mac -> 00:00:00:00:00:02
_packet_in_handler: dst_mac -> ff:ff:ff:ff:ff:ff
_packet_in_handler: arp(dst_ip='10.0.0.1',dst_mac='00:00:00:00:00:00',hlen=6,hwtype=1,opcode=1,plen=4,proto=2048,src_ip='10.0.0.2',src_mac='00:00:00:00:00:02')
------
--2014-03-28 04:00:58-- http://10.0.0.1/
Connecting to 10.0.0.1:80... _packet_in_handler: src_mac -> 00:00:00:00:00:02
_packet_in_handler: dst_mac -> 00:00:00:00:00:01
_packet_in_handler: ipv4(csum=33886,dst='10.0.0.1',flags=2,header_length=5,identification=41563,offset=0,option=None,proto=6,src='10.0.0.2',tos=0,total_length=60,ttl=64,version=4)
------
_packet_in_handler: src_mac -> 00:00:00:00:00:01
_packet_in_handler: dst_mac -> 00:00:00:00:00:02
_packet_in_handler: ipv4(csum=9914,dst='10.0.0.2',flags=2,header_length=5,identification=0,offset=0,option=None,proto=6,src='10.0.0.1',tos=0,total_length=60,ttl=64,version=4)
------
connected.
HTTP request sent, awaiting response... _packet_in_handler: src_mac -> 00:00:00:00:00:02
_packet_in_handler: dst_mac -> 00:00:00:00:00:01
_packet_in_handler: ipv4(csum=33893,dst='10.0.0.1',flags=2,header_length=5,identification=41564,offset=0,option=None,proto=6,src='10.0.0.2',tos=0,total_length=52,ttl=64,version=4)
------
_packet_in_handler: src_mac -> 00:00:00:00:00:02
_packet_in_handler: dst_mac -> 00:00:00:00:00:01
_packet_in_handler: ipv4(csum=33784,dst='10.0.0.1',flags=2,header_length=5,identification=41565,offset=0,option=None,proto=6,src='10.0.0.2',tos=0,total_length=160,ttl=64,version=4)
------
_packet_in_handler: src_mac -> 00:00:00:00:00:01
_packet_in_handler: dst_mac -> 00:00:00:00:00:02
_packet_in_handler: ipv4(csum=61034,dst='10.0.0.2',flags=2,header_length=5,identification=14423,offset=0,option=None,proto=6,src='10.0.0.1',tos=0,total_length=52,ttl=64,version=4)
------
_packet_in_handler: src_mac -> 00:00:00:00:00:01
_packet_in_handler: dst_mac -> 00:00:00:00:00:02
_packet_in_handler: ipv4(csum=61016,dst='10.0.0.2',flags=2,header_length=5,identification=14424,offset=0,option=None,proto=6,src='10.0.0.1',tos=0,total_length=69,ttl=64,version=4)
------
_packet_in_handler: src_mac -> 00:00:00:00:00:01
_packet_in_handler: dst_mac -> 00:00:00:00:00:02
_packet_in_handler: ipv4(csum=60037,dst='10.0.0.2',flags=2,header_length=5,identification=14425,offset=0,option=None,proto=6,src='10.0.0.1',tos=0,total_length=1047,ttl=64,version=4)
------
200 OK
Length: 858 [text/html]
Saving to: `STDOUT'
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bash_history">.bash_history</a>
<li><a href=".bash_logout">.bash_logout</a>
<li><a href=".bashrc">.bashrc</a>
<li><a href=".cache/">.cache/</a>
<li><a href=".gitconfig">.gitconfig</a>
<li><a href=".profile">.profile</a>
<li><a href=".rnd">.rnd</a>
<li><a href=".wireshark/">.wireshark/</a>
<li><a href="install-mininet-vm.sh">install-mininet-vm.sh</a>
<li><a href="mininet/">mininet/</a>
<li><a href="ne_question.py">ne_question.py</a>
<li><a href="ne_question.pyc">ne_question.pyc</a>
<li><a href="of-dissector/">of-dissector/</a>
<li><a href="oflops/">oflops/</a>
<li><a href="oftest/">oftest/</a>
<li><a href="openflow/">openflow/</a>
<li><a href="pox/">pox/</a>
</ul>
<hr>
</body>
</html>
0K 100% 161M=0s
2014-03-28 04:00:58 (161 MB/s) - written to stdout [858/858]
_packet_in_handler: src_mac -> 00:00:00:00:00:02
mininet> _packet_in_handler: dst_mac -> 00:00:00:00:00:01
_packet_in_handler: ipv4(csum=33891,dst='10.0.0.1',flags=2,header_length=5,identification=41566,offset=0,option=None,proto=6,src='10.0.0.2',tos=0,total_length=52,ttl=64,version=4)
------
_packet_in_handler: src_mac -> 00:00:00:00:00:02
_packet_in_handler: dst_mac -> 00:00:00:00:00:01
_packet_in_handler: ipv4(csum=33890,dst='10.0.0.1',flags=2,header_length=5,identification=41567,offset=0,option=None,proto=6,src='10.0.0.2',tos=0,total_length=52,ttl=64,version=4)
------
_packet_in_handler: src_mac -> 00:00:00:00:00:02
_packet_in_handler: dst_mac -> 00:00:00:00:00:01
_packet_in_handler: ipv4(csum=33889,dst='10.0.0.1',flags=2,header_length=5,identification=41568,offset=0,option=None,proto=6,src='10.0.0.2',tos=0,total_length=52,ttl=64,version=4)
------
_packet_in_handler: src_mac -> 00:00:00:00:00:01
_packet_in_handler: dst_mac -> 00:00:00:00:00:02
_packet_in_handler: ipv4(csum=9922,dst='10.0.0.2',flags=2,header_length=5,identification=0,offset=0,option=None,proto=6,src='10.0.0.1',tos=0,total_length=52,ttl=64,version=4)
------
mininet>
mininet>
|
String replacement in excel using Python
Question: I want to remove `%` and replace it with a string, and to remove spaces and
replace them with underscores.
This is what I have done so far:
# Open Excel file from a user imput
import xlrd, xlwt
filename = raw_input("Enter Excel file name with extension (.xls) and path")
oldbook = xlrd.open_workbook(filename)
newbook = xlwt.Workbook()
# For all the sheets in the workbook
for sheetname in oldbook.sheet_names():
oldsheet = oldbook.sheet_by_name(sheetname)
newsheet = newbook.add_sheet(sheetname)
# For all the rows and all the columns in an excel
for ii in range(oldsheet.nrows):
for jj in range(oldsheet.ncols):
# Replace
range.replace("%", "Perc")
# Save the file in a desired location with the desired name
savelocation = raw_input("Enter a new path and file name with extension (.xls) to save the new Excel spread sheet ")
newbook.save(savelocation)
Answer: One piece of advice: read the cell data into a string and then manipulate it.
Try this (unfortunately I cannot run it at the moment):
# Open Excel file from a user imput
import xlrd, xlwt
filename = raw_input("Enter Excel file name with extension (.xls) and path")
oldbook = xlrd.open_workbook(filename)
newbook = xlwt.Workbook()
# For all the sheets in the workbook
for sheetname in oldbook.sheet_names():
oldsheet = oldbook.sheet_by_name(sheetname)
newsheet = newbook.add_sheet(sheetname)
# For all the rows and all the columns in an excel
for ii in range(oldsheet.nrows):
for jj in range(oldsheet.ncols):
# Replace
CellString=str(oldsheet.cell(ii, jj).value)
CellString=CellString.replace("%", "Perc")
CellString=CellString.replace(" ", "_")
newsheet.write(ii, jj, CellString)
# Save the file in a desired location with the desired name
savelocation = raw_input("Enter a new path and file name with extension (.xls) to save the new Excel spread sheet ")
newbook.save(savelocation)
|
Read multiple bytes in Hexadecimal file using python
Question: I have a hex file which appears as below:-
00000000 AA AA 11 FF EC FF E7 3E FA DA D8 78 39 75 89 4E
00000010 FD FD BF E5 FF DD FF AA E9 78 67 84 90 E4 87 83
00000020 9F E7 80 FD FE 73 75 78 93 47 58 93 EE 33 33 3F
I want to read the 3rd and 4th bytes, swap these two bytes and save them in a
variable. For example, I want to save 0xFF11 (after the byte swap) in the variable "num".
This is what I tried: I read these two bytes one by one
data=open('xyz.bin','rb').read()
num1=data[2]
num2=data[3]
num1,num2=num2,num1
num= num1*100+num2
print(num)
Now the problem is that the num variable has an integer value and I have no idea how to
get hex into it. I am stuck here and not able to proceed further. Any help
would be welcome.
PS: I am very new to Python.
Answer:
import struct
with open("xyz.bin", "rb") as f:
f.seek(2)
num, = struct.unpack("<H", f.read(2))
print "little endian:", hex(num), num # little endian: 0xff11 65297
|
Caught SyntaxError while rendering: invalid syntax
Question: I am creating a Django app 'cms' with Django 1.1 via "Python Web Development
with Django", and I'm getting a "TemplateSyntaxError" error while accessing
the admin page.
Here's the traceback:
TemplateSyntaxError at /admin/
Caught SyntaxError while rendering: invalid syntax (views.py, line 18)
Request Method: GET
Exception Type: TemplateSyntaxError
Exception Value:
Caught SyntaxError while rendering: invalid syntax (views.py, line 18)
Exception Location: D:\Program Files\python\lib\site-packages\django\utils\importlib.py in import_module, line 35
Python Executable: D:\Program Files\python\python.exe
Python Version: 2.7.6
My views.py is:
from django.shortcuts import render_to_response, get_object_or_404
from django.db.models import Q
from cms.models import Story, Category
from markdown import markdown
def category(request, slug):
category = get_object_or_404(Category, slug = slug)
story_list = Story.objects.filter(category=category)
heading = "Category: %s " % category.label
return render_to_response("cms/story_list.html", locals())
def search(request):
if "q" in request.GET:
term = request.GET['q']
story_list = Story.objects.filter(Q(title__contains=term) | (markdown_content__contains=term))
heading = "Search results"
return render_to_response("cms/story_list.html", locals())
Line 18 is story_list = Story.objects.filter(Q(title__contains=term) | (markdown_content__contains=term))
And urls.py in cms:
from django.conf.urls.defaults import *
from cms.models import Story
info_dict = {'queryset': Story.objects.all(), 'template_object_name': 'story' }
urlpatterns = patterns('django.views.generic.list_detail',
url(r'^(?P<slug>[-\w]+)/$', 'object_detail', info_dict, name="cms-story"),
url(r'^$', 'object_list', info_dict, name='cms-home'),
)
urlpatterns += patterns('cmsproject.cms.views',
url(r'^category/(?P<slug>[-\w]+)/$', 'category', name="cms-category"),
url(r'^search/$', 'search', name="cms-search"),
)
Answer: You have a small typo on line 18, where you're missing the Q class on the
second filter:
It should read:
story_list = Story.objects.filter(Q(title__contains=term) | Q(markdown_content__contains=term))
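For reference (my addition, using the `Story` model and `term` variable from the
question), `Q` objects can be combined with `|` (OR), `&` (AND) and `~` (NOT),
so the same query can also be built up in steps:
    from django.db.models import Q

    # build the OR condition first, then hand it to filter()
    query = Q(title__contains=term) | Q(markdown_content__contains=term)
    story_list = Story.objects.filter(query)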
|
How to import python-mysql package not installed in virtualenvironment?
Question: I installed python-mysql using the following:
<http://www.lfd.uci.edu/~gohlke/pythonlibs/#mysql-python>
The issue is that on install I only have the option to pick my root Python install,
C:\Python27, and no virtual environment.
When I create my virtualenv "testenv", it does not have the "python-mysql"
package installed. How can I make it so that "testenv" can access the
"python-mysql" installed outside of my environment using the installer from
the link above?
I am running Windows 7.
Answer: If you create your virtual environment as below you will have access to the
main Python packages.
virtualenv --system-site-packages ENV
It is usual to install all your packages inside the virtual environment. To do
this you need to use pip or easy_install.
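For example, a typical workflow might look roughly like this (a sketch only;
`MySQL-python` is the package name on PyPI, and on Windows pip may need a
compiler or a prebuilt wheel, which is why the binary installer plus
`--system-site-packages` is often the easier route):
    virtualenv testenv
    testenv\Scripts\activate
    pip install MySQL-python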
|
How to convert a string (with non-digit place value separators) into an integer?
Question:
price = '20,355'
This is a python string variable. How do I convert it into an integer
variable?
For example:
price = 20355
Answer: This is my preferred way to convert strings with place-value separators to
integers:
>>> import locale
>>> price = '20,355'
>>> locale.setlocale(locale.LC_NUMERIC, '') # Or any other appropriate locale.
'English_United Kingdom.1252'
>>> locale.atoi(price)
20355
This is better than just replacing commas with empty-strings, because in some
locales commas are used as decimal separators, while periods play the part of
delimiting the thousands, millions etc.
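For example (my sketch, assuming a German locale is installed; the exact locale
name differs between Windows and Unix):
    >>> import locale
    >>> locale.setlocale(locale.LC_NUMERIC, 'German_Germany.1252')  # 'de_DE.UTF-8' on Unix
    'German_Germany.1252'
    >>> locale.atof('20,355')   # comma is the decimal separator in this locale
    20.355
    >>> locale.atoi('20.355')   # period is the thousands separator
    20355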
|
Python : How to make QGroupBox borderless
Question: What QGroupBox attribute needs to be set to hide its outline borders?

from PyQt4 import QtGui, QtCore
import sys, os
class Dialog_01(QtGui.QMainWindow):
def __init__(self):
super(QtGui.QMainWindow,self).__init__()
mainWidget=QtGui.QWidget()
self.setCentralWidget(mainWidget)
mainLayout = QtGui.QVBoxLayout()
mainWidget.setLayout(mainLayout)
tabWidget = QtGui.QTabWidget()
mainLayout.addWidget(tabWidget)
WidgetA = QtGui.QWidget()
LayoutA = QtGui.QVBoxLayout()
WidgetA.setLayout(LayoutA)
GroupBox1 = QtGui.QGroupBox('Goupbox 1')
Layout1 = QtGui.QHBoxLayout()
GroupBox1.setLayout(Layout1)
LayoutA.addWidget(GroupBox1)
GroupBox2 = QtGui.QGroupBox('Goupbox 2')
Layout2 = QtGui.QHBoxLayout()
GroupBox2.setLayout(Layout2)
Layout1.addWidget(GroupBox2)
GroupBox3 = QtGui.QGroupBox('Goupbox 3')
Layout3 = QtGui.QHBoxLayout()
GroupBox3.setLayout(Layout3)
Layout2.addWidget(GroupBox3)
tabWidget.addTab(WidgetA,'A')
tabWidget.addTab(QtGui.QWidget(),'B')
tabWidget.addTab(QtGui.QWidget(),'C')
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
dialog_1 = Dialog_01()
dialog_1.show()
dialog_1.resize(480,320)
sys.exit(app.exec_())
Answer: You could try [QGroupBox.setFlat](http://qt-
project.org/doc/qt-4.8/qgroupbox.html#flat-prop), but the exact results will
depend on the widget-style currently in use.
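For example, with the `GroupBox1` from the question (a sketch; the stylesheet
line is an alternative I am adding, not something taken from the linked docs):
    GroupBox1.setFlat(True)   # keeps the title but drops most of the frame
    # or remove the border entirely with a stylesheet:
    GroupBox1.setStyleSheet('QGroupBox { border: none; }')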
|
Using a declared variable to divide by number in PythonWin
Question: I declared my variables and, as with other intro programming languages, expected
to be able to do simple division once the variable is set. I can see Python is
a bit different. Here's what I have:
import arcpy
from arcpy import env
#set workspace, allow overwrites
arcpy.env.workspace = "C:/Users/../PoliceData.gdb"
arcpy.env.overwriteOutput = True
#create variables
patrolZone = "C:/Users/.../PatrolZones"
graffitiIncidents = "C:/Users/.../GraffitiIncidents"
incidentField = "INCIDENTS"
priorityField = "PRIORITY"
area = "SHAPE_area"
nameField = "NAME"
areaSQMI = area/2589988.11
SHAPE_area is a field within an attribute table. I realize I can't do this,
because of the traceback error; I am just trying to convert square metres to square miles.
Answer: You are trying to initialize area with a **String**. Try this:
area = SHAPE_area
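To actually do the division per feature you need the numeric field values
rather than the field name. A hedged sketch (assuming ArcGIS 10.1+ with the
`arcpy.da` module, and that the fields named by `nameField` and `area` exist on
`patrolZone`):
    # read each feature's area in square metres and convert it to square miles
    with arcpy.da.SearchCursor(patrolZone, [nameField, area]) as cursor:
        for name, shape_area in cursor:
            areaSQMI = shape_area / 2589988.11
            print name, areaSQMI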
|
Python: Tkinter: Why is it root.mainloop() and not app.mainloop()
Question: I'm a new member to Stack Overflow. I found this thread, but was not allowed
to comment or ask questions on it, so I thought I'd just reference it here:
[How can I make a in interactive list in Python's Tkinter complete with
buttons that can edit those
listings?](http://stackoverflow.com/questions/18201195/how-can-i-make-a-in-
interactive-list-in-pythons-tkinter-complete-with-buttons-t)
from tkinter import *
import os
import easygui as eg
class App:
def __init__(self, master):
frame = Frame(master)
frame.pack()
# character box
Label(frame, text = "Characters Editor").grid(row = 0, column = 0, rowspan = 1, columnspan = 2)
charbox = Listbox(frame)
for chars in []:
charbox.insert(END, chars)
charbox.grid(row = 1, column = 0, rowspan = 5)
charadd = Button(frame, text = " Add ", command = self.addchar).grid(row = 1, column = 1)
charremove = Button(frame, text = "Remove", command = self.removechar).grid(row = 2, column = 1)
charedit = Button(frame, text = " Edit ", command = self.editchar).grid(row = 3, column = 1)
def addchar(self):
print("not implemented yet")
def removechar(self):
print("not implemented yet")
def editchar(self):
print("not implemented yet")
root = Tk()
root.wm_title("IA Development Kit")
app = App(root)
root.mainloop()
Could somebody explain to me why the very last line is root.mainloop()? Being
a novice with Python, and coming from a background that's procedure-oriented
with no object-oriented experience, I would have thought it would have been
app.mainloop().
In fact, after app = App(root), app is never used again in the rest of the code! I'm
having trouble understanding why root.mainloop() still works.
Answer: I'm not sure if you'll find this answer satisfying, but you call
`root.mainloop()` because `root` is the object that has the `mainloop` method.
In the code you've given, `App` has no `mainloop` function.
In simpler terms, this is just how tkinter works -- you always end your script
by calling the `mainloop` method of the root window. When that method returns,
your program will exit.
Let's start at the beginning. The simplest, non-OO Tkinter program is going to
look like the following example. Note that this is a python 2.x example, and I
do not use a global import since I think global imports are bad.
import Tkinter as tk
root = tk.Tk()
<your widgets go here>
root.mainloop()
Many people find that a pure procedural style is not an effective way to write
code, so they might choose to write this in an object-oriented style. It's
natural to think of "the app" as a singleton object. There are many ways to do
this -- the one in your question is, unfortunately, not one of the clearer
ways to do it.
A slightly better way, IMO, would be to structure the code like this:
class App(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
<your widgets go here>
app = App()
app.mainloop()
In this case, `mainloop` is still being called, though now it's a method of
`App` since `App` inherits from `Tk`. It is conceptually the same as
`root.mainloop()` since in this case, `app` _is_ the root window even though
it goes by a different name.
So, in both cases, `mainloop` is a method that belongs to the root window. And
in both cases, it must be called for the GUI to function properly.
There is a third variation which is what the code you picked is using. And
with this variation, there are several ways to implement it. The variation is
your question uses a class to define the GUI, but does _not_ inherit from
`Tk`. This is perfectly fine, but you still must call `mainloop` at some
point. Since you don't create (or inherit) a `mainloop` function in your
class, you must call the one associated with the root window. The variations I
speak of are how and where the instance of `App` is added to the root window,
and how `mainloop` is ultimately called.
Personally I prefer that `App` inherits from `Frame`, and that you pack the
app _outside_ the definition of the app. The template I use looks like this:
class App(tk.Frame):
def __init__(self, *args, **kwargs):
tk.Frame.__init__(self, *args, **kwargs)
<your widgets go here>
if __name__ == "__main__":
root = tk.Tk()
app = App(root)
app.pack(fill="both", expand=True)
root.mainloop()
In this final example, `app` and `root` are two completely different objects.
`app` represents a frame that exists _inside_ the root window. Frames are
commonly used this way, as a container for groups of other widgets.
So, in all cases, `mainloop` must be called. _where_ it is called, and how,
depends a bit on your coding style. Some people prefer to inherit from the
root window, some don't. In either case, you must call the `mainloop` function
of the root window.
|
Java soap client to wsdl url
Question: I want to call a SOAP function with a few parameters. I did it in Python, but how
can I do it in Java?
My code in Python:
url = 'http://78.188.50.246:8086/iskultur?singleWsdl'
client = Client(url)
d = dict(UserId='a', UserPass='b', Barkod=str(value))
result = client.service.Stok(**d)
return int(result)
How can I do it in Java?
Thanks, all.
Answer: First you need to generate proxy classes. You can do that using `wsimport`
(it's a Java SE tool):
wsimport -keep http://78.188.50.246:8086/iskultur?singleWsdl
This will generate classes (in packages) and place the results in the current
directory. I tested your URL and it generated two package hierarchies (one
starting in 'org' and the other in 'com'). The command above will keep the
source code, so you can move those directories to your Java project source
path (later you should include this code generation step in your build
process).
With the generated classes in your classpath, you now create a `Service`
instance from your `WSDL` (passing the URL and the namespace qualified name of
your service). I got that information from the WSDL.
URL wsdlLocation = new URL("http://78.188.50.246:8086/iskultur?singleWsdl");
QName serviceName = new QName("http://tempuri.org/", "EbWCFtoLogo");
Service service = Service.create(wsdlLocation, serviceName);
Then you get a proxy where you can call your SOAP methods with
`Service.getPort()` passing the interface of the port (`IEbWCFtoLogo`). Now
you have a reference where you can call your remote SOAP methods.
IEbWCFtoLogo proxy = service.getPort(IEbWCFtoLogo.class);
The `wsimport` tool generated a `stok()` method that receives 3 parameters. I
called with some of the values you used and it returned `-1.0` in the code
below:
double value = proxy.stok("a", "b", "code");
System.out.println("Result: " + value);
|
loop through all methods in a python program
Question: I want to do something like this:
def a():
# do stuff
return stuff
def b():
# do stuff
return different_stuff
def c():
# do one last thing
return 200
for func in this_file:
print func_name
print func_return_value
I essentially want to mimic this flask app, without the flask parts:
app = Flask(__name__)
app.register_blueprint(my_bp, url_prefix='/test')
my_bp.data = fake_data
def tests():
with app.test_client() as c:
for rule in app.url_map.iter_rules():
if len(rule.arguments) == 0 and 'GET' in rule.methods:
resp = c.get(rule.rule)
log.debug(resp)
log.debug(resp.data)
is this possible?
Answer: Like this:
import sys
# some functions...
def a():
return 'a'
def b():
return 'b'
def c():
return 'c'
# get the module object for this file
module = sys.modules[__name__]
# get a list of the names that are in this module
for name in dir(module):
# get the Python object from this module with the given name
obj = getattr(module, name)
# ...and if it's callable (a function), call it.
if callable(obj):
print obj()
running this gives:
bgporter@varese ~/temp:python moduleTest.py
a
b
c
Note that the functions will not necessarily be called in the order of
definition as they are here.
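If you only want the functions defined in this file, and not every callable the
module happens to contain (classes, imported helpers, and so on), here is a
variant of the same idea using `inspect` (my addition):
    import inspect
    import sys

    module = sys.modules[__name__]
    # getmembers() returns (name, object) pairs sorted by name;
    # the __module__ check skips functions imported from other modules
    for name, func in inspect.getmembers(module, inspect.isfunction):
        if func.__module__ == __name__:
            print name, func()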
|
Replacing a column with another predefined column
Question: Currently I have a very simple question. I'm using Python 2.7 and have the
following.
from pylab import *
import numpy as np
Nbod = 55800
Nsteps = 7
r = zeros(shape=(Nbod, Nsteps))
r_i = np.random.uniform(60.4,275,Nbod)
r[1:Nbod][0] = r_i
I'm trying to replace the first column `r[1:end][0]` with `r_i`. I'm getting
the following error in my IPython notebook.
ValueError Traceback (most recent call last)
/home/john/<ipython-input-6-1b7fabbd1fa9> in <module>()
----> 1 r[:][0] = r_i #impose the initial conditions of radial and
2 #theta coordinates at the first time step
ValueError: operands could not be broadcast together with shapes (7) (55800)
I tried to transpose the `r[0]` vector but still got the same issue. I'm not
quite sure I've followed the correct format for questions on this forum, so leave a
comment and I will edit accordingly.
Answer: I think you want this:
>>> import numpy as np
>>> Nbod = 55800
>>> Nsteps = 7
>>> r = np.zeros(shape=(Nbod, Nsteps))
>>> r_i = np.random.uniform(60.4,275,Nbod)
#Notice that we slice the 2nd column and replace it with r_i
>>> r[:,1] = r_i
#Examine the first row
>>> r[0]
array([ 0. , 105.6566683, 0. , 0. ,
0. , 0. , 0. ])
Slicing a numpy array like a list of list is not appropriate here, make sure
you use the numpy slicing operations for efficiency and extra capabilities.
More information on slicing can be found
[here](http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html).
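Note that the question asked about the first column; with the same slicing idea
that is simply:
    >>> r[:, 0] = r_i   # assign r_i to the first column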
|
Mapping lots of similar tables in SQLAlchemy
Question: I have many (~2000) locations with time series data. Each time series has
millions of rows. I would like to store these in a Postgres database. My
current approach is to have a table for each location time series, and a meta
table which stores information about each location (coordinates, elevation
etc). I am using Python/SQLAlchemy to create and populate the tables. I would
like to have a relationship between the meta table and each time series table
to do queries like "select all locations that have data between date A and
date B" and "select all data for date A and export a csv with coordinates".
What is the best way to create many tables with the same structure (only the
name is different) and have a relationship with a meta table? Or should I use
a different database design?
Currently I am using this type of approach to generate a lot of similar
mappings:
from sqlalchemy import create_engine, MetaData
from sqlalchemy.types import Float, String, DateTime, Integer
from sqlalchemy import Column, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker, relationship, backref
Base = declarative_base()
def make_timeseries(name):
class TimeSeries(Base):
__tablename__ = name
table_name = Column(String(50), ForeignKey('locations.table_name'))
datetime = Column(DateTime, primary_key=True)
value = Column(Float)
location = relationship('Location', backref=backref('timeseries',
lazy='dynamic'))
def __init__(self, table_name, datetime, value):
self.table_name = table_name
self.datetime = datetime
self.value = value
def __repr__(self):
return "{}: {}".format(self.datetime, self.value)
return TimeSeries
class Location(Base):
__tablename__ = 'locations'
id = Column(Integer, primary_key=True)
table_name = Column(String(50), unique=True)
lon = Column(Float)
lat = Column(Float)
if __name__ == '__main__':
connection_string = 'postgresql://user:pw@localhost/location_test'
engine = create_engine(connection_string)
metadata = MetaData(bind=engine)
Session = sessionmaker(bind=engine)
session = Session()
TS1 = make_timeseries('ts1')
# TS2 = make_timeseries('ts2') # this breaks because of the foreign key
Base.metadata.create_all(engine)
session.add(TS1("ts1", "2001-01-01", 999))
session.add(TS1("ts1", "2001-01-02", -555))
qs = session.query(Location).first()
print qs.timeseries.all()
This approach has some problems, most notably that if I create more than one
`TimeSeries` the foreign key doesn't work. Previously I've used some workarounds,
but it all seems like a big hack and I feel that there must be a
better way of doing this. How should I organise and access my data?
Answer: # `Alternative-1: Table Partitioning`
`Partitioning` immediately comes to mind as soon as I read **exactly** the
same table structure. I am not a DBA, and do not have much production
experience using it (even more so on PostgreSQL), but please read [`PostgreSQL
- Partitioning`](http://www.postgresql.org/docs/9.3/static/ddl-
partitioning.html) documentation. Table partitioning seeks to solve exactly
the problem you have, but over 1K tables/partitions sounds challenging;
therefore please do more research on forums/SO for scalability related
questions on this topic.
Given that the `datetime` component is very important in both of your most-used
search criteria, there must be a solid indexing strategy on it. If you
decide to go down the **`partitioning`** route, the obvious partitioning strategy
would be based on date ranges. This might allow you to partition older data in
different chunks from the most recent data, especially assuming that old
data is (almost) never updated, so physical layouts would be dense and
efficient, while you could employ another strategy for more "recent" data.
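Purely to illustrate the idea (my sketch, not part of the original answer:
table and column names are invented, and the pre-9.4 approach uses table
inheritance plus CHECK constraints and normally an INSERT trigger, omitted
here), yearly partitions could be created from Python like this:
    from sqlalchemy import create_engine

    engine = create_engine('postgresql://user:pw@localhost/location_test')
    ddl = """
    CREATE TABLE timeseries_2001 (
        CHECK (datetime >= DATE '2001-01-01' AND datetime < DATE '2002-01-01')
    ) INHERITS (timeseries);
    CREATE INDEX ix_timeseries_2001_datetime ON timeseries_2001 (datetime);
    """
    with engine.begin() as conn:
        conn.execute(ddl)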
# `Alternative-2: trick SQLAlchemy`
This basically makes your sample code work by tricking SA to assume that all
those `TimeSeries` are `children` of one entity using [`Concrete Table
Inheritance`](http://docs.sqlalchemy.org/en/rel_0_9/orm/inheritance.html#concrete-
table-inheritance). The code below is self-contained and creates 50 table with
minimum data in it. But if you have a database already, it should allow you to
check the performance rather quickly, so that you can make a decision if it is
even a close possibility.
from datetime import date, datetime
from sqlalchemy import create_engine, Column, String, Integer, DateTime, Float, ForeignKey, func
from sqlalchemy.orm import sessionmaker, relationship, configure_mappers, joinedload
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.ext.declarative import AbstractConcreteBase, ConcreteBase
engine = create_engine('sqlite:///:memory:', echo=True)
Session = sessionmaker(bind=engine)
session = Session()
Base = declarative_base(engine)
# MODEL
class Location(Base):
__tablename__ = 'locations'
id = Column(Integer, primary_key=True)
table_name = Column(String(50), unique=True)
lon = Column(Float)
lat = Column(Float)
class TSBase(AbstractConcreteBase, Base):
@declared_attr
def table_name(cls):
return Column(String(50), ForeignKey('locations.table_name'))
def make_timeseries(name):
class TimeSeries(TSBase):
__tablename__ = name
__mapper_args__ = { 'polymorphic_identity': name, 'concrete':True}
datetime = Column(DateTime, primary_key=True)
value = Column(Float)
def __init__(self, datetime, value, table_name=name ):
self.table_name = table_name
self.datetime = datetime
self.value = value
return TimeSeries
def _test_model():
_NUM = 50
# 0. generate classes for all tables
TS_list = [make_timeseries('ts{}'.format(1+i)) for i in range(_NUM)]
TS1, TS2, TS3 = TS_list[:3] # just to have some named ones
Base.metadata.create_all()
print('-'*80)
# 1. configure mappers
configure_mappers()
# 2. define relationship
Location.timeseries = relationship(TSBase, lazy="dynamic")
print('-'*80)
# 3. add some test data
session.add_all([Location(table_name='ts{}'.format(1+i), lat=5+i, lon=1+i*2)
for i in range(_NUM)])
session.commit()
print('-'*80)
session.add(TS1(datetime(2001,1,1,3), 999))
session.add(TS1(datetime(2001,1,2,2), 1))
session.add(TS2(datetime(2001,1,2,8), 33))
session.add(TS2(datetime(2002,1,2,18,50), -555))
session.add(TS3(datetime(2005,1,3,3,33), 8))
session.commit()
# Query-1: get all timeseries of one Location
#qs = session.query(Location).first()
qs = session.query(Location).filter(Location.table_name == "ts1").first()
print(qs)
print(qs.timeseries.all())
assert 2 == len(qs.timeseries.all())
print('-'*80)
# Query-2: select all location with data between date-A and date-B
dateA, dateB = date(2001,1,1), date(2003,12,31)
qs = (session.query(Location)
.join(TSBase, Location.timeseries)
.filter(TSBase.datetime >= dateA)
.filter(TSBase.datetime <= dateB)
).all()
print(qs)
assert 2 == len(qs)
print('-'*80)
# Query-3: select all data (including coordinates) for date A
dateA = date(2001,1,1)
qs = (session.query(Location.lat, Location.lon, TSBase.datetime, TSBase.value)
.join(TSBase, Location.timeseries)
.filter(func.date(TSBase.datetime) == dateA)
).all()
print(qs)
# @note: qs is list of tuples; easy export to CSV
assert 1 == len(qs)
print('-'*80)
if __name__ == '__main__':
_test_model()
# `Alternative-3: a-la BigData`
If you do get into performance problems using the database, I would probably try:
* still keep the data in separate tables/databases/schemas like you do right now
* bulk-import data using "native" solutions provided by your database engine
* use [`MapReduce`-like](http://en.wikipedia.org/wiki/MapReduce) analysis.
* Here I would stay with Python and SQLAlchemy and implement my own distributed query and aggregation (or find something existing). This, obviously, only works if you do not have a requirement to produce those results directly on the database.
# _edit-1:_ `Alternative-4: TimeSeries databases`
I have no experience using those on a large scale, but definitely an option
worth considering.
* * *
_Would be fantastic if you could later share your findings and whole decision-
making process on this._
|
Nonetype object has no attribute 'id'
Question: I am retrieving the last id from MSSQL, trying to increment it, and
storing the file name with the id. But I am getting "AttributeError:
'NoneType' object has no attribute 'id'". The code and the error are below:
import Tkinter,tkFileDialog
import shutil
import pyodbc
cnxn = pyodbc.connect("DRIVER={SQL Server};SERVER=PAVAN;DATABASE=video;Trusted_Connection=yes;")
cursor = cnxn.cursor()
cursor.execute("SELECT TOP 1 id FROM files ORDER BY id DESC ")
while 1:
row = cursor.fetchone()
if not row:
break
print row.id
cnxn.close()
middle = Tkinter.Tk()
def withdraw():
dirname = tkFileDialog.askopenfilename(parent=middle,initialdir="H:/",title='Please
select a file')
a="H:python(test)\py_"+row.id+".mp4"
b=shutil.copyfile(dirname,a)
if b!="":
print "Successfully Copied"
else:
print "Could not be copied"
B = Tkinter.Button(middle, text ="UPLOAD", command = withdraw)
middle.geometry("450x300+100+100")
middle.configure(background="black")
B.pack()
middle.mainloop()
The error I'm getting is:
Traceback (most recent call last):
File "C:\Python27\lib\lib-tk\Tkinter.py", line 1470, in __call__
return self.func(*args)
File "C:\Users\hp\Desktop\upload.py", line 20, in withdraw
a="H:python(test)\py_"+row.id+".mp4"
AttributeError: 'NoneType' object has no attribute 'id'
Answer: This occurs when you try to get the **id** attribute from an object which is of
None type. Here is a case:
>> a = None
>> a.id
AttributeError: 'NoneType' object has no attribute 'id'
So it might be the case that the object **row** is of None type, and you are trying to
print row.id.
You may check the type of row using:
type(row)
With regards, Deepak
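As a side note on the posted script (my observation, not part of the answer
above): `row` is None by the time `withdraw()` runs, because the `while` loop
only breaks once `cursor.fetchone()` has returned None. Since the query returns
at most one row anyway, a sketch that keeps the id around and guards against an
empty table could look like this:
    row = cursor.fetchone()
    last_id = row.id if row else None   # grab the value before row is lost
    cnxn.close()

    def withdraw():
        if last_id is None:
            print "No rows in the files table"
            return
        dirname = tkFileDialog.askopenfilename(parent=middle, initialdir="H:/",
                                               title='Please select a file')
        # %s formatting also works if the id is an integer, unlike + concatenation
        a = "H:python(test)\py_%s.mp4" % last_id
        shutil.copyfile(dirname, a)
        print "Successfully Copied"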
|
How to represent byte json string in Python
Question: I am working with Python and ZooKeeper, using the kazoo library.
This question is not about ZooKeeper or the kazoo library; I guess it is mainly
related to Python.
Below are my two variables -
new_error_str = "Error occurred on machine %s in datacenter %s on the %s of process %s" % (host_info, local_dc, step, process_name)
new_error_key = "error_%s" % (timestamp_in_ms)
And now I need to use these two variables to make a byte json string and then
write the that json string as the data in the Zookeeper node so below is the
syntax by which we create a node and write the data as well -
zk.create(date_znode_path, b'{"'+new_error_key+'":"' + new_error_str + '"}', None, True)
Somehow the above line throws an exception as -
TypeError: value must be a byte string
Any thoughts on what I am doing wrong here? I am trying to make a JSON string of
the above variables.
Here is details about [kazoo
library](https://kazoo.readthedocs.org/en/latest/basic_usage.html#zookeeper-
crud)
**UPDATE:-**
If I use this syntax, then it works fine -
b'{"hostname":"' + get_hostname() + '"}', None, True)
I can see data like this -
{"hostname":"machineA"}
Answer: You should use json module:
import json
zk.create(date_znode_path, json.dumps({new_error_key : new_error_str}, ensure_ascii=True), None, True)
Manually constructing JSON is tricky: what if the input contains quotes? It is
always better to use tools that can escape everything and are well tested
against all edge cases.
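One extra note (mine, not from the answer above): on Python 2,
`json.dumps(..., ensure_ascii=True)` returns a byte string, which is what kazoo
wants; on Python 3 it returns text, so you would encode it yourself, roughly:
    import json

    payload = json.dumps({new_error_key: new_error_str}, ensure_ascii=True)
    zk.create(date_znode_path, payload.encode('utf-8'), None, True)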
|
How to structure python application
Question: I have been a PHP developer for the last decade, but figured it might be a
wise move to learn another language. Like Python.
Python in and off itself isn't so hard to understand when you've got a
background in programming.
What I can't yet understand, though, is how to structure my application and how
to lay out my classes. In PHP the convention is to do something like this:
Vendor\Module\SubModule\Class, which has a direct mapping to a directory and
file on the file system, Class.php being the file where the Class is defined.
The closest thing I have gotten in Python is this: Module.Class, where
Module.py is the file where Class is defined. So with this structure, all
classes belonging to this module should be defined in Module.py.
I don't want to emulate the PHP way of doing things, I just provided it for
context.
So the question really is: What is the pythonic way of structuring an
application?
**Edit**
Alright, maybe the question is too broad. To learn python I am converting some
php scripts to Python. I have a script that imports files with cluster
information into a database. My php structure looks like this:
.
`-- Company
|-- Cluster
| |-- Cluster.php
| |-- Import
| | |-- Array.php
| | `-- Csv.php
| |-- Import.php
| |-- Map.php
| `-- Sync.php
So I have an Import interface Company\Cluster\Import.php which is implemented
by two scripts: Company\Cluster\Import\Array.php and \Csv.php
Further I have some classes that take care of handling and preparing the
incoming data to send off to the DAO.
How might I structure this application in Python?
Answer: It is a very open question, but I would say your problem is that you are
missing the concept of a package.
A package is a folder that contains modules, and other packages, allowing for
nesting as much as you want. For Python to identify a folder as a package, it
has to contain an `__init__.py` file (for the moment, just an empty file with
that name is enough).
In Python a module is a `.py` file. I would recommend you not use that name
for folders (a.k.a. packages).
Like this, you can create arbitrary structures. For instance, taking your
example:
main.py
vendor/
__init__.py
mymodule/
__init__.py
mysubmodule/
__init__.py
any_class.py
Then you define your `AnyClass` in `any_class.py`. After that you can import
from `main.py` this way:
from vendor.mymodule.mysubmodule.any_class import AnyClass
Bear in mind that you don't have to import the class explicitly. You can import
the module (== Python file), the package or whatever you want. The following
examples are perfectly valid, although less common.
import vendor
my_obj = vendor.mymodule.mysubmodule.any_class.AnyClass()
from vendor import mymodule
my_obj = mymodule.mysubmodule.any_class.AnyClass()
...
from vendor.mymodule.mysubmodule import any_class
my_obj = any_class.AnyClass()
And another thing to keep in mind: in Python you are not forced to have one
class per file like in Java. You are free to have more, and if they are
tightly related, it very often makes sense to have them all in the same module.
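Applied to the structure from your edit, one possible layout (purely
illustrative; the renames avoid the reserved word `import` and shadowing the
stdlib `csv` module) could be:
    company/
        __init__.py
        cluster/
            __init__.py
            cluster.py       # the Cluster class
            importers/
                __init__.py
                base.py      # the Import interface (e.g. an abstract base class)
                array.py
                csv_file.py
            map.py
            sync.py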
|
Mezzanine ImportError when running tests
Question: I have recently upgraded the version of Django from 1.5.5 to 1.6.2 and
Mezzanine to 3.0.9.
When I run
python manage.py test
all the tests run without problems.
But when I run project-specific tests using
python manage.py test <project-name>
then I get an ImportError. I gather it's something to do with circular imports.
Here is the stack trace. Please help.
======================================================================
ERROR: Failure: ImportError (cannot import name DisplayableAdmin)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/nose/loader.py", line 411, in loadTestsFromName
    addr.filename, addr.module)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/Users/devarajn/repos/pari/pari/album/tests.py", line 8, in <module>
    from pari.album.admin import AlbumAdmin, AlbumImageInline
  File "/Users/devarajn/repos/pari/pari/album/admin.py", line 2, in <module>
    from mezzanine.core.admin import TabularDynamicInlineAdmin
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/mezzanine/core/admin.py", line 4, in <module>
    from django.contrib.auth.admin import UserAdmin
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/contrib/auth/admin.py", line 182, in <module>
    admin.site.register(Group, GroupAdmin)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/mezzanine/boot/lazy_admin.py", line 26, in register
    super(LazyAdminSite, self).register(*args, **kwargs)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/contrib/admin/sites.py", line 92, in register
    admin_class.validate(model)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/contrib/admin/options.py", line 105, in validate
    validator = cls.validator_class()
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/contrib/admin/validation.py", line 20, in __init__
    models.get_apps()
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/db/models/loading.py", line 139, in get_apps
    self._populate()
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/db/models/loading.py", line 78, in _populate
    self.load_app(app_name)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/db/models/loading.py", line 99, in load_app
    models = import_module('%s.models' % app_name)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
    __import__(name)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/debug_toolbar/models.py", line 63, in <module>
    patch_root_urlconf()
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/debug_toolbar/models.py", line 51, in patch_root_urlconf
    reverse('djdt:render_panel')
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/core/urlresolvers.py", line 480, in reverse
    app_list = resolver.app_dict[ns]
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/core/urlresolvers.py", line 310, in app_dict
    self._populate()
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/core/urlresolvers.py", line 262, in _populate
    for pattern in reversed(self.url_patterns):
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/core/urlresolvers.py", line 346, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/core/urlresolvers.py", line 341, in urlconf_module
    self._urlconf_module = import_module(self.urlconf_name)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
    __import__(name)
  File "/Users/devarajn/repos/pari/pari/urls.py", line 7, in <module>
    admin.autodiscover()
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/mezzanine/boot/__init__.py", line 77, in autodiscover
    django_autodiscover(*args, **kwargs)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/contrib/admin/__init__.py", line 29, in autodiscover
    import_module('%s.admin' % app)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
    __import__(name)
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/mezzanine/forms/admin.py", line 24, in <module>
    from mezzanine.pages.admin import PageAdmin
  File "/Users/devarajn/.pythonbrew/venvs/Python-2.7.3/pari/lib/python2.7/site-packages/mezzanine/pages/admin.py", line 12, in <module>
    from mezzanine.core.admin import DisplayableAdmin, DisplayableAdminForm
ImportError: cannot import name DisplayableAdmin
Answer: django-debug-toolbar module was causing the issue.
I rolled back from django-debug-toolbar v1.0.1 to v0.11.
This fixed the error.
|
python - check if list entry is contained in string
Question: I have a script where I'm walking through a list of passed (via a csv file)
paths. I'm wondering how I can determine if the path I'm currently working
with has been previously managed (as a subdirectory of the parent).
I'm keeping a list of managed paths like this:
pathsManaged = ['/a/b', '/c/d', '/e']
So when/if the next path is `'/e/a'`, I want to check in the list if a parent
of this path is present in the `pathsManaged` list.
My attempt so far:
if any(currPath in x for x in pathsManaged):
print 'subdir of already managed path'
This doesn't seem to be working, though. Am I expecting too much from any()?
Are there any other shortcuts that I could use for this type of look-up?
Thanks
Answer: Perhaps:
from os.path import dirname
def parents(p):
while len(p) > 1:
p = dirname(p)
yield p
pathsManaged = ['/a/b', '/c/d', '/e']
currPath = '/e/a'
if any(p in pathsManaged for p in parents(currPath)):
print 'subdir of already managed path'
prints:
subdir of already managed path
|
Take column average of one file, repeat for all .txt files, write all averages to one file in python
Question: I have approximately 40 *.txt files, which all contain two columns of data -
they have been converted to str format from a previous script and are space
delimited. I would like to take the average of the second column for each .txt
file, and then put all the averages into one output file. They also need to be
read in numerical order, eg file1.txt, file2.txt.
I have the following scripts at the moment: a) to read in the last line of all
.txt files, and b) to take the average of one file. But when I try to combine
them, I either get an error saying that it cannot convert the string to a float,
or that the list index is out of range. I've also tried the line.strip()
method to confirm that there are no blank lines in the .txt files, to sort out
the latter problem, to no avail.
a) code that reads in the last line of all .txt files:
import sys
import os
import re
import glob
numbers = re.compile(r'(\d+)')
def numericalSort(value):
parts = numbers.split(value)
parts[1::2] = map(int, parts[1::2])
return parts
list_of_files = glob.glob("./*.txt")
for file in sorted(list_of_files, key=numericalSort):
infiles = open(file, "r")
outfile = open("t-range","a")
contents = [""]
columns = []
counter = 0
for line in infiles:
counter += 1
contents.append(line)
for line in contents:
if line.startswith("20000"):
columns.append(float(line.split()[1]))
print columns
counter1 = 0
for line in columns:
counter1 += 1
outfile.write(','.join(map(str,columns))+"\n")
infiles.close()
outfile.close()
b) script that takes average value of one file:
data = open("file.txt","r").read().split()
s = sum ([float(i) for i in data])
average = s / len(data)
c) combined script
import sys
import os
import re
import glob
numbers = re.compile(r'(\d+)')
def numericalSort(value):
parts = numbers.split(value)
parts[1::2] = map(int, parts[1::2])
return parts
list_of_files = glob.glob("./totalenergies*")
for file in sorted(list_of_files, key=numericalSort):
infiles = open(file, "r")
outfile = open("t-range","a")
contents = [""]
columns = []
counter = 0
for line in infiles:
counter += 1
contents.append(float(line.split()[1]))
contents = ([float(i) for i in contents])
s = sum(contents)
average = s / len(contents)
columns.append(average)
counter1 = 0
for line in columns:
counter1 += 1
outfile.write("\n".join(map(str,columns)) + "\n")
infiles.close()
outfile.close()
This last part gives an error of could not convert string to float - the
traceback shows there is a problem with contents = ([float(i) for i in
contents])
Answer: This is the line that is giving you the error:
outfile.write("".join(map(str,columns)+"\n"))
When you read the Traceback error, the last part will usually show you the
line number in your script that generated the issue, so you should check that
first.
The way the line currently reads, `+"\n"` is part of the `join` function, when
it should be part of the `write` method:
outfile.write("".join(map(str,columns)) + "\n")
If your intention was to write each average from `columns` on a new line, and
also insert a new line at the end of the list, you need this:
outfile.write("\n".join(map(str,columns)) + "\n")
More: <http://docs.python.org/2/library/stdtypes.html#str.join>
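As an aside, the "could not convert string to float" traceback most likely comes
from seeding the list with an empty string (`contents = [""]`), which then fails
inside the `float()` comprehension. A minimal per-file sketch that avoids the
placeholder entirely (it reuses `numericalSort` from the question):

    import glob

    for fname in sorted(glob.glob("./totalenergies*"), key=numericalSort):
        with open(fname, "r") as infile:
            # collect the second-column values directly; no "" placeholder entry
            values = [float(line.split()[1]) for line in infile if line.strip()]
        average = sum(values) / len(values)
        with open("t-range", "a") as outfile:
            # one average per line in the combined output file
            outfile.write(str(average) + "\n")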
|
(?:) regular expression Python
Question: I came across a regular expression today but it was very poorly and scarcely
explained. What is the purpose of `(?:)` regex in python and where & when is
it used?
I have tried this but it doesn't seem to be working. Why is that?
word = "Hello. ) kahn. ho.w are 19tee,n doing 2day; (x+y)"
expressoin = re.findall(r'(?:a-z\+a-z)', word);
Answer: From the [`re` module
documentation](http://docs.python.org/2/library/re.html):
>
> (?:...)
>
>
> A non-capturing version of regular parentheses. Matches whatever regular
> expression is inside the parentheses, but the substring matched by the group
> cannot be retrieved after performing a match or referenced later in the
> pattern.
Basically, it's the same thing as `(...)` but without storing a captured
string in a group.
Demo:
>>> import re
>>> re.search('(?:foo)(bar)', 'foobar').groups()
('bar',)
Only one group is returned, containing `bar`. The `(?:foo)` group was not.
Use this whenever you need to group metacharacters that would otherwise apply
to a larger section of the expression, such as `|` alternate groups:
monty's (?:spam|ham|eggs)
You don't need to capture the group but do need to limit the scope of the `|`
meta characters.
As for your sample attempt; using `re.findall()` you often **do** want to
capture output. You most likely are looking for:
re.findall('([a-z]\+[a-z])', word)
where `re.findall()` will return a list tuples of all captured groups; if
there is only one captured group, it's a list of strings containing just the
one group per match.
Demo:
>>> word = "Hello. ) kahn. ho.w are 19tee,n doing 2day; (x+y)"
>>> re.findall('([a-z]\+[a-z])', word)
['x+y']
|
Replacing all regular expression matches except a certain character
Question: I would like to please know how I can replace all regular expressions matches
except for a certain chosen character.
I need to clean data. An example of the data is
`some-really,dirty.data%#$_.`
which I would like to look like
some-reallydirtydata_
Note the `-` between `some` and `really`. That is my chosen character that I
would **not** like to remove.
Here is a snippet of my code:
import re
unclean_string = "some-really,dirty.data%#$_."
clean_string = re.sub('\W', '', unclean_string)
print clean_string
>>>"somereallydirtydata_"
I know that `\W` removes all but "0 to 9, a to z, A to Z, and underscore".
I want to know how I can remove all of that, plus a chosen character (such as
`-`).
Disclaimer: I apologise in advance for asking such a basic question. I am new
to Python and using regex.
Answer: You can use:
clean_string = re.sub(r'[^\w-]', '', unclean_string)
`[^\w]` is the equivalent of `\W`. So, if you add a `-` in there, you will not
match it either.
Note: I also made the regex above a raw string, because it's good practice to
do so. This prevents unexpected behaviour you might otherwise run into with
backslash escaping.
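A quick check against the string from the question (the output matches the
expected `some-reallydirtydata_`):

    >>> import re
    >>> re.sub(r'[^\w-]', '', "some-really,dirty.data%#$_.")
    'some-reallydirtydata_'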
|
XML Tree parsing with condition in Python
Question: Here is my XML structure:
<images>
<image>
<name>brain tumer</name>
<location>images/brain_tumer1.jpg</location>
<annotations>
<comment>
<name>Patient 0 Brain Tumer</name>
<description>
This is a tumer in the brain
</description>
</comment>
</annotations>
</image>
<image>
<name>brain tumer</name>
<location>img/brain_tumer2.jpg</location>
<annotations>
<comment>
<name>Patient 1 Brain Tumer</name>
<description>
This is a larger tumer in the brain
</description>
</comment>
</annotations>
</image>
</images>
I am new to Python and wanted to know if retrieving the location data based on
the comment:name data was possible? In other words, here is my code:
for itr1 in itemlist :
commentItemList = itr1.getElementsByTagName('name')
for itr2 in commentItemList:
if(itr2.firstChild.nodeValue == "Patient 1 Liver Tumer"):
commentName = itr2.firstChild.nodeValue
Loacation = it1.secondChild.nodeValue
Any recommendations, or am I missing something here? Thank you in advance.
Answer: Parsing xml with `minidom` isn't fun at all, but here's the idea:
* iterate over all `image` nodes
* for each node, check comment/name text
* if the text matches, get the location node's text
Example that finds location for `Patient 1 Brain Tumer` comment:
import xml.dom.minidom
data = """
your xml goes here
"""
dom = xml.dom.minidom.parseString(data)
for image in dom.getElementsByTagName('image'):
comment = image.getElementsByTagName('comment')[0]
comment_name_text = comment.getElementsByTagName('name')[0].firstChild.nodeValue
if comment_name_text == 'Patient 1 Brain Tumer':
location = image.getElementsByTagName('location')[0]
print location.firstChild.nodeValue
prints:
img/brain_tumer2.jpg
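If minidom gets too unwieldy, here is a sketch of the same lookup using
`xml.etree.ElementTree` from the standard library (assuming the same `data`
string as above):

    import xml.etree.ElementTree as ET

    root = ET.fromstring(data)
    for image in root.findall('image'):
        # path-style lookups replace the nested getElementsByTagName calls
        if image.findtext('annotations/comment/name') == 'Patient 1 Brain Tumer':
            print image.findtext('location')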
|
Faster way to shift a decimal in Python to remove zeros?
Question: I have a variety of numbers that are always bounded by 0 and 1. The numbers
range in value such as
.9, .08, .00024, .00000507
My goal is to convert these numbers to the following
.9, .8, .24, .507
That is, I want to remove any zeros after the decimal point.
I have the following code to do this. Is there a way to do this faster in
terms of performance?
import math
x=.009
n = int(-math.log10(x))
x *= math.pow(10, n)
Answer: Faster method of doing it (gets faster with larger numbers):
def method2(x):
while x < 0.1:
x *= 10
return x
Even faster:
def method3(x):
while x < 0.01:
x *= 100
while x < 0.1:
x *= 10
return x
Funny method of doing it (slower than the question):
def remove_zeros(a):
return float("0." + str(long(str(1+a)[2:])))
|
Django/Python: confusion on class Classname(models.Model) parameter
Question: I cannot understand the models.Model parameter and what exactly it does here.
I am learning django on the job and this is my first software project. Here is
a piece:
class DocumentRequest(models.Model):
#contact info fields
first_name = models.CharField(max_length=255)
last_name = models.CharField(max_length=255)
Throughout the first part of code, we used Charfield, IntegerField,
DateTimeField and so on. It seems to be the Model class of the models file
that allows these things to be used, if I'm correct. I don't understand why
you would use models.Model as a parameter, why not just import the "Model"
class and use these calls (models.CharField, etc)?
Thanks
Answer: What you have here is `class inheritance`.
There is a `Model` class that lives in a `django.db.models` module. You can
actually check the source code [here, currently starting at line
371](https://github.com/django/django/blob/master/django/db/models/base.py).
As you can see as you scroll down, there are a bunch of methods. Methods are
functions defined inside of the class. When you `subclass` one class during
the declaration of another it inherits those methods (and other goodness) that
was defined in the parent class.
Your `DocumentRequest` class will have everything that was defined in `Model`
class + new stuff specific to your new class. The concept of `class
inheritance` saves us a lot of trouble and is absolutely core to understanding
object oriented programming.
You can do this:
dr = DocumentRequest()
dr.first_name = "Mark"
dr.last_name = "Twain"
dr.save() # save() method exists, we can use it right away!
|
Python 2.7 formatting data for writing as csv
Question: I'm trying to write data into a CSV file. I have data from several sensors and
time/date from a Real Time Clock. The sensor data is fine but I'm going round
in circles with formatting the RTC data. I've tried lots of things and read as
many relevant posts as my ageing brain can handle.
The clock data is in val
#print "second =", val[0]
#print "minute =", val[1]
#print "hour =", val[2]
#print "day of week =", val[3]
#print "day of month =", val[4]
#print "month =", val[5]
#print "year =", val[6]
So for instance this prints time followed by date.
print "RTC Time is ", val[2], ":", val[1], ":", val[0]
print "RTC Date is ", val[4], "/", val[5], "/", "20",val[6]
The closest I've got to what I actually want is.
import csv
datetime = val[6],val[5],val[4],val[2],val[1],val[0]
filecsv = open('tidelog.csv', 'a+') #writing to csv log file
writer = csv.writer(filecsv,dialect='excel')
stuff = [datetime, temp_inside, humidity, press, response]
writer.writerow(stuff)
filecsv.close()
The output csv looks like this
"(14, 3, 28, 15, 52, 37)",29,47,1005,2208
The date/time yy mm dd hh mm ss is inside the quotes
What I'm trying to get is the date time with hyphen separators in the first
column like below.
14-3-28-15-52-37,29,47,1005,2208
Any help would be appreciated
Derek
Answer: You can use `join` with a hyphen like so:
>>> val
[37, 52, 15, 4, 28, 3, 14]
>>> '-'.join(str(x) for x in [val[6],val[5],val[4],val[2],val[1],val[0]])
'14-3-28-15-52-37'
Your method does not work because you are implicitly creating a tuple:
>>> 1,2,3
(1, 2, 3)
>>> val[6],val[5],val[4],val[2],val[1],val[0]
(14, 3, 28, 15, 52, 37)
* * *
_Edit_
As stated in the comments, you can use
[reversed](http://docs.python.org/2.7/library/functions.html#reversed) on your
list rather than index by index:
>>> list(reversed([1,2,3,4,5]))
[5, 4, 3, 2, 1]
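Slotted back into the logging code from the question (a sketch; the sensor
variables are the ones already defined there), that might look like:

    import csv

    datetime_str = '-'.join(str(x) for x in [val[6], val[5], val[4], val[2], val[1], val[0]])

    filecsv = open('tidelog.csv', 'a+')  # appending to the csv log file
    writer = csv.writer(filecsv, dialect='excel')
    writer.writerow([datetime_str, temp_inside, humidity, press, response])
    filecsv.close()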
|
qmath very strange AttributeError
Question: I'm trying to use qmath, a quaternion lib.
this
from qmath import qmathcore
a = qmathcore.quaternion([1,2,3,4])
print a.conj()
gives me such traceback
Traceback (most recent call last):
File "*******/q_test.py", line 25, in <module>
print str(a.conj())
File "*******/venv/lib/python2.7/site-packages/qmath/qmathcore.py", line 788, in conj
return self.real() - self.imag()
File "*******/venv/lib/python2.7/site-packages/qmath/qmathcore.py", line 762, in imag
return self - self.real()
File "*******/venv/lib/python2.7/site-packages/qmath/qmathcore.py", line 522, in __sub__
self -= other
File "*******/venv/lib/python2.7/site-packages/qmath/qmathcore.py", line 407, in __isub__
self.other = quaternion(other)
File "*******/venv/lib/python2.7/site-packages/qmath/qmathcore.py", line 81, in __init__
self.q = q.q
AttributeError: quaternion instance has no attribute 'q'
but in docs they said, that this must work:
def conj(self):
"""
Returns the conjugate of the quaternion
>>> import qmathcore
>>> a = qmathcore.quaternion([1,2,3,4])
>>> a.conj()
(1.0-2.0i-3.0j-4.0k)
>>> a = qmathcore.hurwitz([1,2,3,4])
>>> a.conj()
(1-2i-3j-4k)
"""
return self.real() - self.imag()
what is this?
Answer: `qmathcore.py` fails its own doctest with a newer (1.9) numpy.
Adding this test to `quaternion()`
elif isinstance(q,float) or isinstance(q,int): # accept np.float64
self.q = 1.0 * np.array([q,0.,0.,0.])
allows `qmath.quaternion([1,2,3,4]).imag()` (and `conj`).
The `quaternion` method is using a lot of `type(q)==xxx` tests. `isinstance()`
is a more robust test. Also, it ends with an `else: pass`, and thus doesn't catch
`q` values that it can't handle.
After correcting some import errors, the `qmathcore` doctest runs fine.
|
How to enable logging at the initialization of the module in Python?
Question: I am having the following code:
#! /usr/bin/python
import logging
import test_log
def main():
logging.error("hello")
if __name__ == '__main__':
logging.basicConfig()
main()
where the `test_log` is defined as:
import logging
_logger = logging.getLogger(__name__)
_logger.warn('initializing ' + __name__)
Executing the main script will led to output:
No handlers could be found for logger "test_log"
ERROR:root:hello
I believe it is because the `import test_log` statement will execute the
`_logger = logging.getLogger(__name__); _logger.warn('initializing ' +
__name__)`, while at this time `logging.basicConfig` is not executed. Is there
a way of solving this?
Answer: If you can't avoid logging at import time, add a
[`NullHandler`](http://docs.python.org/2/library/logging.handlers.html#nullhandler)
instance to the logger in `test_log`. That will prevent the "No handlers ..."
message.
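A minimal sketch of what that looks like in `test_log` (`NullHandler` is in the
standard library from Python 2.7 / 3.1 onward):

    import logging

    _logger = logging.getLogger(__name__)
    # a do-nothing handler that silences the "No handlers could be found" warning
    _logger.addHandler(logging.NullHandler())
    _logger.warn('initializing ' + __name__)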
Also, you have two calls to `basicConfig()`. Avoid this - have just one call
(in `if __name__ == '__main__'`).
|
Catch mouse clicks in python
Question: I'm trying to catch mouse click events in my python code (not just clicks in a
`Gtk` window I created but anywhere in the screen). After some search, I found
a thread talking about `xlib` (great, a dependency I already had).
from Xlib import display, X
display = display.Display()
root = display.screen().root
root.change_attributes(event_mask = X.ButtonPressMask | X.ButtonReleaseMask)
while True:
event = root.display.next_event()
print(event)
The problem is that this chunk of code is throwing an error and I just can't
find good documentation...
X protocol error:
<class 'Xlib.error.BadAccess'>: code = 10, resource_id = 146, sequence_number = 9, major_opcode = 2, minor_opcode = 0
If you have any idea for this error or another way do this, it'll help really!
Answer: `BadAccess` here is because "An attempt is made to select an event type that
only one client can select at a time when another client has already selected
it."
From [x11 protocol
documentation](http://www.x.org/releases/X11R7.6/doc/xproto/x11protocol.txt),
ChangeWindowAttributes request (opcode 2 is this request code, resource id=146
is your root win id):
> Multiple clients can select input on the same window; their event-masks are
> disjoint. When an event is generated, it will be reported to all interested
> clients. However, only one client at a time can select for
> SubstructureRedirect , only one client at a time can select for
> ResizeRedirect, and only one client at a time can select for ButtonPress. An
> attempt to violate these restrictions results in an Access error.
Try to select `ButtonReleaseMask` only as a possible solution
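A sketch of that workaround, which simply drops `ButtonPressMask` from the
snippet in the question:

    from Xlib import display, X

    disp = display.Display()
    root = disp.screen().root
    # ButtonPress can only be selected by one client at a time (typically the
    # window manager already holds it), so select only ButtonRelease events
    root.change_attributes(event_mask=X.ButtonReleaseMask)

    while True:
        event = root.display.next_event()
        print(event)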
|
logic of regex in python
Question: I always have a hard time understanding the logic of regex in python.
all_lines = '#hello\n#monica, how re "u?\n#hello#robert\necho\nfall and spring'
I want to retrieve the substring that STARTS WITH `#` until the FIRST `\n`
THAT COMES RIGHT AFTER the LAST `#` \- I.e., `'#hello\n#monica, how re
"u?\n#hello#robert'`
So if I try:
>>> all_lines = '#hello\n#monica, how re "u?\n#hello#robert\necho'
>>> RE_HARD = re.compile(r'(^#.*\n)')
>>> mo = re.search(RE_HARD, all_lines)
>>> print mo.group(0)
#hello
Now, if I hardcode what comes after the first \n after the last #, i.e., I
hardcode echo, I get:
>>> all_lines = '#hello\n#monica, how re "u?\n#hello#robert\necho'
>>> RE_HARD = re.compile(r'(^#.*echo)')
>>> mo = re.search(RE_HARD, all_lines)
>>> print mo.group(0)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'group'
I get an error, no idea why. Seems the same as before.
This is still not want I want since in reality after the first \n that comes
after the last # I may have any character/string...
Answer: This program matches the pattern you request.
#!/usr/bin/python
import re
all_lines = '#hello\n#monica, how re "u?\n#hello#robert\necho'
regex = re.compile(
r'''\# # first hash
.* # continues to (note: .* greedy)
\# # last hash
.*?$ # rest of the line. (note .*? non-greedy)
''',
# Flags:
# DOTALL: Make the '.' match any character at all, including a newline
# VERBOSE: Allow comments in pattern
# MULTILINE: Allow $ to match end of line
re.DOTALL | re.VERBOSE | re.MULTILINE)
print re.search(regex, all_lines).group()
Reference: <http://docs.python.org/2/library/re.html>
Demo: <http://ideone.com/aZjjVj>
|
BeautifulSoup webscraping find_all( ): finding exact match
Question: I'm using Python and BeautifulSoup for web scraping.
Lets say I have the following html code to scrape:
<body>
<div class="product">Product 1</div>
<div class="product">Product 2</div>
<div class="product special">Product 3</div>
<div class="product special">Product 4</div>
</body>
Using BeautifulSoup, I want to find ONLY the products with the attribute
class="product" (only Product 1 and 2), not the 'special' products
If I do the following:
result = soup.find_all('div', {'class': 'product'})
the result includes ALL the products (1,2,3, and 4).
What should I do to find products whose class EXACTLY matches 'product'??
* * *
The Code I ran:
from bs4 import BeautifulSoup
import re
text = """
<body>
<div class="product">Product 1</div>
<div class="product">Product 2</div>
<div class="product special">Product 3</div>
<div class="product special">Product 4</div>
</body>"""
soup = BeautifulSoup(text)
result = soup.findAll(attrs={'class': re.compile(r"^product$")})
print result
Output:
[<div class="product">Product 1</div>, <div class="product">Product 2</div>, <div class="product special">Product 3</div>, <div class="product special">Product 4</div>]
Answer: In BeautifulSoup 4, the `class` attribute (and several other attributes, such
as `accesskey` and the `headers` attribute on table cell elements) is treated
as a set; you match against individual elements listed in the attribute. This
follows the HTML standard.
As such, you cannot limit the search to just one class.
You'll have to use a [custom
function](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#a-function)
here to match against the class instead:
result = soup.find_all(lambda tag: tag.name == 'div' and
tag.get('class') == ['product'])
I used a `lambda` to create an anonymous function; each tag is matched on name
(must be `'div'`), and the class attribute must be exactly equal to the list
`['product']`; e.g. have just the one value.
Demo:
>>> from bs4 import BeautifulSoup
>>> text = """
... <body>
... <div class="product">Product 1</div>
... <div class="product">Product 2</div>
... <div class="product special">Product 3</div>
... <div class="product special">Product 4</div>
... </body>"""
>>> soup = BeautifulSoup(text)
>>> soup.find_all(lambda tag: tag.name == 'div' and tag.get('class') == ['product'])
[<div class="product">Product 1</div>, <div class="product">Product 2</div>]
For completeness sake, here are all such set attributes, from the
BeautifulSoup source code:
# The HTML standard defines these attributes as containing a
# space-separated list of values, not a single value. That is,
# class="foo bar" means that the 'class' attribute has two values,
# 'foo' and 'bar', not the single value 'foo bar'. When we
# encounter one of these attributes, we will parse its value into
# a list of values if possible. Upon output, the list will be
# converted back into a string.
cdata_list_attributes = {
"*" : ['class', 'accesskey', 'dropzone'],
"a" : ['rel', 'rev'],
"link" : ['rel', 'rev'],
"td" : ["headers"],
"th" : ["headers"],
"td" : ["headers"],
"form" : ["accept-charset"],
"object" : ["archive"],
# These are HTML5 specific, as are *.accesskey and *.dropzone above.
"area" : ["rel"],
"icon" : ["sizes"],
"iframe" : ["sandbox"],
"output" : ["for"],
}
|
How do I sort objects inside of objects in JSON? (using Python 2.7)
Question: I have the following code, which is a function to export a transaction history
from a digital currency wallet to a json file.
I am facing two problems:
1. I would like to allow the json file to be written in utf-8, as the property 'label' can be utf-8 characters, and if I do not account for that, it will show up in the file as \u\u\u etc. However, no matter what combinations and orderings of encode/decode('utf-8'), I can not get the final output file to print to utf-8.
2. I would like to order each iteration in the order I wrote them in the code. I tried OrderedDict from collection, but it did not order the items so that Date comes first, etc.
Any help with figuring out how to print this to my file using utf-8, and with
the order inside each item as I wrote it, would be appreciated.
Thank you very much.
# This line is the last line of a for loop iterating through
# the list of transactions, "for each item in list"
wallet_history.append({"Date": time_string, "TXHash": tx_hash, "Label": label, "Confirmations":
confirmations, "Amount": value_string, "Fee": fee_string, "Balance": balance_string})
try:
history_str = json.dumps(
wallet_history, ensure_ascii=False, sort_keys=False, indent=4)
except TypeError:
QMessageBox.critical(
None, _("Unable to create json"), _("Unable to create json"))
jsonfile.close()
os.remove(fileName)
return
jsonfile.write(history_str)
Answer: You need to ensure that both json does not escape characters, and you write
your json output as unicode:
import codecs
import json
with codecs.open('tmp.json', 'w', encoding='utf-8') as f:
f.write(json.dumps({u'hello' : u'привет!'}, ensure_ascii=False) + '\n')
$ cat tmp.json
{"hello": "привет!"}
As for your second question: you can use `collections.OrderedDict`, but you
need to be careful to pass it directly to `json.dumps` without changing it to
simple dict. See the difference:
from collections import OrderedDict
data = OrderedDict(zip(('first', 'second', 'last'), (1, 10, 3)))
print json.dumps(dict(data)) # {"second": 10, "last": 3, "first": 1}
print json.dumps(data) # {"first": 1, "second": 10, "last": 3}
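Applied to the transaction history from the question, that means building each
entry as an `OrderedDict` rather than a plain dict (a sketch; the field values
are the question's own variables):

    from collections import OrderedDict

    wallet_history.append(OrderedDict([
        ("Date", time_string), ("TXHash", tx_hash), ("Label", label),
        ("Confirmations", confirmations), ("Amount", value_string),
        ("Fee", fee_string), ("Balance", balance_string),
    ]))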
|
Selenium python how to use assertEqual method from unittest.TestCase "No such test method in <class 'unittest.case.TestCase'>"
Question: I am trying to use assertEqual in a regular class and can't call the
method from unittest.TestCase
class MyPages(unittest.TestCase):
@classmethod
def setUpClass(cls):
basetest.BaseTest().open_browser('firefox')
basetest.BaseTest().login()
def testCreateFolder(self):
print "111111111"
def testCreateFolder1(self):
print "222222222"
@classmethod
def tearDownClass(cls):
basetest.BaseTest().close_browser()
And in my BaseTest I want to make login with text assert.
class BaseTest():
def open_browser(self, browser):
self.driver = config.browser[browser]
global driver
driver = self.driver
driver.get(config.url)
def login(self):
# Go to authorisation page
driver.find_element_by_xpath(link.header["Login_button"]).click()
# Get text from LOGIN label and assert it with expected text
login_text = driver.find_element_by_xpath(link.author_popup["Login_label"])
login_text.get_attribute("text")
print login_text.text
unittest.TestCase().assertEqual(1, 1, "helllllllo")
unittest.TestCase().assertEqual(login_text.text, text.author_popup["Login"],
"Wrong label on log in auth popup. Expected text:")
As a result I have the following:
Error
Traceback (most recent call last):
File "D:\python\PD_Tests\pages\my_pages.py", line 17, in setUpClass
basetest.BaseTest().login()
File "D:\python\PD_Tests\tests\basetest.py", line 25, in login
unittest.TestCase().assertEqual(1, 1, "helllllllo")
File "C:\Python27\lib\unittest\case.py", line 191, in __init__
(self.__class__, methodName))
ValueError: no such test method in <class 'unittest.case.TestCase'>: runTest
Can I use the assertEqual method in my own method if my class is not
unittest.TestCase?
Answer: I think there's a way to do what you want, but it's a little bit of a hack.
The constructor for the `TestCase` class takes a method name as a parameter,
and this parameter has the default value `"runTest"`. The docstring for this
constructor reads as follows:
> Create an instance of the class that will use the named test method when
> executed. Raises a ValueError if the instance does not have a method with
> the specified name.
Hopefully this should explain the error message you are seeing.
If you want to create a `TestCase` just to use the assert methods, you can
pass in the name of some other method instead, such as `__str__`. That will
get you past the checking done in the constructor:
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from unittest import TestCase
>>> t = TestCase("__str__")
>>> t.assertEqual(3, 5)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python27\lib\unittest\case.py", line 511, in assertEqual
assertion_func(first, second, msg=msg)
File "C:\Python27\lib\unittest\case.py", line 504, in _baseAssertEqual
raise self.failureException(msg)
AssertionError: 3 != 5
>>> t.assertEqual(3, 3)
>>>
As long as you don't try to run your TestCase, this shouldn't be a problem.
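An alternative sketch that avoids passing a method name at all: give a small
helper subclass (the name `_Asserter` here is just illustrative) a do-nothing
`runTest`, which is exactly the default method name the constructor looks for:

    import unittest

    class _Asserter(unittest.TestCase):
        def runTest(self):
            pass  # never run; exists only so the default constructor succeeds

    _assert = _Asserter()
    _assert.assertEqual(1, 1, "helllllllo")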
|
Underflow issue in Python
Question: I need to draw the following function, which is what I call theory data

with a=z-(L/2) b=z+(L/2) for the following case
* N=75
* I=1
* L=0.16
* R=0.013
for z between -0.011 and 0.011. My guess is that there are underflow issues.
The code is as follows
import numpy as np
import matplotlib.pyplot as plt
class Solenoid():
def __init__(self,N,I,L,R):
self.N=N
self.I=I
self.L=L
self.R=R
def field(self,z):
mu=4*(np.pi)*(10 ** -7)
coef=(mu*self.N*self.I)/(2*self.L)
a=z+(self.L/2)
b=z-(self.L/2)
sum_a=a/((self.R ** 2 + a ** 2) ** 0.5)
sum_b=b/((self.R ** 2 + b ** 2) ** 0.5)
return (coef*(sum_a - sum_b))
sol=Solenoid(75,1,0.160,0.013)
x=np.linspace(-0.011, 0.011, 10000)
y=[]
for point in x:
y.append(sol.field(point))
plt.plot(x,y)
plt.show()
The output is

which is plain wrong. The expected output is a more or less smooth line along
with experimental data:

UPDATE New output is

Answer: It was a factor of 10 in the x scale ...

|
How to save an animation without showing the previous frames in the video?
Question: I would like to save an animation using Python but I get the frames
superposed! I want to get the frames displayed individually. Please here what
I used:
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import animation
from numpy import pi, cos, sin
fig = plt.figure()
plt.axis([-1.5, 1.5,-1.5, 1.5])
ax = plt.gca()
ax.set_aspect(1)
N=100
xp = [None] * N
yp = [None] * N
def init():
# initialize an empty list of cirlces
return []
def animate(i):
xp[i]=sin(i*pi/10)
yp[i]=cos(i*pi/10)
patches = []
patches.append(ax.add_patch( plt.Circle((xp[i],yp[i]),0.02,color='b') ))
return patches
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=N-1, interval=20, blit=True)
anim.save("example.avi")
plt.show()
Answer: There are some things I'm not sure about, and it really seems that the
axis.plot() and FuncAnimation() behaviors are different. However, the
code below works for both.
# Use only one patch (in your case)
The key point from your code is that you are adding a new circle in addition
to the old circles every iteration:
patches = []
patches.append(ax.add_patch( plt.Circle((xp[i],yp[i]),0.02,color='b') ))
Even though you clear the patches list, they are still stored in the axis.
Instead, just create one circle and change its position.
# Clear first frame with `init()`
Also, `init()` needs to clear the patch from the base frame.
# Standalone Example
from matplotlib import pyplot as plt
from matplotlib import animation
from numpy import pi, cos, sin
fig = plt.figure()
plt.axis([-1.5, 1.5, -1.5, 1.5])
ax = plt.gca()
ax.set_aspect(1)
N = 100
xp = []
yp = []
# add one patch at the beginning and then change the position
patch = plt.Circle((0, 0), 0.02, color='b')
ax.add_patch(patch)
def init():
patch.set_visible(False)
# return what you want to be cleared when axes are reset
# this actually clears even if patch not returned it so I'm not sure
# what it really does
return tuple()
def animate(i):
patch.set_visible(True) # there is probably a more efficient way to do this
# just change the position of the patch
x, y = sin(i*pi/10), cos(i*pi/10)
patch.center = x, y
# I left this. I guess you need a history of positions.
xp.append(x)
yp.append(y)
# again return what you want to be cleared after each frame
# this actually clears even if patch not returned it so I'm not sure
# what it really does
return tuple()
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=N-1, interval=20, blit=True)
# for anyone else, if you get strange errors, make sure you have ffmpeg
# on your system and its bin folder in your path or use whatever
# writer you have as: writer=animation.MencoderWriter etc...
# and then passing it to save(.., writer=writer)
anim.save('example.mp4')
plt.show()
# Return values???
Regarding the return values of `init()` and `animate()`, It doesn't seem to
matter what is returned. The single patch still gets moved around and drawn
correctly without clearing previous ones.
|
pysfml breaks on runtime while displaying a render window
Question: I'm using pySFML with Python 3.3 x64. When I simply want to display a window, it
breaks. It runs normally and everything works for 2-3 seconds, then the
window becomes unresponsive, while my Python script is actually still working and
everything is fine there.
import sfml as sf
win = sf.RenderWindow(sf.VideoMode(500,500),"hi")
win.clear()
win.display()
input()
So, does anyone have any experience with this problem? It's really annoying.
To be more accurate, I'm using Python 3.3.2 and pySFML 1.3.0.
Many thanks in advance.
Edit: just tried the same code, this time with Python 2.7 32-bit, and it happened
again. What's going on?!
Answer: Just tried this and it worked:
import sfml as sf
win = sf.RenderWindow(sf.VideoMode(100,100),"hi")
while(1):
for i in win.events :
pass
win.clear()
win.display()
Looks like I HAVE TO always empty the event queue somehow, even when I have
nothing to do with the events.
|
How to extend a templated c++ class in python with SWIG to allow the [] operator
Question: I have a templated c++ array class which uses the standard vector class:
#include <vector>
#include <string>
using namespace std;
template<typename T>
class Array1D{
private:
vector<T> data_;
int xsize_;
public:
Array1D(): xsize_(0) {};
// creates vector of size nx and sets each element to t
Array1D(const int& nx, const T& t): xsize_(nx) {
data_.resize(xsize_, t);
}
T& operator()(int i) {return data_[i];}
T& operator[](int i) {return data_[i];}
};
My SWIG interface file looks like
%module test
%{
#define SWIG_FILE_WITH_INIT
#include "test.h"
%}
%include "std_vector.i"
// Array 1D Typemaps
// typemaps for standard vector<double>
namespace std{
%template(DoubleVector) vector<double>;
%template(IntVector) vector<int>;
}
%include "test.h"
%template(intArray1D) Array1D<int>;
%template(doubleArray1D) Array1D<double>;
%rename(__getitem__) operator[];
%extend Array1D<T>{
T& __getitem__(int i) {
return (*self)[i];
}
}
After making the module, and create an Array1D in python, when I type in a[2]
I get the following error:
TypeError: 'doubleArray1D' object does not support indexing
My guess is something is wrong with the extend part of my interface file. I
don't think it is recognizing the type T. Any thoughts on how to get this to
work?
Thanks in advance!
Answer: You can extend whole templates, without having to pick a specific type. For
example, modifying your code as follows:
%module test
%{
#include <vector>
%}
%inline %{
template<typename T>
class Array1D{
private:
std::vector<T> data_;
size_t xsize_;
public:
Array1D(): xsize_(0) {};
// creates vector of size nx and sets each element to t
Array1D(const size_t& nx, const T& t): xsize_(nx) {
data_.resize(xsize_, t);
}
T& operator[](const size_t i) {return data_.at(i);}
};
%}
%extend Array1D {
T __getitem__(size_t i) {
return (*$self)[i];
}
}
%template(intArray1D) Array1D<int>;
%template(doubleArray1D) Array1D<double>;
Which works as you'd hope because SWIG itself expands and fills in the types
for `T` when it is generating the wrapper:
In [1]: import test
In [2]: a=test.intArray1D(10,1)
In [3]: a[0]
Out[3]: 1
In [4]: a[10]
terminate called after throwing an instance of 'std::out_of_range'
what(): vector::_M_range_check
zsh: abort ipython
Note: I swapped to `size_t` from `int` because they're not synonyms always and
`.at()` instead of `[]` because the former will throw for an invalid index
rather than invoke undefined behaviour. You can actually use SWIG's default
exception library to do "smart" things with the exception for free:
%module test
%{
#include <vector>
%}
%include <std_except.i>
%inline %{
template<typename T>
class Array1D{
private:
std::vector<T> data_;
size_t xsize_;
public:
Array1D(): xsize_(0) {};
// creates vector of size nx and sets each element to t
Array1D(const size_t& nx, const T& t): xsize_(nx) {
data_.resize(xsize_, t);
}
T& operator[](const size_t i) {return data_.at(i);}
};
%}
%extend Array1D {
T __getitem__(size_t i) throw(std::out_of_range) {
return (*$self)[i];
}
}
%template(intArray1D) Array1D<int>;
%template(doubleArray1D) Array1D<double>;
Is sufficient (two lines of changes) to get a Python `IndexError` instead of a
C++ exception, crash or other UB.
|
flask module import failure from test folder
Question: How do I import modules located up (out of a test folder) and then back down
my app folder structure?
In app_test.py I want to do
from app.models import User
running
/bin/python app_test.py
I get
no modules named app.models
I have an app_test.py file that worked until I did some shuffling, and now I'm
not sure why my imports are not working when running tests from my test
folder. I'm virtualenv'd into my app's main folder, and here's the basic layout
minus superfluous folders/files:
/application
/(leaving out folders bin build lib local...)
/src
/test
__init__.py
app_test.py
/app
models.py
forms.py
__init__.py
views.py
/static
/templates
/tmp
What should my import be? I notice that the mega tutorial just uses a test.py
file instead of a folder. Is that the best way to do it, and if not, where
should my test folder be located? It is driving me nuts.
Answer: You either need to run your app_test.py from the top (i.e., "python
test/app_test.py") or fix up the Python path in app_test.py:
import os
import sys
topdir = os.path.join(os.path.dirname(__file__), "..")
sys.path.append(topdir)
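Put together, the top of app_test.py might look like this (a sketch based on
the layout in the question; `app` resolves as a package because /src is added
to the path and app/ has an `__init__.py`):

    import os
    import sys

    # make /application/src importable so that `app` resolves as a package
    sys.path.append(os.path.join(os.path.dirname(__file__), ".."))

    from app.models import User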
|
Use Apache 2 + mod_wsgi for Python app
Question: I'm trying to use Apache 2 + mod_wsgi for my Python app, but I get an error:
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
My configurations: Ubuntu 12.04, Apache2, Python3.3, Django 1.6,
libapache2-mod-wsgi-py3.
My settings.
Folder structure:
www
└── test.local
├── test_site
│ ├── manage.py
│ └── test_site
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── test_site.wsgi
/etc/hosts:
127.0.0.1 localhost
127.0.0.2 test.local
127.0.1.1 ube-home
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
/etc/apache2/sites-available/test.local
<VirtualHost 127.0.0.2:80>
ServerAdmin [email protected]
ServerName test.local
ServerAlias www.test.local
WSGIScriptAlias / /home/ube/www/test.local/test_site.wsgi
DocumentRoot /home/ube/www/test.local
<Directory />
Options FollowSymLinks
AllowOverride All
</Directory>
<Directory /home/ube/www/test.local/>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride All
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>
ErrorLog ${APACHE_LOG_DIR}/error.log
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel warn
CustomLog ${APACHE_LOG_DIR}/access.log combined
Alias /doc/ "/usr/share/doc/"
<Directory "/usr/share/doc/">
Options Indexes MultiViews FollowSymLinks
AllowOverride ALl
Order deny,allow
Deny from all
Allow from 127.0.0.0/255.0.0.0 ::1/128
</Directory>
</VirtualHost>
/home/ube/www/test.local/test_site.wsgi
import os
import sys
sys.path.append('/home/ube/www/test.local/')
os.environ['DJANGO_SETTINGS_MODULE'] = 'test_site.settings'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
I haven't forgotten:
sudo a2ensite test.local
sudo service apache2 restart
Error log says:
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] Traceback (most recent call last):
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] File "/home/ube/www/test.local/test_site.wsgi", line 5, in <module>
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] import django.core.handlers.wsgi
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] ImportError: No module named django.core.handlers.wsgi
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] mod_wsgi (pid=15638): Target WSGI script '/home/ube/www/test.local/test_site.wsgi' cannot be loaded as Python module.
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] mod_wsgi (pid=15638): Exception occurred processing WSGI script '/home/ube/www/test.local/test_site.wsgi'.
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] Traceback (most recent call last):
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] File "/home/ube/www/test.local/test_site.wsgi", line 5, in <module>
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] import django.core.handlers.wsgi
[Sun Mar 30 01:35:06 2014] [error] [client 127.0.0.1] ImportError: No module named django.core.handlers.wsgi
Answer: Solution in my case:
import os
import sys
sys.path.append('/home/ube/www/test.local/test_site/')
sys.path.append('/home/ube/www/test.local/')
sys.path.append('/home/ube/www/')
sys.path.append('/usr/local/lib/python3.3/dist-packages')
os.environ['DJANGO_SETTINGS_MODULE'] = 'test_site.settings'
from django.core.handlers.wsgi import WSGIHandler
application = WSGIHandler()
I added more sys.path =).
|
Adding a numeric type between Real and Rational, and supporting the type's functionality for Rational numbers
Question: Python offers [a set of abstract base
classes](http://docs.python.org/3/library/numbers.html) for types of numbers.
These start with `Number`, of which `Complex` is a subclass, and so on through
`Real`, `Rational` and `Integral`. Since each is a subclass of the last, each
supports the special functionality of the classes that came before it in the
sequence. For example, you can write `(1).numerator` to get the numerator of
the Python integer `1`, created using the integer literal `1`, considered as a
rational number.
The linked page notes: _There are, of course, more possible ABCs for numbers,
and this would be a poor hierarchy if it precluded the possibility of adding
those. You can add MyFoo between Complex and Real with:_
class MyFoo(Complex): ...
MyFoo.register(Real)
This has the effect of adding a new subclass of complex numbers such that
objects of type `Real` will test as being instances of the new class - thus
adding the new class "in between" `Complex` and `Real` in some sense. This
doesn't address, however, the possibility that the new class might introduce
functionality (such as that exemplified by the `numerator` property) not
offered by its subclass.
For example, suppose that you want to add a class whose instances represent
numbers of the form `a + b√2` where `a` and `b` are rational numbers. You
would probably represent these numbers internally as a pair of `Fraction`s
(instances of `fraction.Fraction` from the Python standard library).
Evidently, this class of numbers is properly a subclass of `Real`, and we
would want to treat `Rational` as being its subclass (because every rational
number is a number of our new type in which `b == 0`). So we would do this:
class FractionWithRoot2Part (Real): ...
FractionWithRoot2Part.register(Rational)
We might want to add properties to the new class that (say) return the numbers
`a` and `b`. These properties might be called something like `RationalPart`
and `CoefficientOfRoot2`. This is awkward, however, because existing numbers
of type `Rational` will not have these properties. If we write
`(1).RationalPart` then we will get an `AttributeError`. Demonstration:
Python 3.3.1 (v3.3.1:d9893d13c628, Apr 6 2013, 20:25:12) [MSC v.1600 32 bit (In
tel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from abc import *
>>> class c1 (metaclass = ABCMeta):
... def x (self): return 5
...
>>> class c2: pass
...
>>> c1.register(c2)
<class '__main__.c2'>
>>> a1 = c1()
>>> a2 = c2()
>>> a1.x()
5
>>> a2.x()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'c2' object has no attribute 'x'
Thus we have not truly introduced a new type that is "in between" the existing
two types, because the type "at the bottom" of the subclass relation does not
support the behaviours of the class "in the middle".
What is the generally accepted way to get around this? One possibility is to
provide a function (not a method of any class) which can handle any kind of
input and act intelligently; something like this:
def RationalPart (number):
if isinstance(number, FractionWithRoot2Part):
try:
return number.RationalPart
except AttributeError:
# number is presumably of type Rational
return number
else:
raise TypeError('This is not supported you dummy!')
Is there a better way than this?
Answer: You can't use ABCs to modify existing classes in this way (and in fact, you
can't safely modify existing classes in this way at all). ABCs are only a
mechanism for customizing whether a class tests as a subclass of another (and
instances of it test as instances of the other), not for actually changing
subclass implementations. When the documentation talks about defining a new
class "in between", this is the sense it means; it just means in between in
terms of subclass/instance checks, not actual inheritance. This is described
[here](http://docs.python.org/3/glossary.html#term-abstract-base-class):
> ABCs introduce virtual subclasses, which are classes that don’t inherit from
> a class but are still recognized by `isinstance()` and `issubclass()`
Note what it says: the virtual subclasses _don't actually inherit_ from your
ABC, they just test as if they do. That's how ABCs are designed to work. The
way to use them is suggested
[here](http://docs.python.org/2/library/abc.html):
> An ABC can be subclassed directly, and then acts as a mix-in class.
So you can't modify the existing Rational class using ABC. The way to do what
you want is to make a new class that inherits from Rational and uses your ABC
as a mixin. Then use that class instead of the regular Rational.
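A minimal sketch of that suggestion (Python 3 syntax; the names `HasRoot2Parts`
and `Root2Fraction` are just illustrative, and this is nowhere near the full
`FractionWithRoot2Part` arithmetic): declare the extra accessors on an ABC used
as a mixin, and derive the concrete class from `fractions.Fraction` so it still
behaves as a `Rational`:

    from abc import ABCMeta, abstractmethod
    from fractions import Fraction

    class HasRoot2Parts(metaclass=ABCMeta):
        """Mixin declaring the a + b*sqrt(2) accessors."""
        @property
        @abstractmethod
        def RationalPart(self): ...

        @property
        @abstractmethod
        def CoefficientOfRoot2(self): ...

    class Root2Fraction(Fraction, HasRoot2Parts):
        """A rational number viewed as a + b*sqrt(2) with b == 0."""
        @property
        def RationalPart(self):
            return Fraction(self)

        @property
        def CoefficientOfRoot2(self):
            return Fraction(0)

`Root2Fraction(1, 3).RationalPart` then returns `Fraction(1, 3)`, and instances
still test as `numbers.Rational` because `Fraction` does.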
In fact, you might not even really need to use ABCs here. The only advantage
of using ABCs is that it makes your new rational-like numbers look like
Rationals if anyone explicitly tests; but as long as you inherit from Rational
and from your new class that adds the behavior you want, the new classes will
_act_ like Rationals anyway.
When you say
> This is awkward, however, because existing numbers of type Rational will not
> have these properties.
you have targeted the essence of the situation. It might seem awkward, but it
would also be mighty awkward if someone could come in and, with an ABC end-
run, start modifying the behavior of your existing classes by sticking a new
superclass above them in the inheritance hierarchy. That's not how it works.
There is no safe way to add new behavior to _existing_ instances of any class;
the only safe thing is to add new behavior to your new class, and tell people
to use that new class instead of the old class.
|
Comparing and then appending rows to csv file in python
Question: I am trying to compare two columns from _result.csv_ and *street_segments.csv*
and then if they are the same append columns from *street_segments.csv* to
_results.csv_.
import csv
count=0
count1=0
count2=0
first = file('result.csv', 'rU')
reader = csv.reader(first)
second=file('street_segments.csv', 'rU')
reader1= csv.reader(second)
for row1 in reader1:
count +=1
print count
for row in reader:
count1 += 1
print count1
if row[3]==row1[1]:
row.append(row1[2])
row.append(row1[3])
row.append(row1[4])
count2 += 1
print count2
The issue that I am having is that what I get is:
1 (from count)
1 (from count1)
2 (from count1)
3(from count1)
...
200,000(from count1)
2(from count)
3(from count)
...
90000(from count)
With the nested for loops, shouldn't I be getting:
1 (from count)
1 (from count1)
2(from count1)
...
90000(from count1)
2 (from count)
1 (from count1)
2(from count1)
...
90000(from count1)
3 (from count)
1(from count1)
2(from count1)
...
90000(from count1)
Can you guys let me know what I am doing wrong or if there is a better
approach to this problem.
So my results.csv file has a row like:
-73.88637197, 40.85400596, 5327502, P-089988, 1015684.082, 250435.3, NO PARKING (SANITATION BROOM SYMBOL) 8:30-10AM TUES & FRI <----->
and my street_segments.csv has a row:
B, P-004958, RANDALL AVENUE, FAILE STREET, COSTER STREET, N
So what I am trying to do is: if the fourth column of results.csv and the
second column of street_segments.csv are the same, I want to add columns 3,
4, 5 of street_segments.csv to the end of the row of results.csv.
Answer: Your problem is that file iterators do not automatically "rewind". That is,
when you hit your second iteration of the outer `for` loop, the inner `for` is
never entered, because the file pointer for `reader` (from `first`) is at the
end of the file. To solve this you'll need to add a `first.seek(0)` command to
get you back to the start of the file (before your inner loop).
import csv
count=0
count1=0
count2=0
first = file('result.csv', 'rU')
reader = csv.reader(first)
second=file('street_segments.csv', 'rU')
reader1= csv.reader(second)
for row1 in reader1:
count +=1
print count
# Rewind to the start of the file in preparation for the next loop
first.seek(0)
for row in reader:
count1 += 1
print count1
if row[3]==row1[1]:
row.append(row1[2])
row.append(row1[3])
row.append(row1[4])
count2 += 1
print count2
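On the "better approach" part of the question: with row counts this large, the
nested scan repeats a lot of work. A sketch of the usual alternative is to index
street_segments.csv by its second column once and then do dictionary lookups
(note that if a key appears more than once in street_segments.csv, the last
occurrence wins):

    import csv

    # build a lookup table keyed on column 2 of street_segments.csv
    with open('street_segments.csv', 'rU') as second:
        segments = {row[1]: row[2:5] for row in csv.reader(second)}

    with open('result.csv', 'rU') as first:
        for row in csv.reader(first):
            if row[3] in segments:
                row.extend(segments[row[3]])  # append columns 3-5 of the match
            # ... then write or collect `row`, as in the original loop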
|
In over my head: Debugging and many errors
Question: I am trying to write a program for connect 4 but am having a lot of trouble
getting past the directions. Everything under the comment, "#everything works
up to here" works but then it all explodes and I have no idea even where to
start to fix it.
#connect 4
import random
#define global variables
X = "X"
O = "O"
EMPTY = "_"
TIE = "TIE"
NUM_ROWS = 6
NUM_COLS = 8
def display_instruct():
"""Display game instructions."""
print(
"""
Welcome to the second greatest intellectual challenge of all time: Connect4.
This will be a showdown between your human brain and my silicon processor.
You will make your move known by entering a column number, 1 - 7. Your move
(if that column isn't already filled) will move to the lowest available position.
Prepare yourself, human. May the Schwartz be with you! \n
"""
)
def ask_yes_no(question):
"""Ask a yes or no question."""
response = None
while response not in ("y", "n"):
response = input(question).lower()
return response
def ask_number(question,low,high):
"""Ask for a number within range."""
#using range in Python sense-i.e., to ask for
#a number between 1 and 7, call ask_number with low=1, high=8
low=1
high=NUM_COLS
response = None
while response not in range (low,high):
response=int(input(question))
return response
def pieces():
"""Determine if player or computer goes first."""
go_first = ask_yes_no("Do you require the first move? (y/n): ")
if go_first == "y":
print("\nThen take the first move. You will need it.")
human = X
computer = O
else:
print("\nYour bravery will be your undoing... I will go first.")
computer = X
human = O
return computer, human
def new_board():
board = []
for x in range (NUM_COLS):
board.append([" "]*NUM_ROWS)
return board
def display_board(board):
"""Display game board on screen."""
for r in range(NUM_ROWS):
print_row(board,r)
print("\n")
def print_row(board, num):
"""Print specified row from current board"""
this_row = board[num]
print("\n\t| ", this_row[num], "|", this_row[num], "|", this_row[num], "|", this_row[num], "|", this_row[num], "|", this_row[num], "|", this_row[num],"|")
print("\t", "|---|---|---|---|---|---|---|")
# everything works up to here!
def legal_moves(board):
"""Create list of column numbers where a player can drop piece"""
legal=True
while not legal:
col = input("What column would you like to move into (1-7)?")
for row in range (6,0,1):
if (1 <= row <= 6) and (1 <= col <= 7) and (board[row][col]==" "):
board[row][col] = turn
legal = True
else:
print("Sorry, that is not a legal move.")
def human_move(board,human):
"""Get human move"""
try:
legals = legal_moves(board)
move = None
while move not in legals:
move = ask_number("Which column will you move to? (1-7):", 1, NUM_COLS)
if move not in legals:
print("\nThat column is already full, nerdling. Choose another.\n")
print("Human moving to column", move)
return move #return the column number chosen by user
except NameError:
print ("Only numbers are allowed.")
except IndexError:
print ("You can only select colums from 1-7.")
def get_move_row(turn,move):
for m in (NUM_COLS):
place_piece(turn,move)
display_board()
def computer_move ():
move= random.choice(legal)
return move
def place_piece(turn,move):
if this_row[m[move]]==" ":
this_row.append[m[move]]=turn
def winner(board):
# Check rows for winner
for row in range(6):
for col in range(3):
if (board[row][col] == board[row][col + 1] == board[row][col + 2] == board[row][col + 3]) and (board[row][col] != " "):
return [row][col]
# Check columns for winner
for col in range(6):
for row in range(3):
if (board[row][col] == board[row + 1][col] == board[row + 2][col] ==board[row + 3][col]) and (board[row][col] != " "):
return [row][col]
# Check diagonal (top-left to bottom-right) for winner
for row in range(3):
for col in range (4):
if (board[row][col] == board[row + 1][col + 1] == board[row + 2][col + 2] == board[row + 3][col + 3]) and (board[row][col] != " "):
return true
# Check diagonal (bottom-left to top-right) for winner
for row in range (5,2,-1):
for col in range (3):
if (board[row][col] == board[row - 1][col + 1] == board[row - 2][col + 2] == board[row - 3][col + 3]) and (board[row][col] != " "):
return [row][col]
# No winner
return False
def main():
display_instruct()
computer,human = pieces()
turn = X
board = new_board()
while not winner(board) and (" " not in board):
display_board(board)
if turn == human:
human_move(board,human)
get_move_row()
place_piece()
else:
computer_move(board,computer)
place_piece()
display_board(board)
turn = next_turn()
the_winner = winner(board)
congrat_winner(the_winner, computer, human)
#start the program
main ()
input ("\nPress the enter key to quit.")
Answer: For fun, here's an object-oriented refactorization. It's a bit long, but well
documented and should be easy to understand.
I started with your code and split it into Board, Player, and Game classes,
then derived Computer and Human classes from Player.
* Board knows the shape and size of the rack, what moves are legal, and recognizes when wins and ties occur
* Player has a name and knows how to choose (or prompt for) a legal move
* Game has a Board and two Players and controls turn-taking and output
I'm not 100% happy with it - Board has a .board that is a list of list of
string, but Game has a .board that is a Board; a bit of judicious renaming
would be a good idea - but for an hour's work it's pretty solid.
Hope you find this educational:
# Connect-4
from itertools import cycle, groupby
from random import choice
from textwrap import dedent
import sys
# version compatibility shims
if sys.hexversion < 0x3000000:
# Python 2.x
inp = raw_input
rng = xrange
else:
# Python 3.x
inp = input
rng = range
def get_yn(prompt, default=None, truthy={"y", "yes"}, falsy={"n", "no"}):
"""
Prompt for yes-or-no input
Return default if answer is blank and default is set
Return True if answer is in truthy
Return False if answer is in falsy
"""
while True:
yn = inp(prompt).strip().lower()
if not yn and default is not None:
return default
elif yn in truthy:
return True
elif yn in falsy:
return False
def get_int(prompt, lo=None, hi=None):
"""
Prompt for integer input
If lo is set, result must be >= lo
If hi is set, result must be <= hi
"""
while True:
try:
value = int(inp(prompt))
if (lo is None or lo <= value) and (hi is None or value <= hi):
return value
except ValueError:
pass
def four_in_a_row(tokens):
"""
If there are four identical tokens in a row, return True
"""
for val,iterable in groupby(tokens):
if sum(1 for i in iterable) >= 4:
return True
return False
class Board:
class BoardWon (BaseException): pass
class BoardTied(BaseException): pass
EMPTY = " . "
HOR = "---"
P1 = " X "
P2 = " O "
VER = "|"
def __init__(self, width=8, height=6):
self.width = width
self.height = height
self.board = [[Board.EMPTY] * width for h in rng(height)]
self.tokens = cycle([Board.P1, Board.P2])
self.rowfmt = Board.VER + Board.VER.join("{}" for col in rng(width)) + Board.VER
self.rule = Board.VER + Board.VER.join(Board.HOR for col in rng(width)) + Board.VER
def __str__(self):
lines = []
for row in self.board:
lines.append(self.rowfmt.format(*row))
lines.append(self.rule)
lines.append(self.rowfmt.format(*("{:^3d}".format(i) for i in rng(1, self.width+1))))
lines.append("")
return "\n".join(lines)
def is_board_full(self):
return not any(cell == Board.EMPTY for cell in self.board[0])
def is_win_through(self, row, col):
"""
Check for any winning sequences which pass through self.board[row][col]
(This is called every time a move is made;
thus any win must involve the last move,
and it is faster to check just a few cells
instead of the entire board each time)
"""
# check vertical
down = min(3, row)
up = min(3, self.height - row - 1)
tokens = [self.board[r][col] for r in rng(row - down, row + up + 1)]
if four_in_a_row(tokens):
return True
# check horizontal
left = min(3, col)
right = min(3, self.width - col - 1)
tokens = [self.board[row][c] for c in rng(col - left, col + right + 1)]
if four_in_a_row(tokens):
return True
# check upward diagonal
down = left = min(3, row, col)
up = right = min(3, self.height - row - 1, self.width - col - 1)
tokens = [self.board[r][c] for r,c in zip(rng(row - down, row + up + 1), rng(col - left, col + right + 1))]
if four_in_a_row(tokens):
return True
# check downward diagonal
down = right = min(3, row, self.width - col - 1)
up = left = min(3, self.height - row - 1, col)
tokens = [self.board[r][c] for r,c in zip(rng(row - down, row + up + 1), rng(col + right, col - left - 1, -1))]
if four_in_a_row(tokens):
return True
# none of the above
return False
def legal_moves(self):
"""
Return a list of columns which are not full
"""
return [col for col,val in enumerate(self.board[0], 1) if val == Board.EMPTY]
def do_move(self, column):
token = next(self.tokens)
col = column - 1
# column is full?
if self.board[0][col] != Board.EMPTY:
next(self.move) # reset player token
raise ValueError
# find lowest empty cell (guaranteed to find one)
for row in rng(self.height-1, -1, -1): # go from bottom to top
if self.board[row][col] == Board.EMPTY: # find first empty cell
# take cell
self.board[row][col] = token
# did that result in a win?
if self.is_win_through(row, col):
raise Board.BoardWon
# if not, did it result in a full board?
if self.is_board_full():
raise Board.BoardTied
# done
break
class Player:
def __init__(self, name):
self.name = name
def get_move(self, board):
"""
Given the current board state, return the row to which you want to add a token
"""
# you should derive from this class instead of using it directly
raise NotImplementedError
class Computer(Player):
def get_move(self, board):
return choice(board.legal_moves())
class Human(Player):
def get_move(self, board):
legal_moves = board.legal_moves()
while True:
move = get_int("Which column? (1-{}) ".format(board.width), lo=1, hi=board.width)
if move in legal_moves:
return move
else:
print("Please pick a column that is not already full!")
class Game:
welcome = dedent("""
Welcome to the second greatest intellectual challenge of all time: Connect4.
This will be a showdown between your human brain and my silicon processor.
You will make your move known by entering a column number, 1 - 8. Your move
(if that column isn't already filled) will move to the lowest available position.
Prepare yourself, human. May the Schwartz be with you!
""")
def __init__(self):
print(Game.welcome)
# set up new board
self.board = Board()
# set up players
self.players = cycle([Human("Dave"), Computer("HAL")])
# who moves first?
if get_yn("Do you want the first move? (Y/n) ", True):
print("You will need it...\n")
# default order is correct
else:
print("Your rashness will be your downfall...\n")
next(self.players)
def play(self):
for player in self.players:
print(self.board)
while True:
col = player.get_move(self.board) # get desired column
try:
print("{} picked Column {}".format(player.name, col))
self.board.do_move(col) # make the move
break
except ValueError:
print("Bad column choice - you can't move there")
# try again
except Board.BoardWon:
print("{} won the game!".format(player.name))
return
except Board.BoardTied:
print("The game ended in a stalemate")
return
def main():
while True:
Game().play()
if not get_yn("Do you want to play again? (Y/n) ", True):
break
if __name__=="__main__":
main()
|
Git tree-filter run python script on commits
Question: I was asked this question on `#git` earlier but as its reasonably substantial
I'll post it up here. I want to run a `filter-branch` on a repo to modify
(thousands of) files over hundreds of commits using a python script. I'm
calling the `clean.py` script using the following command in the repo
directory:
git filter-branch -f --tree-filter '(cd ../cleaner/ && python clean.py --path=files/*/*/**)'
**[Clean.py](https://github.com/megawac/jsdelivr_cleaner/blob/master/clean.py)**
looks like this and will modify all files in path (i.e. `files/*/*/**`):
from os import environ as environment
import argparse, yaml
import logging
from cleaner import Cleaner
parser = argparse.ArgumentParser()
parser.add_argument("--path", help="path to run cleaner on", type=str)
args = parser.parse_args()
# logging.basicConfig(level=logging.DEBUG)
with open("config.yml") as sets:
config = yaml.load(sets)
path = args.path
if not path:
path = config["cleaner"]["general_pattern"]
cleaner = Cleaner(config["cleaner"])
print "Cleaning path: " + str(path)
cleaner.clean(path, True)
After running the command the following is outputted to terminal:
$ python deploy.py --verbose
INFO:root:Checked out master branch
INFO:root:Running command:
'git filter-branch -f --tree-filter '(cd C:/Users/Graeme/Documents/programming/clean-cdn/clean-jsdelivr/ && python clean.py --path=files/*/*/**)' -d "../tmp"' in ../jsdelivr
Rewrite 298ec3a2ca5877a25ebd40aeb815d7b5a5f33a7e (1/1535)
Cleaning path: files/*/*/**
C:\Program Files (x86)\git/libexec/git-core\git-filter-branch: line 343: ../commit: No such file or directory
C:\Program Files (x86)\git/libexec/git-core\git-filter-branch: line 346: ../map/298ec3a2ca5877a25ebd40aeb815d7b5a5f33a7e
: No such file or directory
could not write rewritten commit
rm: cannot remove `/c/Users/Graeme/Documents/programming/clean-cdn/tmp/revs': Permission denied
rm: cannot remove directory `/c/Users/Graeme/Documents/programming/clean-cdn/tmp': Directory not empty
_The python script executes successfully and modifies the files correctly but
the `filter-branch` doesn't finish fixing up the commit._ There appears to be a
permission issue however I haven't been able to get around it running with
elevated privileges. I've tried running the filter-branch on win7, win8, and
ubuntu with git v1.8 and v1.9.
**Edit** The script works as is on CentOS with `git 1.7.1`
The goal is to reduce the size of a CDNs repo (nearing 1GB) after the contents
in `files/*/*/**` finishes syncing with a database.
[The source code of the project](https://github.com/megawac/jsdelivr_cleaner)
[Target repo for the rewrite](https://github.com/jsdelivr/jsdelivr)
Answer: The permissions issue you're encountering is interesting: are you doing this on
a local copy of the repo (i.e. one where you have full access to the
filesystem), or on a remote server?
Reading over your python code, it looks like you're trying to remove every
file over a certain size that is not a .INI file, did I get that right?
If that's the case, can I ask if you've considered [The BFG Repo-
Cleaner](https://rtyley.github.io/bfg-repo-cleaner/)? Obviously, you learn a
lot about Git by writing your own code (I know I have), but I think The BFG is
probably tailor-made for your needs - and will be faster than any `git-filter-
branch` based approach.
In your case, you might want to run it with a command like:
$ java -jar bfg.jar --strip-blobs-bigger-than 100K my-repo.git
This removes all blobs bigger than 100K, that _aren't_ in your latest commit.
I did a quick run with this on the [jsdelivr](https://github.com/bfg-repo-
cleaner-demos/jsdelivr-original) repo, and reduced pack size from 284M to 138M
in [the cleaned repo](https://github.com/bfg-repo-cleaner-demos/jsdelivr-bfg-
cleaned). The BFG cleaning step took under 5 seconds, the subsequent `git gc
--prune=now --aggressive` just under 2 minutes.
_Full disclosure: I'm the author of the BFG Repo-Cleaner._
|
How to make unique combinations of the following list of tuples
Question: In python, I have a list containing the following tuples:
[(Bob, Tom), (GreenWood, Pearson)]
First `tuple` contains first names and second `tuple` contains last names.
_PS: The list I gave is a sample, the actual list varies with more names_
The thing I am trying to do is generate all the possible names that can be
generated i.e
- Bob GreenWood
- Bob Pearson
- Tom GreenWood
- Tom Pearson
How can I implement this in Python preferably or any other language.
I am trying to first take the `Bob in tuple 1` and make combinations with the `last
names in tuple 2`, then do something like `tuple1[1:]` to get rid of Bob. Then
repeat (possibly with recursion?) with only `Tom in tuple`, but I can't seem to
wrap my head around what the algorithm should look like.
Any help?
Answer: You can use
[`itertools.product`](http://docs.python.org/2/library/itertools.html#itertools.product)
like this
from itertools import product
names = [('Bob', 'Tom'), ('GreenWood', 'Pearson')]
for item in product(*names):
print(item)
**Output**
('Bob', 'GreenWood')
('Bob', 'Pearson')
('Tom', 'GreenWood')
('Tom', 'Pearson')
If you wanted to print the possible names as string, then you can join the
result like this
print(" ".join(item))
This will produce
Bob GreenWood
Bob Pearson
Tom GreenWood
Tom Pearson
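If you need the joined names collected in a list rather than printed one by one,
a small variation on the same idea (using the sample data from the question):
    from itertools import product
    names = [('Bob', 'Tom'), ('GreenWood', 'Pearson')]
    # join each (first, last) pair produced by product into one string
    full_names = [" ".join(pair) for pair in product(*names)]
    print(full_names)
    # ['Bob GreenWood', 'Bob Pearson', 'Tom GreenWood', 'Tom Pearson']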
|
Query on local frame and global frame in python Environment
Question: From the program below, I understand that the local frame (square) (the blue box)
in the Environment diagram below belongs to (is nothing but) the local activation
record of the `__call__()` method of an object of class 'function'. This 'function'
class object is created internally when we define the function square(x).

My question is:
If my understanding above is correct, then to whom (which class object's
method) does the global frame belong?
Because I feel the control has been passed from some method of an
object (of a particular class type) to the square.__call__(3) method.
Answer: When a function is defined, it's assigned the read-only attribute
`func_globals` (a.k.a. `__globals__` since 2.6 and only that since 3.0) which
corresponds to the current module namespace (like a view to the
`module.__dict__`). You can try, for example:
>>> square.func_globals
# get a big dictionary-like object here
This object is queried every time the function accesses a global variable.
When you later import this function in some other module, it will still keep
the reference to its original scope and will take the globals from there. In
other words, the code:
def square(x):
return mul(x, x)
is roughly the same as:
def square(x):
return globals()['mul'](x, x)
or
def square(x):
return square.func_globals['mul'](x, x)
So, in one sentence: function's globals belong to the module where it is
defined, and accessed through the function object's attributes.
**UPDATE**
More elaborated version of what happens underneath (applies to CPython). The
most important thing to understand about Python is, that, unlike C etc, it has
no clear separation between "declaration" and "execution". When you load a
module, its code is simply executed. When the interpreter comes across a
function definition, it creates the function object and assigns it to the
variable. Whatever has been defined or imported into the module will become
available to the function as global variables. Lets have a look at the
function object:
>>> import dis
>>> dis.dis(square.func_code) # disassemble the function's bytecode
2 0 LOAD_GLOBAL 0 (mul)
3 LOAD_FAST 0 (x)
6 LOAD_FAST 0 (x)
9 CALL_FUNCTION 2
12 RETURN_VALUE
>>> square.func_code.co_names
('mul',)
You can see the code `LOAD_GLOBAL 0` in the beginning. That means:
1. find the name with the index `0` in `co_names`, which is `'mul'`
2. find the name `'mul'` in `func_globals` and load it to the stack
3. do the rest of the logic
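A quick two-module sketch of that point, with made-up module and variable names
for illustration: a function imported elsewhere still resolves its globals in
the module where it was defined.
    # helpers.py (hypothetical module)
    factor = 10
    def scale(x):
        return factor * x   # 'factor' is looked up in helpers' globals at call time
    # main.py (hypothetical module)
    from helpers import scale
    factor = 2                           # this module's 'factor' is never consulted
    print(scale(3))                      # prints 30, not 6
    print(scale.__globals__['factor'])   # 10 -- helpers' namespace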
|
zip() in python, how to use static values
Question: I'm trying to build a list of tuples whose first value comes from a list and
whose remaining values are static.
It should be simple, but I'm struggling to do this for some reason.
For example, how do I get the following:
[(1,100,200),
(2,100,200),
(3,100,200),
(4,100,200),
(5,100,200)]
>>> zip([1,2,3,4,5],100,200)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: zip argument #2 must support iteration
Answer: You could use
[`itertools.repeat`](https://docs.python.org/2/library/itertools.html#itertools.repeat)
to repeat the elements that you're trying to zip together.
>>> import itertools
>>> zip([1, 2, 3, 4, 5], itertools.repeat(100), itertools.repeat(200))
[(1, 100, 200), (2, 100, 200), (3, 100, 200), (4, 100, 200), (5, 100, 200)]
You could also specify the number of times you need to repeat the element (5
in this case).
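For example, a small sketch of the bounded form, plus an equivalent list
comprehension that skips `zip` entirely:
    import itertools
    nums = [1, 2, 3, 4, 5]
    # bounded repeat: repeat(value, times)
    print(list(zip(nums, itertools.repeat(100, len(nums)), itertools.repeat(200, len(nums)))))
    # [(1, 100, 200), (2, 100, 200), (3, 100, 200), (4, 100, 200), (5, 100, 200)]
    # the same result without itertools
    print([(n, 100, 200) for n in nums])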
|
Python: How to deal with Editable ComboBoxes when its Items used to store Objects (data)
Question: The following code creates a simple ComboBox with 12 pull-down items. To each
item there was assigned an instance of MyClass() - the variable _myObject_
using
`self.ComboBox.addItem( name, myObject ).`
The ComboBox itself was set to be "editable" using
self.ComboBox.setEditable(True)
Since the combobox is editable the user can double click straight into the
combobox and enter a new text entry which becomes a new Combobox's pull-down
item. The problem is that the text typed into the Combobox is only a string
(while all other combobox items have .setData() processed. Is there any work
around to make sure that even "typed in" combobox items would have myClass
instance assigned?
from PyQt4 import QtGui, QtCore
import sys, os
class MyClass(object):
def __init__(self):
super(MyClass, self).__init__()
self.myAttr=None
def getTime(self):
import datetime
return datetime.datetime.now()
class Dialog_01(QtGui.QMainWindow):
def __init__(self):
super(QtGui.QMainWindow,self).__init__()
myQWidget = QtGui.QWidget()
myBoxLayout = QtGui.QVBoxLayout()
myQWidget.setLayout(myBoxLayout)
self.setCentralWidget(myQWidget)
self.ComboBox = QtGui.QComboBox()
self.ComboBox.setEditable(True)
for i in range(12):
name='Item '+str(i)
myObject=MyClass()
self.ComboBox.addItem( name, myObject )
self.ComboBox.currentIndexChanged.connect(self.combobox_selected)
myBoxLayout.addWidget(self.ComboBox)
def combobox_selected(self, index):
myObject=self.ComboBox.itemData(index).toPyObject()
print myObject.getTime()
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
dialog_1 = Dialog_01()
dialog_1.show()
dialog_1.resize(480,320)
sys.exit(app.exec_())
Answer: Here is the solution that worked for me.
from PyQt4 import QtGui, QtCore
import sys, os
class MyClass(object):
def __init__(self):
super(MyClass, self).__init__()
self.myAttr=None
def getTime(self):
import datetime
return datetime.datetime.now()
class Dialog_01(QtGui.QMainWindow):
def __init__(self):
super(QtGui.QMainWindow,self).__init__()
myQWidget = QtGui.QWidget()
myBoxLayout = QtGui.QVBoxLayout()
myQWidget.setLayout(myBoxLayout)
self.setCentralWidget(myQWidget)
self.ComboBox = QtGui.QComboBox()
self.ComboBox.setEditable(True)
for i in range(12):
name='Item '+str(i)
myObject=MyClass()
self.ComboBox.addItem( name, myObject )
self.ComboBox.currentIndexChanged.connect(self.combobox_selected)
myBoxLayout.addWidget(self.ComboBox)
def combobox_selected(self, index):
itemName=self.ComboBox.currentText()
myObject=self.ComboBox.itemData(index).toPyObject()
if not hasattr(myObject, 'getTime'):
result=self.ComboBox.blockSignals(True)
self.ComboBox.removeItem(index)
myObject=MyClass()
self.ComboBox.addItem( itemName, myObject )
self.ComboBox.setCurrentIndex( index )
self.ComboBox.blockSignals(False)
print myObject.getTime()
if __name__ == '__main__':
app = QtGui.QApplication(sys.argv)
dialog_1 = Dialog_01()
dialog_1.show()
dialog_1.resize(480,320)
sys.exit(app.exec_())
|
How to pass results of urllib2.urlopen to a function
Question: WHen I run this I get the following error - TypeError: coercing to Unicode:
need string or buffer, NoneType found
Can someone please tell me how I can pass the results of function getURL() to
function romid()?
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import serial, time, os, decimal, csv, urllib2, socket
import xml.etree.cElementTree as ET
from xml.etree.cElementTree import parse
romidList = []
id = '{http://www.embeddeddatasystems.com/schema/owserver}ROMId'
owd= '{http://www.embeddeddatasystems.com/schema/owserver}owd_DS18B20'
def getURL():
f = urllib2.urlopen(''.join(['http://', '169.254.1.2', '/details.xml']))
#add the next line
return f
def romid():
f = getURL()
for x in parse(f).findall(owd):
romID = x.find(id)
romID = romID.text
if romID not in romidList:
romidList.append(romID)
print romidList
romid()
Answer: Well... have you considered using the `return` statement in your `getURL`
function? The traceback means that `parse()` received `None`, because `getURL`
opens the URL but never returns the file object to the caller.
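A minimal sketch of the fix, based on the code in the question:
    def getURL():
        # urlopen returns a file-like object; without 'return', the caller gets None
        return urllib2.urlopen('http://169.254.1.2/details.xml')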
|
python too many values to unpack error and blank output
Question: I lost my previous account id so I am starting new.
templist=[]
temps1=[]
templist2=[]
tempstat1={}
station1={}
station2={}
import os.path
def main():
#file name=animallog.txt
endofprogram=False
try:
filename=input("Enter name of input file >")
file=open(filename,"r")
except IOError:
print("File does not exist")
endofprogram=True
count=0
count2=0
for line in file:
line=line.strip('\n')
if (len(line)!=0)and line[0]!='#':
(x,y,z)=line.split(':')
templist.append((x,y,z))
record=(x,z)
temps1.append(record)
for x,y in record:
if x in station1 or station2:
if y=='s1':
station1[x]=station1[x]+1
elif y=='s2':
station2[x]=station2[x]+1
elif x not in station1 and station2:
if y=='s1':
station1[x]=1
elif y=='s2':
station2[x]=1
main()
Anyways guys. I wrote this program. It is basically reading a file that has
info like this-> (animal:date:station number)
a01:01-24-2011:s1
a03:01-24-2011:s2
a03:09-24-2011:s1
I am trying to count which animal goes to which station and how many times. I
do not want an answer from you experts but just need to know what this ERROR
means-
File "/Applications/Wing101.app/Contents/MacOS/src/debug/tserver/_sandbox.py", line 58, in <module>
File "/Applications/Wing101.app/Contents/MacOS/src/debug/tserver/_sandbox.py", line 39, in main
builtins.ValueError: too many values to unpack (expected 2)
Thank you :)
**EDIT--**
changed `for x,y in record:` _TO_ `for x in record:`
But it prints `{}` when I try to print `station1` and `station2`
Why is it printing blank dictionaries for `station1` and `station2`?
Answer: `record` is a 2-tuple of strings. By iterating over `record` specifying a
2-tuple you are attempting to split each of these strings into 2 variables.
**This will not work unless they are two-character strings.**
Perhaps you meant to iterate over `temps1` instead?
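A minimal sketch of that loop, assuming the goal is to count how many times each
animal visits each station (reusing the `temps1`, `station1` and `station2`
objects from the question):
    for animal, station in temps1:      # each record is an (animal, station) pair
        if station == 's1':
            station1[animal] = station1.get(animal, 0) + 1
        elif station == 's2':
            station2[animal] = station2.get(animal, 0) + 1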
|
PanedWindow issues with text widgets
Question: I have a couple of issues working with PanedWindow - which I do not well and
I'm a bit confused.
In this example I'm trying to put 2 text widgets side-by-side:
#!/usr/bin/env python
import os
from Tkinter import *
import ttk
root = Tk()
root.geometry("%dx%d+0+0" % (1400,850))
mainpanedframeframe = PanedWindow(root,orient=HORIZONTAL)
mainpanedframeframe.pack(fill=BOTH, expand=TRUE)
auxframe1 = PanedWindow(mainpanedframeframe)
auxframe2 = PanedWindow(mainpanedframeframe)
mainpanedframeframe.add(auxframe1)
mainpanedframeframe.add(auxframe2)
# --------------- chtext -------------
chtext = Text(auxframe1, width=50)
auxframe1.add(chtext)
chyscrollbar=Scrollbar(auxframe1, orient=VERTICAL, command=chtext.yview)
chtext["yscrollcommand"]=chyscrollbar.set
chyscrollbar.pack(side=RIGHT,fill=Y)
# --------------- configtext -------------
configtext = Text(auxframe2, width=150)
auxframe2.add(configtext)
yscrollbar=Scrollbar(auxframe2, orient=VERTICAL, command=configtext.yview)
configtext["yscrollcommand"]=yscrollbar.set
yscrollbar.pack(side=RIGHT, fill=Y)
for i in range(50):
configtext.insert(INSERT,"test configtext \n\ntest 2\ntest 3\ntest 4\n\ntest x\n\n\n\n")
chtext.insert(INSERT,"test chtext \n\ntest 2\ntest 3\ntest 4\n\ntest x\n\n\n\n")
#------------------------------
mainloop()
it works, but I dont know how to avoid that, at start, the 1st widget is
completely shrunken to the left.
As a second question - even more puzzling: if the 2nd text window is a class
that simply inherits from Text and nothing else (I will need to do more
complicate things, but as it stands in this example it is simply a call to
Text() ), the vertical scrollbar is not shown at all!
#!/usr/bin/env python
import os
from Tkinter import *
import ttk
class MyText(Text):
def __init__(self, parent, *args, **kwargs):
Text.__init__(self, *args, **kwargs)
self.parent = parent
root = Tk()
root.geometry("%dx%d+0+0" % (1400,850))
mainpanedframeframe = PanedWindow(root,orient=HORIZONTAL)
mainpanedframeframe.pack(fill=BOTH, expand=TRUE)
auxframe1 = PanedWindow(mainpanedframeframe)
auxframe2 = PanedWindow(mainpanedframeframe)
mainpanedframeframe.add(auxframe1)
mainpanedframeframe.add(auxframe2)
# --------------- chtext -------------
chtext = Text(auxframe1, width=50)
auxframe1.add(chtext)
chyscrollbar=Scrollbar(auxframe1, orient=VERTICAL, command=chtext.yview)
chtext["yscrollcommand"]=chyscrollbar.set
chyscrollbar.pack(side=RIGHT,fill=Y)
# --------------- configtext -------------
configtext = MyText(auxframe2, width=150)
auxframe2.add(configtext)
yscrollbar=Scrollbar(auxframe2, orient=VERTICAL, command=configtext.yview)
configtext["yscrollcommand"]=yscrollbar.set
yscrollbar.pack(side=RIGHT, fill=Y)
for i in range(50):
configtext.insert(INSERT,"test configtext \n\ntest 2\ntest 3\ntest 4\n\ntest x\n\n\n\n")
chtext.insert(INSERT,"test chtext \n\ntest 2\ntest 3\ntest 4\n\ntest x\n\n\n\n")
#------------------------------
mainloop()
Answer: For question 1: the Tk panedwidget is a geometry manager for one or more
panes. When you add a widget as a pane you can specify some options or you can
specify these afterwards using the
[`paneconfigure`](https://www.tcl.tk/man/tcl8.6/TkCmd/panedwindow.htm#M33)
command. eg:
auxframe1.add(chtext, stretch="always")
auxframe1.paneconfigure(chtext, stretch="always")
auxframe1.panecget(chtext, "stretch")
using `stretch="always"` should fix your initial problem.
The next point is mentioned in the documentation: "Each pane contains one
widget,...". You are managing the text widget with the panedwindow and then
also packing a scrollbar into the same window. If you ask each widget about
its geometry manager (using chtext.winfo_manager() and
chyscrollbar.winfo_manager()), the text widget is managed by the
panedwindow while the scrollbar is managed by the pack geometry manager. This
is not a good scheme. What actually happens is that the scrollbar overlays part of
your text widget, obscuring its edge. You can observe this if
you enter some text and reduce the size of the pane. Once the text hits the
edge of the widget it will by default start wrapping. Because the scrollbar is
packed on top, the text does not wrap as soon as it collides with the visible
edge of the scrollbar, but only when it reaches the actual edge of the text
widget hidden underneath. To resolve this you should put the text and scrollbar
into one Frame and add that frame to the pane, as in the sketch below.
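A minimal sketch of that arrangement, using the names and wildcard Tkinter
import from the question. Here the inner PanedWindow (auxframe1) is replaced
with a plain Frame, which is simpler when each pane only holds one text widget:
    frame1 = Frame(mainpanedframeframe)
    chtext = Text(frame1, width=50)
    chyscrollbar = Scrollbar(frame1, orient=VERTICAL, command=chtext.yview)
    chtext["yscrollcommand"] = chyscrollbar.set
    # both widgets are packed inside the frame; only the frame goes in the pane
    chyscrollbar.pack(side=RIGHT, fill=Y)
    chtext.pack(side=LEFT, fill=BOTH, expand=True)
    mainpanedframeframe.add(frame1, stretch="always")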
I suspect this second point is the issue with your missing text widget in the
second part of the question. Your text widget may well just be obscuring your
scrollbar. If you either grid or pack the two info a frame then they will both
be managed by the same geometry manager and there will be no confusion over
how much space each will be given.
|
Assigning value to the variable in Qt
Question: Here is the converted .ui of Qt to the .py:
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
_fromUtf8 = lambda s: s
class Ui_NewWindow(object):
def setupUi(self, NewWindow):
NewWindow.setObjectName(_fromUtf8("NewWindow"))
NewWindow.resize(439, 225)
self.centralwidget = QtGui.QWidget(NewWindow)
self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
self.pushButton_2 = QtGui.QPushButton(self.centralwidget)
self.pushButton_2.setGeometry(QtCore.QRect(170, 140, 99, 27))
self.pushButton_2.setObjectName(_fromUtf8("pushButton_2"))
self.widget = QtGui.QWidget(self.centralwidget)
self.widget.setGeometry(QtCore.QRect(40, 30, 365, 89))
self.widget.setObjectName(_fromUtf8("widget"))
self.verticalLayout = QtGui.QVBoxLayout(self.widget)
self.verticalLayout.setMargin(0)
self.verticalLayout.setObjectName(_fromUtf8("verticalLayout"))
self.horizontalLayout_2 = QtGui.QHBoxLayout()
self.horizontalLayout_2.setObjectName(_fromUtf8("horizontalLayout_2"))
self.label_3 = QtGui.QLabel(self.widget)
self.label_3.setObjectName(_fromUtf8("label_3"))
self.horizontalLayout_2.addWidget(self.label_3)
self.lineEdit = QtGui.QLineEdit(self.widget)
self.lineEdit.setObjectName(_fromUtf8("lineEdit"))
self.horizontalLayout_2.addWidget(self.lineEdit)
self.label_4 = QtGui.QLabel(self.widget)
self.label_4.setObjectName(_fromUtf8("label_4"))
self.horizontalLayout_2.addWidget(self.label_4)
self.lineEdit_2 = QtGui.QLineEdit(self.widget)
self.lineEdit_2.setObjectName(_fromUtf8("lineEdit_2"))
self.horizontalLayout_2.addWidget(self.lineEdit_2)
self.verticalLayout.addLayout(self.horizontalLayout_2)
self.pushButton = QtGui.QPushButton(self.widget)
self.pushButton.setObjectName(_fromUtf8("pushButton"))
self.verticalLayout.addWidget(self.pushButton)
self.horizontalLayout = QtGui.QHBoxLayout()
self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
self.label = QtGui.QLabel(self.widget)
self.label.setObjectName(_fromUtf8("label"))
self.horizontalLayout.addWidget(self.label)
self.label_2 = QtGui.QLabel(self.widget)
self.label_2.setObjectName(_fromUtf8("label_2"))
self.horizontalLayout.addWidget(self.label_2)
self.verticalLayout.addLayout(self.horizontalLayout)
NewWindow.setCentralWidget(self.centralwidget)
self.menubar = QtGui.QMenuBar(NewWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 439, 25))
self.menubar.setObjectName(_fromUtf8("menubar"))
NewWindow.setMenuBar(self.menubar)
self.statusbar = QtGui.QStatusBar(NewWindow)
self.statusbar.setObjectName(_fromUtf8("statusbar"))
NewWindow.setStatusBar(self.statusbar)
self.retranslateUi(NewWindow)
QtCore.QMetaObject.connectSlotsByName(NewWindow)
def retranslateUi(self, NewWindow):
NewWindow.setWindowTitle(QtGui.QApplication.translate("NewWindow", "MainWindow", None, QtGui.QApplication.UnicodeUTF8))
self.pushButton_2.setText(QtGui.QApplication.translate("NewWindow", "Exit", None, QtGui.QApplication.UnicodeUTF8))
self.label_3.setText(QtGui.QApplication.translate("NewWindow", "A = ", None, QtGui.QApplication.UnicodeUTF8))
self.lineEdit.setText(QtGui.QApplication.translate("NewWindow", "0.0", None, QtGui.QApplication.UnicodeUTF8))
self.label_4.setText(QtGui.QApplication.translate("NewWindow", "B = ", None, QtGui.QApplication.UnicodeUTF8))
self.lineEdit_2.setText(QtGui.QApplication.translate("NewWindow", "0.0", None, QtGui.QApplication.UnicodeUTF8))
self.pushButton.setText(QtGui.QApplication.translate("NewWindow", "Add", None, QtGui.QApplication.UnicodeUTF8))
self.label.setText(QtGui.QApplication.translate("NewWindow", "Answer : ", None, QtGui.QApplication.UnicodeUTF8))
self.label_2.setText(QtGui.QApplication.translate("NewWindow", "0.0", None, QtGui.QApplication.UnicodeUTF8))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
NewWindow = QtGui.QMainWindow()
ui = Ui_NewWindow()
ui.setupUi(NewWindow)
NewWindow.show()
sys.exit(app.exec_())
After running the above python program it shows the window as: 
But I have to assign the value from the line edit to the variable A and B so I
can add them as:
sum = A + B
and I have show this value `sum` in the label Answer when we clicked on push
button "Add" in above window, so please suggest proper change in above code.
Answer: You need to create a new slot (`add_values` in this case). Then, you need to
connect this slot to the button's signal (`clicked`, for example). When the
signal is emitted (i.e. the button is clicked), the slot reads the values from
`lineEdit` and `lineEdit_2`, adds them up and sets the text of the
`label_2` widget (you might convert the values to int first [see the other
answer for a simple implementation]).
from PySide import QtCore, QtGui
from PySide.QtCore import Slot
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
_fromUtf8 = lambda s: s
class Ui_NewWindow(object):
def setupUi(self, NewWindow):
NewWindow.setObjectName(_fromUtf8("NewWindow"))
NewWindow.resize(439, 225)
self.centralwidget = QtGui.QWidget(NewWindow)
self.centralwidget.setObjectName(_fromUtf8("centralwidget"))
self.pushButton_2 = QtGui.QPushButton(self.centralwidget)
self.pushButton_2.setGeometry(QtCore.QRect(170, 140, 99, 27))
self.pushButton_2.setObjectName(_fromUtf8("pushButton_2"))
self.widget = QtGui.QWidget(self.centralwidget)
self.widget.setGeometry(QtCore.QRect(40, 30, 365, 89))
self.widget.setObjectName(_fromUtf8("widget"))
self.verticalLayout = QtGui.QVBoxLayout(self.widget)
#self.verticalLayout.setMargin(0)
self.verticalLayout.setObjectName(_fromUtf8("verticalLayout"))
self.horizontalLayout_2 = QtGui.QHBoxLayout()
self.horizontalLayout_2.setObjectName(_fromUtf8("horizontalLayout_2"))
self.label_3 = QtGui.QLabel(self.widget)
self.label_3.setObjectName(_fromUtf8("label_3"))
self.horizontalLayout_2.addWidget(self.label_3)
self.lineEdit = QtGui.QLineEdit(self.widget)
self.lineEdit.setObjectName(_fromUtf8("lineEdit"))
self.horizontalLayout_2.addWidget(self.lineEdit)
self.label_4 = QtGui.QLabel(self.widget)
self.label_4.setObjectName(_fromUtf8("label_4"))
self.horizontalLayout_2.addWidget(self.label_4)
self.lineEdit_2 = QtGui.QLineEdit(self.widget)
self.lineEdit_2.setObjectName(_fromUtf8("lineEdit_2"))
self.horizontalLayout_2.addWidget(self.lineEdit_2)
self.verticalLayout.addLayout(self.horizontalLayout_2)
self.pushButton = QtGui.QPushButton(self.widget)
self.pushButton.setObjectName(_fromUtf8("pushButton"))
self.verticalLayout.addWidget(self.pushButton)
self.horizontalLayout = QtGui.QHBoxLayout()
self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
self.label = QtGui.QLabel(self.widget)
self.label.setObjectName(_fromUtf8("label"))
self.horizontalLayout.addWidget(self.label)
self.label_2 = QtGui.QLabel(self.widget)
self.label_2.setObjectName(_fromUtf8("label_2"))
self.horizontalLayout.addWidget(self.label_2)
self.verticalLayout.addLayout(self.horizontalLayout)
NewWindow.setCentralWidget(self.centralwidget)
self.menubar = QtGui.QMenuBar(NewWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 439, 25))
self.menubar.setObjectName(_fromUtf8("menubar"))
NewWindow.setMenuBar(self.menubar)
self.statusbar = QtGui.QStatusBar(NewWindow)
self.statusbar.setObjectName(_fromUtf8("statusbar"))
NewWindow.setStatusBar(self.statusbar)
self.retranslateUi(NewWindow)
QtCore.QMetaObject.connectSlotsByName(NewWindow)
self.pushButton.clicked.connect(self.add_values)
def retranslateUi(self, NewWindow):
NewWindow.setWindowTitle(QtGui.QApplication.translate("NewWindow", "MainWindow", None, QtGui.QApplication.UnicodeUTF8))
self.pushButton_2.setText(QtGui.QApplication.translate("NewWindow", "Exit", None, QtGui.QApplication.UnicodeUTF8))
self.label_3.setText(QtGui.QApplication.translate("NewWindow", "A = ", None, QtGui.QApplication.UnicodeUTF8))
self.lineEdit.setText(QtGui.QApplication.translate("NewWindow", "0.0", None, QtGui.QApplication.UnicodeUTF8))
self.label_4.setText(QtGui.QApplication.translate("NewWindow", "B = ", None, QtGui.QApplication.UnicodeUTF8))
self.lineEdit_2.setText(QtGui.QApplication.translate("NewWindow", "0.0", None, QtGui.QApplication.UnicodeUTF8))
self.pushButton.setText(QtGui.QApplication.translate("NewWindow", "Add", None, QtGui.QApplication.UnicodeUTF8))
self.label.setText(QtGui.QApplication.translate("NewWindow", "Answer : ", None, QtGui.QApplication.UnicodeUTF8))
self.label_2.setText(QtGui.QApplication.translate("NewWindow", "0.0", None, QtGui.QApplication.UnicodeUTF8))
@Slot()
def add_values(self):
# you should convert the values to integers for a less error-prone code
summed = float(self.lineEdit.text()) + float(self.lineEdit_2.text())
self.label_2.setText(str(summed))
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
NewWindow = QtGui.QMainWindow()
ui = Ui_NewWindow()
ui.setupUi(NewWindow)
NewWindow.show()
sys.exit(app.exec_())
I'm using PySide, but it should work with PyQt as well.
|
Why is my UI element invisible until the user resizes the window?
Question: I have a custom progress bar–like view that basically consists of two sizers
side by side: the one on the left has a red background and the one on the
right has a light gray background. By adjusting the relative sizes of the
sizers I can indicate different percentages. For whatever reason—maybe because
this Window consists only of sizers, with no “substantial” elements—the
progress bar is not visible when its containing Frame first appears. When the
Frame is resized by any amount in any direction, the progress bar suddenly
pops into view, with the correct size and position and everything.
Below is a minimal code sample. It doesn’t quite demonstrate the problem I’m
describing: when run as-is, the progress bar will appear correctly.
If I remove the wx.EXPAND flag from either Add() call in TestWindow.__init__,
however, then when the window first appears the progress bar will either be
the wrong size or completely invisible. (Resizing the window causes the bar to
become visible and/or take the correct size.) When the run_on_main_thread
decorator is not applied to update_sizes, though, the bar is always displayed
correctly right off the bat, regardless of whether the wx.EXPAND flag was
included!
I’m not sure why my efforts toward thread safety have caused this drawing
issue. Can anyone suggest a way to make the progress bar display correctly
from the beginning, while still ensuring that `update_sizes` will only run on
the main thread? For example, can I trigger an `EVT_SIZE` on the window so
that it will refresh itself without any user interaction?
I’m using Python 2.7.5 and wxPython 2.9.5.0 on OS X 10.9.2.
## Code
from functools import wraps
from math import floor
import wx
def run_on_main_thread(fn):
"""Decorator. Forces the function to run on the main thread.
Any calls to a function that is wrapped in this decorator will return
immediately; the return value of such a function is not available.
"""
@wraps(fn)
def deferred_caller(*args, **kwargs):
wx.CallAfter(fn, *args, **kwargs)
return deferred_caller
class PercentageBar(wx.Window):
def __init__(self, parent, percentage=0.0):
wx.Window.__init__(self, parent)
border_color = '#000000'
active_color = '#cc0000'
inactive_color = '#d3d7cf'
height = 24
self.percentage = percentage
self.SetBackgroundColour(border_color)
self.active_rectangle = wx.Panel(self, size=(-1, height))
self.active_rectangle.SetBackgroundColour(active_color)
self.inactive_rectangle = wx.Panel(self, size=(-1, height))
self.inactive_rectangle.SetBackgroundColour(inactive_color)
self.sizer = wx.BoxSizer(wx.HORIZONTAL)
self.SetSizer(self.sizer)
self.update_sizes()
@run_on_main_thread
def update_sizes(self):
self.sizer.Clear(False)
if self.percentage != 0.0 and self.percentage != 1.0:
active_flags = wx.EXPAND | (wx.ALL & ~wx.RIGHT)
else:
active_flags = wx.EXPAND | wx.ALL
self.sizer.Add(self.active_rectangle, floor(1000 * self.percentage),
active_flags, 1)
self.sizer.Add(self.inactive_rectangle, floor(1000 * (1.0 - self.percentage)),
wx.EXPAND | wx.ALL, 1)
if self.percentage == 0.0:
self.active_rectangle.Hide()
self.inactive_rectangle.Show()
elif self.percentage == 1.0:
self.active_rectangle.Show()
self.inactive_rectangle.Hide()
else:
self.active_rectangle.Show()
self.inactive_rectangle.Show()
self.sizer.Layout()
self.Refresh()
class TestWindow(wx.Frame):
def __init__(self, parent):
wx.Frame.__init__(self, parent, title='Test')
vertical_sizer = wx.BoxSizer(wx.VERTICAL)
vertical_sizer.Add((1, 1), 1)
vertical_sizer.Add(PercentageBar(self, percentage=0.33), 2, wx.EXPAND)
vertical_sizer.Add((1, 1), 1)
horizontal_sizer = wx.BoxSizer(wx.HORIZONTAL)
horizontal_sizer.Add((1, 1), 1)
horizontal_sizer.Add(vertical_sizer, 10, wx.EXPAND)
horizontal_sizer.Add((1, 1), 1)
self.SetSizer(horizontal_sizer)
if __name__ == '__main__':
app = wx.App()
window = TestWindow(None)
window.Show()
app.MainLoop()
Answer: Your example is working as expected for me with wxPython 3.0, but your symptom
is a common one. Usually it means that the initial size event is happening
before the sizer has been set, so the default `EVT_SIZE` handler doesn't yet
have a sizer to use for the layout. As soon as the window is resized then the
sizer is used to do the layout and all is as you expected it to be before.
To deal with this problem you just need to trigger the built-in layout
features after the frame and its content have been created (usually at the
end of `__init__`). Since the auto-layout is tied to size events, you
can use the frame's `SendSizeEvent` method to let it have one. Or you can just
explicitly call the `Layout` method yourself (see the sketch below).
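A minimal sketch of that fix: add one line at the end of `TestWindow.__init__`
in the question's code, right after `self.SetSizer(horizontal_sizer)`:
    self.SetSizer(horizontal_sizer)
    # force an initial layout pass so the bar is drawn before any user resize
    self.Layout()            # or, equivalently: self.SendSizeEvent()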
|
Plotting the function 1/x and -1/x^2 on python in the interval from (-1,1)
Question: Please look at my code and help me. I'm writing code that approximates the
derivative of a function. To visually check how close my approximation is to
the real derivative, I'm plotting both of them together.
My problem is when the function is not defined at zero, like 1/x with a
derivative of -1/x^2.
Thanks in advance.
# -*- coding: utf-8 -*-
from pylab import *
import math
def Derivative(f,x, Tol = 10e-5, Max = 20):
try:
k = 2
D_old = (f(x+2.0**(-k))-f(x-2.0**-k))/(2.0**(1-k))
k = 3
D_new = (f(x+2.0**(-k))-f(x-2.0**-k))/(2.0**(1-k))
E_old = abs(D_new - D_old)
while True:
D_old = D_new
k+=1
D_new = (f(x+2.0**(-k))-f(x-2.0**-k))/(2.0**(1-k))
E_new = abs(D_old - D_new)
if E_new < Tol or E_new >= E_old or k >= Max:
return D_old
except:
return nan
def Fa(x):
return math.sin(2*math.pi*x)
def Fap(x):
return 2*math.pi*math.cos(2*math.pi*x)
def Fb(x):
return x**2
def Fbp(x):
return 2*x
def Fc(x):
return 1.0/x
def Fcp(x):
if abs(x)<0.01:
return 0
else:
return -1.0/x**2
def Fd(x):
return abs(x)
def Fdp(x):
return 1 #since it's x/sqrt(x**2)
# Plot of Derivative Fa
xx = arange(-1, 1, 0.01) # A Numpy vector of x-values
yy = [Derivative(Fa, x) for x in xx] # Vector of f’ approximations
plot(xx, yy, 'r--', linewidth = 5) # solid red line of width 5
yy2 = [Fap(x) for x in xx]
plot(xx, yy2, 'b--', linewidth = 2) # solid blue line of width 2
# Plot of Derivative Fb
yy = [Derivative(Fb, x) for x in xx] # Vector of f’ approximations
plot(xx, yy, 'g^', linewidth = 5) # solid green line of width 5
yy2 = [Fbp(x) for x in xx]
plot(xx, yy2, 'y^', linewidth = 2) # solid yellow line of width 2
Answer: "1/x" is an infinite function and you can not plot a function that is not
defined to zero. You can only plot the function with a broken axis. For the
broken axis, you can follow the suggestion of wwii in comments or you can
follow this
[tutorial](https://github.com/matplotlib/matplotlib/blob/master/examples/pylab_examples/broken_axis.py)
for [matplotlib](http://matplotlib.org/).
This tutorial shows how you can use two subplots to create the effect you
desire.
Here the example code:
"""
Broken axis example, where the y-axis will have a portion cut out.
"""
import matplotlib.pylab as plt
import numpy as np
# 30 points between 0 0.2] originally made using np.random.rand(30)*.2
pts = np.array([ 0.015, 0.166, 0.133, 0.159, 0.041, 0.024, 0.195,
0.039, 0.161, 0.018, 0.143, 0.056, 0.125, 0.096, 0.094, 0.051,
0.043, 0.021, 0.138, 0.075, 0.109, 0.195, 0.05 , 0.074, 0.079,
0.155, 0.02 , 0.01 , 0.061, 0.008])
# Now let's make two outlier points which are far away from everything.
pts[[3,14]] += .8
# If we were to simply plot pts, we'd lose most of the interesting
# details due to the outliers. So let's 'break' or 'cut-out' the y-axis
# into two portions - use the top (ax) for the outliers, and the bottom
# (ax2) for the details of the majority of our data
f,(ax,ax2) = plt.subplots(2,1,sharex=True)
# plot the same data on both axes
ax.plot(pts)
ax2.plot(pts)
# zoom-in / limit the view to different portions of the data
ax.set_ylim(.78,1.) # outliers only
ax2.set_ylim(0,.22) # most of the data
# hide the spines between ax and ax2
ax.spines['bottom'].set_visible(False)
ax2.spines['top'].set_visible(False)
ax.xaxis.tick_top()
ax.tick_params(labeltop='off') # don't put tick labels at the top
ax2.xaxis.tick_bottom()
# This looks pretty good, and was fairly painless, but you can get that
# cut-out diagonal lines look with just a bit more work. The important
# thing to know here is that in axes coordinates, which are always
# between 0-1, spine endpoints are at these locations (0,0), (0,1),
# (1,0), and (1,1). Thus, we just need to put the diagonals in the
# appropriate corners of each of our axes, and so long as we use the
# right transform and disable clipping.
d = .015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color='k', clip_on=False)
ax.plot((-d,+d),(-d,+d), **kwargs) # top-left diagonal
ax.plot((1-d,1+d),(-d,+d), **kwargs) # top-right diagonal
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d,+d),(1-d,1+d), **kwargs) # bottom-left diagonal
ax2.plot((1-d,1+d),(1-d,1+d), **kwargs) # bottom-right diagonal
# What's cool about this is that now if we vary the distance between
# ax and ax2 via f.subplots_adjust(hspace=...) or plt.subplot_tool(),
# the diagonal lines will move accordingly, and stay right at the tips
# of the spines they are 'breaking'
plt.show()
|
pcolormesh use of memory
Question: I've been struggling with this for a while. I have a set of images, I perform
some math on the X, Y coordinates of these images and then plot the new images
using pcolormesh. All the calculations I've already done, all I do is load the
new X's and new Y's and use the colors from the image in pcolormesh.
The images are 2048x2448 pixels (say approx 5mp), first image goes pretty fast
and every image after that the script gets slower and eats more memory. I have
tried some garbage collection but it doesn't work.
My script:
import numpy as np
from PIL import Image
import cPickle as pickle
import matplotlib.pyplot as plt
import os
# TRY forced garbage collection!
import gc
def rectify(cam_files, cam_rec_files, rec_file_location):
''' cam_files is a dictionary that contains the filenames with the camera-names as index
example: {'KDXX04C' : C:\Users\Yorian\Desktop\TU\Stage Shore\python_files\Rectify, metadata and upload\v3\archive\KDXXXXX\original\snap\1381383000\{filename}.jpg }
cam_rec_files_dir contains a dictionary, cameraName : fileLocation
example: {'KDXX04C' : C:\Users\Yorian\Desktop\TU\Stage Shore\python_files\Rectify, metadata and upload\v3\camdata\KDXXXXX\KDXX04C.pkl }
rec_file_location is a string that shows where the new rectification needs to be saved '''
fig, ax = plt.subplots(1, 1, figsize=(60,90))
for camname in cam_files:
img = Image.open(cam_files[camname])
img = np.asarray(img, dtype=np.uint8)
height, width, channels = img.shape
# load plot data
fh = open(cam_rec_files[camname], 'rb')
X = pickle.load(fh)
Y = pickle.load(fh)
masks = [X<=0, X>1500, Y>4000, Y<-4000]
total_mask = masks[0] | masks[1] | masks[2] | masks[3]
first_false = np.argwhere(total_mask == 0)
start = int(first_false[0]/width)
rgb = img.reshape((-1,3))/255.0
rgba = np.concatenate((rgb, np.ones((rgb.shape[0],1), dtype=np.uint8)), axis=1)
rgba[total_mask,3] = 0
rgba = rgba.reshape((height,width,4))[:,:-1,:]
rgba = rgba.reshape((-1,4))
plotimg = ax.pcolormesh(X.reshape((height, width))[start:,:], Y.reshape((height, width))[start:,:], img.mean(-1)[start:,:], cmap='Greys') # img.mean(-1)
plotimg.set_array(None)
plotimg.set_edgecolor('none')
plotimg.set_facecolor(rgba[(start*(width-1)):,:])
fh.close()
plt.savefig(rec_file_location)
gc.collect()
It works for up to six images, but when I try eight, for example, I run out of
memory (I use 64-bit Python and have 12 GB of memory on my computer,
which I imagined would be enough).
Does anybody have an idea on how to solve this problem?
Answer: In a nutshell, call `plt.close(fig)` when you're through with it if you're
using the `pyplot` interface and want to generate lots of figures without
displaying them.
Each time you call your `rectify` function, you're making a new (very large!!)
figure and then _keeping it in memory_. `pyplot` keeps a reference to the
figure so it can be displayed when you call `plt.show()`. Either call
`plt.close(fig)` or create the figures without using the `pyplot` state
machine. (`fig.clf()` will also work, but will keep references to a blank
figures around.)
Also, given that you're reading in image files, your values are presumably on
a regular x and y grid. If so, use `imshow` instead of `pcolormesh`. It's much
faster and more memory efficient.
* * *
As an example of the first issue, your `rectify` function basically does
something like this, and you're presumably calling it repeatedly (as the loop
below does):
import numpy as np
import matplotlib.pyplot as plt
def rectify():
fig, ax = plt.subplots()
ax.pcolormesh(np.random.random((10,10)))
fig.savefig('blah.png')
for _ in range(10):
rectify()
plt.show()
Notice that we'll get 10 figures popping up. `pyplot` holds on to a reference
to the figure so that it can be displayed with `show`.
If you want to remove the figure from the `pyplot` state machine, call
`plt.close(fig)`.
For example, no figures will be displayed if you do this: (each figure will be
garbage collected as you'd expect after you remove the figure from pyplot's
figure manager by calling `plt.close(fig)`.)
import numpy as np
import matplotlib.pyplot as plt
def rectify():
fig, ax = plt.subplots()
ax.pcolormesh(np.random.random((10,10)))
fig.savefig('blah.png')
plt.close(fig)
for _ in range(10):
rectify()
plt.show()
Alternately, you can bypass `pyplot` and make the figure and canvas directly.
Pyplot's figure manager won't be involved, and the figure instance will be
garbage collected as you'd expect. However, this method is rather verbose, and
assumes you know a bit more about how matplotlib works behind-the-scenes:
import numpy as np
from matplotlib.figure import Figure
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
# Don't need this, but just to demonstrate that `show()` has no effect...
import matplotlib.pyplot as plt
def rectify():
fig = Figure()
FigureCanvas(fig)
ax = fig.add_subplot(1,1,1)
ax.pcolormesh(np.random.random((10,10)))
fig.savefig('blah.png')
for _ in range(10):
rectify()
plt.show()
|
Mule jython running with main
Question: > Problem:
How do I run a Jython script that has a main function in a Mule application?
Issue:
I have a simple Jython script invoked by a Mule flow.
Flow:
<mule xmlns:...
<flow name="wfileFlow1" doc:name="wfileFlow1">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" doc:name="HTTP"/>
<logger message="===\n START ===" level="INFO" doc:name="Logger"/>
<scripting:component doc:name="Python">
<scripting:script engine="jython" file="src/main/java/com/test/Test1.py"/>
</scripting:component>
<logger message="===\n END ===" level="INFO" doc:name="Logger"/>
</flow>
</mule>
> Test1.py
def add(a,b):
return a+b
def addFixedValue(a):
y = 5
return y +a
print add(1,2)
print addFixedValue(1)
output:
===\n START ===
3
6
===\n END ===
If I run it with a main guard, there is no output, i.e. it doesn't print anything.
> Test1.py
def add(a,b):
return a+b
def addFixedValue(a):
y = 5
return y +a
if __name__ == '__main__':
print add(3,4)
print addFixedValue(1)
It prints no jython values:
===\n START ===
===\n END ===
Note here no jython values are printed.
The issue is that the second version only runs its main block when executed as a
main program, so how do I run a main program from a Mule application if my flow
above is wrong?
Answer: The `__name__` variable is [just a Python interpreter
hack](http://stackoverflow.com/questions/419163/what-does-if-name-main-do) for
executing sections in a main source file (and ignoring them in imported
modules). Mule sets the value of `__name__` to `main` when running Jython
scripts, so I guess you could do:
if __name__ == '__main__' or __name__ == 'main':
|
Parse/Creating Types in Python
Question: I'm trying to figure out how to create types and convert the user's
according to those types. For example:
user enters: 5
5 is of type Number 5 gets added to the list
user enters: 3.5
3.5 is of type Number 3.5 gets added to the list
user enters: :true:
:true: is of type Boolean :true: gets added to the list
I want my program to be able to convert the number 5 to an int and then put it
on the stack, convert 3.5 to a float and put it on the stack, and know that :true: has
a value of True.
Here's what I've tried so far, but it isn't working the way it's supposed to:
#!/util/bin/python
import re
import shlex
stack = []
tokens = (
('STRING', re.compile('"[^"]+"')), # longest match
('NAME', re.compile('[a-zA-Z_]+')),
('SPACE', re.compile('\s+')),
('NUMBERD', re.compile('\d.')),
('NUMBERI', re.compile('\d+')),
('TRUE', re.compile(':true:')),
('FALSE', re.compile(':false:')),
)
def main ():
while 1:
inp = input('INPUT> ')
push(inp,stack) #Push User Input unto stack
printStack(stack) #Print Stack
quit(inp,stack) #Exit program if prompted
def push(s,list):
tokens = shlex.split(s, posix=False)
for token in tokens:
if type(s) == 'NUMBERI':
list.append(int(token))
elif type(s) == 'NUMBERD':
list.append(float(token))
else:
list.append(token)
return list
def printStack(stack):
for index in stack[::-1]:
print (index)
def quit (x, stack):
if x == "quit":
stack.pop()
exit(0)
def type(s):
i = 0
lexeme = []
while i < len(s):
match = False
for token, regex in tokens:
result = regex.match(s, i)
if result:
return(token)
i = result.end()
match = True
break
if not match:
raise Exception('lexical error at {0}'.format(i))
def add (x,y):
return(x + y)
def sub (x,y):
return(x - y)
def mul (x,y):
return(x * y)
def div (x,y):
return(x / y)
main()
Answer: Use
[`ast.literal_eval`](https://docs.python.org/3.4/library/ast.html#ast.literal_eval),
it safely evaluates a string as an expression.
Demo:
>>> from ast import literal_eval
>>> a = literal_eval('5')
>>> b = literal_eval('3.5')
>>> c = literal_eval('True')
>>> [(e, type(e).__name__) for e in (a, b, c)]
[(5, 'int'), (3.5, 'float'), (True, 'bool')]
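Note that `literal_eval` will not recognize the `:true:`/`:false:` tokens from
the question, so a small sketch of a converter could check a boolean mapping
first and fall back to `literal_eval` (returning the raw string on failure is
an assumption about the behaviour you want):
    from ast import literal_eval
    BOOLEANS = {':true:': True, ':false:': False}
    def convert(token):
        if token in BOOLEANS:
            return BOOLEANS[token]
        try:
            return literal_eval(token)    # handles ints, floats, quoted strings, ...
        except (ValueError, SyntaxError):
            return token                  # anything else stays a plain string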
|
Creating two button methods Python
Question: I am creating a program in Python that creates a shape based on user input. I
need to create two functions to create buttons using [Zeller's
graphic.py](http://mcsp.wartburg.edu/zelle/python/graphics.py) file. One
button needs to say Quit and the second needs to say Process. Here is what i
have so far but as you can see, they are not in defined functions:
#create Quit button
quitButton = Text(Point(70,73), "Quit")
quitButton.draw(w)
Rectangle(Point(45, 50), Point(95,97)).draw(w)
#create Process button
enterButton = Text(Point(145,73), "Process")
enterButton.draw(w)
Rectangle(Point(120, 48), Point(170,98)).draw(w)
Here is a description of the necessary methods
* `createButton(text, pt1button, pt2button, win)` creates a rectangle with corner points `pt1button` and `pt2button` with centered text in window win
* `clickedButton(button, clickPt)` returns true/false if `clickPt` is in button.
I tried to create the function and received the following error.
Here is my function:
def createButton(text, pt1button, pt2button, win):
button = Text(Point(pt1button, pt2button), text)
button.draw(win)
Here is where I called the function:
createButton("Process",145,73,win)
createButton("Quit",70,73,win)
Here is the error that was thrown:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/project4FINAL.py", line 77, in <module> main()
File "/Library/Frameworks/Python.framework/Versions/3.3/lib/python3.3/project4FINAL.py", line 27, in main buttonCreate("Process", 145,73, win)
NameError: global name 'win' is not defined
Any suggestions?
Answer: So from looking at the code it looks like you want to create a callback
function for each button and then assign each to the canvas via the
`setMouseHandler` of method of `GraphWin`.
So from the example given in the API:
from graphics import *
def example_callback():
print "I love unicorns!"
def main():
# win is a Canvas with a setMouseHandler method
win = GraphWin("My Circle", 100, 100)
c = Circle(Point(50,50), 10)
c.draw(win)
#Add a callback to the canvas
c.canvas.setMouseHandler(example_callback)
# or win.setMouseHandler(example_callback)
win.getMouse() # Pause to view result
win.close() # Close window when done
main()
Unless you have bounds checking in you callbacks (to see which shape on the
canvas the mouse is inside), you should only have one canvas per drawn shape.
An example following the use of a `createButton` function:
def createButton(text, pt1button, pt2button, win):
button = Text(Point(pt1button, pt2button), text)
button.draw(win)
return button
def _callback(pt):
print "I love unicorns!"
print
print "Mouse clicked at x=%d, y=%d"%(pt.x,pt.y)
print
def test():
win = GraphWin()
win.setCoords(0,0,100,100)
quitButton = createButton("Quit",70,73,win)
Rectangle(Point(45, 50), Point(95,97)).draw(win)
win.setMouseHandler(_callback)
while True:
win.getMouse()
win.close()
Below is a complete example using a new Button object:
from graphics import *
class Button(object):
def __init__(self, text, text_pos, rect_pos, win, callback):
self.win = win
self.text = Text(text_pos, text)
self.text.draw(self.win)
# the limits of the button will be defined by the rectangle
self.coords = [rect_pos[0].x,rect_pos[0].y,rect_pos[1].x,rect_pos[1].y]
self.rect = Rectangle(*rect_pos)
self.rect.draw(self.win)
# set the buttons callback
self.callback = callback
def _is_inside(self,click):
limits = self.coords
return (limits[0] < click.x < limits[2]) and (limits[1] < click.y < limits[3])
class MyWindow(object):
def __init__(self,coords=(0,0,100,100)):
self.win = GraphWin()
self.win.setCoords(*coords)
# a list of all possible buttons
self.buttons = []
# register the master callback function
self.win.setMouseHandler(self._callback)
self._quit = False
#create a quit and confess button with a custom create method
self.create_button("Quit",(Point(10,10),Point(40,40)),Point(20,20),self.quit)
self.create_button("Confess",(Point(50,50),Point(80,80)),Point(60,60),self.confess)
def create_button(self,text,coords,text_coords,callback):
button = Button(text,text_coords,coords,self.win,callback)
# create a button and add it to our list of buttons
self.buttons.append(button)
def confess(self,point):
print
print "I love unicorns!"
print
def quit(self,point):
self._quit = True
self.win.close()
# This function is called for any click on the canvas
def _callback(self,point):
# Need to do a coordinate transform here to get the click in the coordinates of the button
x,y = self.win.trans.world(point.x,point.y)
point = Point(x,y)
print "Clicked at x=%d, y=%d"%(point.x,point.y)
# cycle through buttons and execute all for which the clicked point lies inside their rectangle
for button in self.buttons:
if button._is_inside(point):
button.callback(point)
# a loop to keep getting mouse input
def run(self):
while not self._quit:
try:
self.win.getMouse()
except GraphicsError:
break
if __name__ == "__main__":
x = MyWindow()
x.run()
|
How to do inputs in Python
Question: I have to open file in Python, what looks like this:
Rapla;Tartu;157
Tallinn;Narva;211
Valga;Haapsalu;249
Viljandi;Paide;71
Tartu;Rakvere;123
Rapla;Narva;259
Paide;Narva;196
Paide;Tallinn;92
I want the program to work like this, for example:
Please enter starting point: Rapla
Now please enter ending point: Tartu
Distance between Rapla and Tartu is 157 km.
I managed to make a list in Python, but I'm fairly new to Python and I don't
know how to do inputs so that I could get an acceptable output.
What I managed to do so far:
town1 = []
town2 = []
distance = []
f = open('town.csv')
for list in f:
x = list.split(';')
town1.append(x[0])
town2.append(x[1])
distance.append(x[2])
f.close()
n = len(town1)
for i in range(n):
print('Starting point is: ' + town1[i])
print('Ending point is: ' + town2[i])
print('Distance between', town1[i], 'and', town2[i], 'is', distance[i], 'km.')
The program just prints every starting point and ending point with the
distance, but I want the program to ask me for a starting point and an ending
point.
Answer:
# assumes Python 2.x
from collections import defaultdict
import csv
def load_distances(fname):
distance = defaultdict(dict)
with open(fname, "rb") as inf:
incsv = csv.reader(inf, delimiter=";")
for town_a, town_b, dist in incsv:
distance[town_a][town_b] = int(dist)
return distance
def main():
distance = load_distances("town.csv")
town_a = raw_input("Please enter starting point: ")
town_b = raw_input("Now please enter ending point: ")
if town_a not in distance or town_b not in distance[town_a]:
print("I don't know how to get from {} to {}".format(town_a, town_b))
else:
print("Distance between {} and {} is {} km.".format(town_a, town_b, distance[town_a][town_b]))
if __name__=="__main__":
main()
If you are using Python 3, you need to replace `raw_input()` with `input()`
and `open(fname, "rb")` with `open(fname, newline="")`.
`defaultdict` is a special type of `dict`; if you ask it for an object it
doesn't have, it returns a new default object instead of causing an error.
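A quick illustration of that behaviour with the `defaultdict(dict)` used above:
    >>> from collections import defaultdict
    >>> distance = defaultdict(dict)
    >>> distance['Rapla']['Tartu'] = 157   # no KeyError: 'Rapla' gets a fresh {} automatically
    >>> distance['Rapla']
    {'Tartu': 157}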
|
Python reading file to dictionary
Question:
templist=[]
temp=[]
templist2=[]
tempstat1={}
station1={}
station2={}
import os.path
def main():
#file name=animallog.txt
endofprogram=False
try:
filename=input("Enter name of input file >")
file=open(filename,"r")
except IOError:
print("File does not exist")
endofprogram=True
for line in file:
line=line.strip('\n')
if (len(line)!=0)and line[0]!='#':
(x,y,z)=line.split(':')
record=(x,z)
if record[0] not in station1 or record[0] not in station2:
if record[1]=='s1' and record[0] not in station1:
station1[0]=1
if record[1]=='s2' and record[0] not in station2:
station2[0]=1
elif record[0] in station1 or record[0] in station2:
if record[1]=='s1':
station1[0]=station1[0]+1
elif record[1]=='s2':
station2[0]=station2[0]+1
print(station1)
print(station2)
main()
Hi guys!
I was just working on a program that reads from a file of this format (given at
the bottom).
But for some reason the output is `{0:1}` for both `station1` and `station2`.
I was just wondering why this is happening? I tried using the debug function
but was not able to understand it. Appreciate all your effort! Thanks :)
FILE FORMAT:
(NAME:DATE:STATION NUMBER)
a01:01-24-2011:s1
a03:01-24-2011:s2
a03:09-24-2011:s1
a03:10-23-2011:s1
a04:11-01-2011:s1
a04:11-02-2011:s2
a04:11-03-2011:s1
a04:01-01-2011:s1
Answer: Your dictionaries only hold `{0:1}` because that's all you're putting in them!
station1[0]=1 # This sets a key-value pair of 0 : 1
I'm not totally sure what your expected output is, but I think you're making
this harder than it needs to be. I'm guessing you want something like this:
name, date, station = line.split(':') # use meaningful identifiers!
if name not in station1 and station == 's1':
station1[name] = date
elif name not in station2 and station == 's2':
station2[name] = date
This will give you output dictionaries like this:
{'a01' : '01-24-2011',
'a03' : '09-24-2011'}
Note that by checking if the keys are already in the dictionary, you'll only
add the first of any non-unique keys you come across to any given dictionary
(for example, you'd only get the first two of your four `'a04'` entries in
your example input--the second two would be ignored, because `'a04'` is
already in both dictionaries).
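Putting it together, a minimal sketch of the whole loop (assuming the file is
really named animallog.txt and follows the NAME:DATE:STATION format shown in
the question) might look like this:
    station1 = {}
    station2 = {}
    with open("animallog.txt") as infile:         # file name is an assumption
        for line in infile:
            line = line.strip()
            if not line or line.startswith('#'):
                continue                          # skip blanks and comments
            name, date, station = line.split(':')
            if station == 's1' and name not in station1:
                station1[name] = date             # keep the first date per animal
            elif station == 's2' and name not in station2:
                station2[name] = date
    print(station1)
    print(station2)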
|
Setting up Django on apache with mod_wsgi
Question: Trying to set up Django on Apache on CentOS 6 using mod_wsgi, but I'm not sure
what I have set up wrong. I've tried a lot of different setup guides but I
always get the same error in the apache logs:
[Mon Mar 31 19:51:22 2014] [error] [client ::1] mod_wsgi (pid=39608): Target WSGI script '/opt/django/movies/movies/wsgi.py' cannot be loaded as Python module.
[Mon Mar 31 19:51:22 2014] [error] [client ::1] mod_wsgi (pid=39608): Exception occurred processing WSGI script '/opt/django/movies/movies/wsgi.py
[Mon Mar 31 19:51:22 2014] [error] [client ::1] Traceback (most recent call last):
[Mon Mar 31 19:51:22 2014] [error] [client ::1] File "/opt/django/movies/movies/wsgi.py", line 16, in <module>
[Mon Mar 31 19:51:22 2014] [error] [client ::1] from django.core.wsgi import get_wsgi_application
[Mon Mar 31 19:51:22 2014] [error] [client ::1] ImportError: No module named wsgi
wsgi.py
import os
import sys
sys.path.append('/opt/django/movies/movies')
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
/etc/httpd/conf/httpd.conf
<VirtualHost *:80>
WSGIScriptAlias / /opt/django/movies/movies/wsgi.py
ServerName localhost
DocumentRoot "/opt/django/movies"
<Directory /opt/django/movies/movies>
<Files wsgi.py>
Order deny,allow
Allow from all
</Files>
</Directory>
</VirtualHost>
Answer: The version of Django installed into the Python installation which mod_wsgi is
using is an old version. The django.core.wsgi module didn't always exist.
You should work out what Python installation mod_wsgi is using and update the
Django installed. See:
* <http://code.google.com/p/modwsgi/wiki/CheckingYourInstallation#Python_Installation_In_Use>
BTW, do not set DocumentRoot such that your Django project lives under that
directory; it is one step towards a security problem, since a slight
misconfiguration of your Apache could expose all your source code, including
the settings file with its passwords.
|
python multithreading with list I/O
Question: I'm wanting to achieve multithreading in python where the threaded function
does some actions and adds a URL to a list of URLs (`links`) and a listener
watches the `links` list from the calling script for new elements to iterate
over. Confused? Me too, I'm not even sure how to go about explaining this, so
let me try to demonstrate with pseudo-code:
from multiprocessing import Pool
def worker(links):
#do lots of things with urllib2 including finding elements with BeautifulSoup
#extracting text from those elements and using it to compile the unique URL
#finally, append a url that was gathered in the `lots of things` section to a list
links.append( `http://myUniqueURL.com` ) #this will be unique for each time `worker` is called
links = []
for i in MyBigListOfJunk:
Pool().apply(worker, links)
for link in links:
#do a bunch of stuff with this link including using it to retrieve the html source with urllib2
Now, rather than waiting for all the `worker` threads to finish and iterate
over `links` all at once, is there a way for me to iterate over the URLs as
they are getting appended to the `links` list? Basically, the `worker`
iteration to generate the `links` list HAS to be separate from the iteration
of `links` itself; however, rather than running each sequentially I was hoping
I could run them somewhat concurrently and save some time... currently I must
call `worker` upwards of 30-40 times within a loop and the entire script takes
roughly 20 minutes to finish executing...
Any thoughts would be very welcome, thank you.
Answer: You should use the `Queue` class for this. It is a thread-safe queue. Its `get`
method removes an item from the queue and, importantly, blocks when there are
no items, waiting until other processes add them. If you use
`multiprocessing` then you should use the `Queue` from that module, not the
`Queue` module. Next time you ask questions about processes, provide the exact
Python version you want it for. This is for 2.6.
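A minimal sketch of that idea for your `links` scenario (the URL built inside
`worker` is only a placeholder for whatever your urllib2/BeautifulSoup code
actually produces):
    from multiprocessing import Process, Queue
    def worker(item, queue):
        # ...do the urllib2/BeautifulSoup work for `item` here...
        url = "http://myUniqueURL.com/%s" % item   # placeholder for the real URL
        queue.put(url)
    if __name__ == '__main__':
        queue = Queue()
        items = range(10)                          # stands in for MyBigListOfJunk
        workers = [Process(target=worker, args=(item, queue)) for item in items]
        for w in workers:
            w.start()
        for _ in items:                            # one URL expected per worker
            link = queue.get()                     # blocks until a URL is ready
            print(link)                            # ...do the per-link work here...
        for w in workers:
            w.join()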
|
Trouble importing a Python module
Question: I'm new to Python and having trouble importing from a different directory.
The file I'm currently in has the path home/pi/example.py
The module I'm trying to import into this file has the path
home/pi/ReactiveEngine/src/PiEngine.py
I've tried the following:
import sys
sys.path.append('home/pi/ReactiveEngine/src')
import sys.PiEngine.py
But it's saying there's no module named PiEngine.py
Can someone let me know what I'm doing wrong?
Answer: When you import, don't put the suffix `.py`, and don't prefix the module with
`sys.`; once its directory is on `sys.path` you can import it by its own name:
    import sys
    sys.path.append('home/pi/ReactiveEngine/src')
    import PiEngine
It should work without the `.py`, because you import a module by its name, not
its file name, so the suffix is redundant!
Or as Louis said below, you can use the following:
import os
os.chdir('home/pi/ReactiveEngine/src')
import PiEngine
|
display an rgb matrix image in python
Question: I have an rgb matrix something like this:
    image=[[(12,14,15),(23,45,56),(45,56,78),(93,45,67)],
[(r,g,b),(r,g,b),(r,g,b),(r,g,b)],
[(r,g,b),(r,g,b),(r,g,b),(r,g,b)],
..........,
[()()()()]
]
I want to display an image which contains the above matrix.
I used this function to display a gray-scale image:
def displayImage(image):
displayList=numpy.array(image).T
im1 = Image.fromarray(displayList)
im1.show()
The argument (image) holds the matrix.
Can anybody help me with how to display the rgb matrix?
Answer: **imshow** in the matplotlib library will do the job
what's critical is that your NumPy array has the correct shape:
height x width x 3
(or height x width x 4 for RGBA)
>>> import os
>>> # fetching a random png image from my home directory, which has size 258 x 384
>>> img_file = os.path.expanduser("test-1.png")
>>> from scipy import misc
>>> # read this image in as a NumPy array, using imread from scipy.misc
>>> M = misc.imread(img_file)
>>> M.shape # imread imports as RGBA, so the last dimension is the alpha channel
    (258, 384, 4)
>>> # now display the image from the raw NumPy array:
>>> from matplotlib import pyplot as PLT
>>> PLT.imshow(M)
>>> PLT.show()
|
Python pandas Google finance international stocks - looking for way to get international stocks price history with Google
Question: This works: domestic stocks
gticker='NYSE:IBM'
import pandas.io.data as web
dfg = web.DataReader(gticker, 'google', '2013/1/1', '2014/3/1')
This does not: international stocks
gticker='HKG:0700'
import pandas.io.data as web
dfg = web.DataReader(gticker, 'google', '2013/1/1', '2014/3/1')
even though for both, you can go to the "Historical prices" link and see
historical data.
Any suggestions?
Answer: The DataReader for google wants to download a csv file. So for 'goog' it
requests the following URL which fetches the csv file:
[http://www.google.com/finance/historical?q=GOOG&startdate=Jan+1%2C+2013&enddate=Mar+1%2C+2014&output=csv](http://www.google.com/finance/historical?q=GOOG&startdate=Jan+1%2C+2013&enddate=Mar+1%2C+2014&output=csv)
This is true for all the domestic stocks (like IBM). But for 'HKG:0700' it
requests:
[http://www.google.com/finance/historical?q=HKG%3A0700&startdate=Jan+01%2C+2014&enddate=Mar+01%2C+2014&output=csv](http://www.google.com/finance/historical?q=HKG%3A0700&startdate=Jan+01%2C+2014&enddate=Mar+01%2C+2014&output=csv)
and that produces a 'The requested URL was not found on this server.' error.
You can look at the historical data at:
[http://www.google.com/finance/historical?q=HKG%3A0700&startdate=Jan+01%2C+2014&enddate=Mar+01%2C+2014](http://www.google.com/finance/historical?q=HKG%3A0700&startdate=Jan+01%2C+2014&enddate=Mar+01%2C+2014)
But it doesn't look like you can get a csv file.
You can see what it is doing in pandas/io/data.py when it creates the URL:
# www.google.com/finance/historical?q=GOOG&startdate=Jun+9%2C+2011&enddate=Jun+8%2C+2013&output=csv
url = "%s%s" % (_HISTORICAL_GOOGLE_URL,
urlencode({"q": sym,
"startdate": start.strftime('%b %d, ' '%Y'),
"enddate": end.strftime('%b %d, %Y'),
"output": "csv"}))
|
Auto kill process and child process of multiprocessing Pool
Question: I am using the multiprocessing module for parallel processing. The code snippet
below searches for the string filename in X location and returns the file name
where the string was found. But in some cases the search takes a long time, so
I was trying to kill any search process that takes more than 300 seconds. For
that I used timeout == 300 as given below; this kills the search process but it
doesn't kill the child processes spawned by the code below.
I tried to find multiple ways but had no success :/
How can I kill the parent process from the Pool along with its child processes?
import os
from multiprocessing import Pool
def runCmd(cmd):
lresult = os.popen(cmd).read()
return lresult
    def main():
p = Pool(4)
data_paths = [list of paths of store data]
search_cmds = [ "SearchText.exe %s < %s"%(data_path, filename) for data_path in data_paths ]
results = [p.apply_async(runCmd, (cmd,), callback = log_result) for cmd in search_cmds]
try:
for result in results:
root.append(result.get(timeout=300))
#rool holds the result of search process
except TimeoutError:
for c in multiprocessing.active_children():
print '----->',c.pid
os.kill(c.pid, signal.SIGTERM)
p.close()
p.join()
if __name__ == '__main__':
main()
Process Tree in Process Explorer :
cmd.exe
------python.exe
----------------python.exe
--------------------------cmd.exe
---------------------------------SearchText.exe
----------------python.exe
--------------------------cmd.exe
---------------------------------SearchText.exe
----------------python.exe
--------------------------cmd.exe
---------------------------------SearchText.exe
----------------python.exe
--------------------------cmd.exe
---------------------------------SearchText.exe
The above code snippet doesn't kill these child processes:
--------------------------cmd.exe
---------------------------------SearchText.exe
--------------------------cmd.exe
---------------------------------SearchText.exe
--------------------------cmd.exe
---------------------------------SearchText.exe
--------------------------cmd.exe
---------------------------------SearchText.exe
These child search processes remain; I want these child processes to also get
killed. Please guide me.
Thanks
Answer: I was able to solve my issue using the psutil module.
I found the solution in the post below:
import psutil, os
def kill_proc_tree(pid, including_parent=True):
parent = psutil.Process(pid)
for child in parent.get_children(recursive=True):
child.kill()
if including_parent:
parent.kill()
me = os.getpid()
kill_proc_tree(me)
> <http://stackoverflow.com/a/4229404/420557>
|
How to parse JSON passed on the command line
Question: I am trying to pass JSON parameters through command line in Python:
automation.py {"cmd":"sel_media","value":"5X7_photo_paper.p}
how can I extract the values `sel_media` and `5X7_photo_paper.p`?
I used the following code, but it is not working:
cmdargs = str(sys.argv[1])
print cmdargs
Answer: Provided you pass actual valid JSON to the command line and quote it
correctly, you can parse the value with the [`json`
module](https://docs.python.org/2/library/json.html).
You need to quote the value properly, otherwise your shell or console will
interpret the value instead:
automation.py '{"cmd":"sel_media","value":"5X7_photo_paper.p"}'
should be enough for a bash shell.
In Python, decode with
[`json.loads()`](https://docs.python.org/2/library/json.html#json.loads):
import sys
import json
cmdargs = json.loads(sys.argv[1])
print cmdargs['cmd'], cmdargs['value']
Demo:
$ cat demo.py
import sys
import json
cmdargs = json.loads(sys.argv[1])
print cmdargs['cmd'], cmdargs['value']
$ bin/python demo.py '{"cmd":"sel_media","value":"5X7_photo_paper.p"}'
sel_media 5X7_photo_paper.p
|
Streams in computing
Question: In the world of computing, I've mainly been reading about the Python
programming language and the socket module. I understand that sockets in
communication are just communication end-points, and that every communication
uses a socket.
import socket
#Importing the low-level socket module in python
But what I have been trying to grasp is streams. I understand them somewhat,
but is it enough to just think of streams as the road connecting two sockets
together, where things like bytes and packets drive from one place to the
other, or is there more to it? Thanks for your simple but thorough
explanation.
Answer: The main difference between a "file" and a "stream" is that a stream cannot be
rewound and read (or written) again. Also, a "stream" usually has no concept
of "framing", meaning it is just a pipe with bytes flowing through, as opposed
to a "message queue" or a "datagram socket". TCP sockets are considered stream
sockets, whereas UDP ones are "datagram" sockets meaning that the boundaries
between message bodies are explicit in the protocol. With a stream you often
need to implement your own framing or "envelope" protocol on top.
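To make the framing point concrete, here is a small sketch (mine, not part of
any particular protocol) of a length-prefix envelope over a pair of connected
stream sockets; it should run as-is on a Unix-like system:
    import socket
    import struct
    def send_msg(sock, payload):
        # prefix each message with a 4-byte big-endian length
        sock.sendall(struct.pack("!I", len(payload)) + payload)
    def recv_exactly(sock, n):
        # a stream hands you bytes, not messages: keep reading until n bytes arrive
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise EOFError("socket closed mid-message")
            data += chunk
        return data
    def recv_msg(sock):
        (length,) = struct.unpack("!I", recv_exactly(sock, 4))
        return recv_exactly(sock, length)
    a, b = socket.socketpair()   # two connected stream sockets
    send_msg(a, b"hello")
    send_msg(a, b"world")
    print(recv_msg(b))           # b'hello' -- message boundaries recovered
    print(recv_msg(b))           # b'world'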
|
Using a queue for synchronous multiprocessing
Question: I have a python program which reads a line from a file, processes the line and
then writes it to a new file. It repeats this for all lines in the file.
Essentially:
for i in range(nlines):
line = read_line(line_number = i)
processed_line = process_line(line)
write_line(line)
I'd like to multiprocess it so that one process takes care of the reading and
writing and another process takes care of the processing:
read line 1 -> read line 2 -> write line 1 -> read line 3 -> write line 2 --> etc
process line 1 --------------> process line 2 ----------------> etc
I guess that I would need to utilise two queues to pass the data back and
forth, although I don't really know how to implement this in practice. Do you
have any ideas of how I might split this problem across 2 processes using
multiprocessing?
Answer:
    from multiprocessing import Process, Queue as mpq
    def worker(qIn, qOut):
        # consume (index, line) pairs until the None sentinel arrives
        for i, line in iter(qIn.get, None):
            qOut.put((i, process_line(line)))
    def main(infilepath, outfilepath):
        qIn, qOut = mpq(), mpq()
        Process(target=worker, args=(qIn, qOut)).start()
        with open(infilepath) as infile, open(outfilepath, 'w') as outfile:
            numLines = 0
            for tup in enumerate(infile):
                qIn.put(tup)
                numLines += 1
            qIn.put(None)  # sentinel: no more input for the worker
            retlines = {}
            nextToWrite = 0
            for _ in range(numLines):
                i, line = qOut.get()
                retlines[i] = line
                # flush whatever can now be written, in the right order
                while nextToWrite in retlines:
                    outfile.write(retlines.pop(nextToWrite))
                    nextToWrite += 1
Of course, this waits to finish reading the input file before it starts
writing to the output file, which is an efficiency bottleneck. I would do this
instead:
    import collections
    from multiprocessing import Process, Queue
    def procWorker(qIn, qOut, numWriteWorkers):
for fpath,i,line in iter(qIn.get, None):
qOut.put((fpath,i, process_line(line)))
for _ in range(numWriteWorkers):
qOut.put(None)
    def readWorker(qIn, qOut, numProcWorkers):
        for infilepath in iter(qIn.get, None):
            with open(infilepath) as infile:
                for i, line in enumerate(infile):
                    qOut.put((infilepath, i, line))
        for _ in range(numProcWorkers):
            qOut.put(None)
    def writeWorker(qIn, qOut):
        outfilepaths = {"test1.in" : "test1.out"} # dict that maps input filepaths to corresponding output filepaths
        lines = collections.defaultdict(dict)
        inds = collections.defaultdict(lambda: -1)
        for fpath, i, line in iter(qIn.get, None):
            if i == inds[fpath] + 1:
                inds[fpath] += 1
                with open(outfilepaths[fpath], 'a') as outfile:
                    outfile.write(line)
                qOut.put((fpath, i))
            else:
                lines[fpath][i] = line
        # flush anything that arrived out of order, in index order
        for fpath in lines:
            with open(outfilepaths[fpath], 'a') as outfile:
                for i in sorted(lines[fpath]):
                    outfile.write(lines[fpath][i])
                    qOut.put((fpath, i))
        qOut.put(None)
def main(infilepaths):
        readqIn, readqOut, procqOut, writeqOut = [Queue() for _ in range(4)]
numReadWorkers = 1 # fiddle to taste
numWriteWorkers = 1 # fiddle to taste
numProcWorkers = 1 # fiddle to taste
for _ in range(numReadWorkers):
Process(target=readWorker, args=(readqIn, readqOut, numProcWorkers)).start()
for infilepath in infilepaths:
readqIn.put(infilepath)
for _ in range(numReadWorkers):
readqIn.put(None)
for _ in range(numProcWorkers):
Process(target=procWorker, args=(readqOut, procqOut, numWriteWorkers)).start()
for _ in range(numWriteWorkers):
Process(target=writeWorker, args=(procqOut, writeqOut)).start()
writeStops = 0
while True:
if writeStops == numWriteWorkers:
break
msg = writeqOut.get()
if msg == None:
writeStops += 1
else:
fpath, i = msg
print("line #%d was written to file %s" %(i, fpath))
Note that this allows for the possibility of multiple readers and writers.
Normally, this would be pointless, as there is only one head on a hard drive.
However, if you are on some distributed filesystem or your files reside on
multiple hard drives, then you could increase the number of read/write workers
to increase your efficiency. Assuming a trivial `process_line` function,
`numReadWorkers` \+ `numWriteWorkers` should equal the number of magnetic
heads you have on all your hard drives. You could balance files on drives (_a
la_ raid) to reach some optimum here, but a lot depends on file sizes,
read/write speeds, caching, etc.
Really, the first speedups you should get are from fiddling with
`numProcWorkers`, which should give you a nice linear increase in efficiency,
bounded of course, by the number of logical core processors on your machine
|
Statistical Significance in terms of Gaussian Sigma
Question: I'm working on an issue in my research where I would like to express my
statistical significance for a correlation peak in terms of sigma of a normal
distribution. For example, if my peak was at 95% significance it would be at
2sigma. Essentially what I'm asking is say I have an arbitrary peak
significance (e.g. 92%), how would I express this in terms of sigma of a
normal distribution? I realize this is a more general statistics question, so
any reading/background is encouraged. Or if Python has a straightforward
function to convert/compute this, that works too. Thanks!
Answer: I'm not sure what you mean by "statistical significance of a correlation
peak," so I can't comment on whether the statistics you're talking about make
any sense. However it sounds like you'd like calculate the following: how many
standard deviations from the mean (say 1.96 sigma) cover a given fraction (in
this case, 0.95) of the normal distribution? If this is what you're asking,
you can use the SciPy statistics library to easily solve this. If you don't
have SciPy already, you'll need to [install it
first](http://scipy.org/install.html).
Once you have SciPy installed, you'll want to use the inverse survival
function (ISF) of the normal distribution. The ISF is the inverse of the
survival function, which itself is
1-[CDF](https://en.wikipedia.org/wiki/Cumulative_distribution_function).
Here's how you do it in python:
In [1]: import scipy.stats as st
In [2]: yourArea = 0.95
In [3]: st.norm.isf((1-yourArea)/2.)
Out[3]: 1.959963984540054
So that's how you calculate the number that I believe you want. The (1-A)/2
business just accounts for the fact the CDF integrates from -infinity, whereas
you're interested in values calculated from the center of the distribution.
|
Teradata 'Unable to get catalog string' error when using ODBC to connect
Question: I am trying to reach a remote host that is running Teradata services via ODBC.
The host that I am trying to connect from is 64-bit RHEL 6.x with the
following Teradata software installed:
1. bteq
2. fastexp
3. fastld
4. jmsaxsmod
5. mload
6. mqaxsmod
7. npaxsmod
8. sqlpp
9. tdodbc
10. tdwallet
11. tptbase
12. tptstream
13. tpump
When I try to connect to the remote host via Python (interactive session), I
receive a 'Unable to get catalog string' error:
[@myhost:/path/to/scripts] ->python
Python 2.6.6 (r266:84292, Nov 21 2013, 10:50:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyodbc
>>> pyodbc.pooling = False
>>> cn = pyodbc.connect("DRIVER={Teradata}; SERVER=12.245.67.255:1025;UID=usr;PWD=pwd", ANSI = True)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
pyodbc.Error: ('28000', '[28000] [Teradata][ODBC Teradata Driver] Unable to get catalog string. (0) (SQLDriverConnect)')
Furthermore, when I try to use isql (from the unixODBC yum package), I receive
the same error
[@my_host:/path/to/scripts] ->isql -v proddsn
[28000][Teradata][ODBC Teradata Driver] Unable to get catalog string.
[ISQL]ERROR: Could not SQLConnect
Answer: I **think** that message means that the Teradata driver is attempting to
display an error message, but it can't find the catalog file. I think you need
to set an NLSPATH environment value pointing to where your tdodbc.cat file is.
Something along these lines:
/opt/teradata/client/lib/odbc/%N.cat
That funky looking %N.cat is standard, you just need to point to the correct
location.
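If you want to experiment from Python, a rough sketch (the catalog path is the
one guessed above and may differ on your install; the variable has to be set
before the driver is first loaded, and exporting it in the shell works too):
    import os
    os.environ["NLSPATH"] = "/opt/teradata/client/lib/odbc/%N.cat"
    import pyodbc
    cn = pyodbc.connect("DRIVER={Teradata};SERVER=12.245.67.255:1025;UID=usr;PWD=pwd")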
|
Python script to read a text file and write into a csv file
Question: I have a text file in which each row has multiple words (which I want to
consider as columns). Now I want to read all the data from this text file and
create a csv file with rows and columns. I have written the code up to this point -
import csv
f=open("text.txt", "r")
reader=csv.reader(f)
offile=open("output.csv","wb")
writer=csv.writer(offile,delimiter='\t',quotechar='"',quoting=csv.QUOTE_ALL)
for row in reader:
........
f.close()
offile.close()
I am not able to understand how to divide each row into columns and write
these columns and rows back out when writing the csv file. I am a newbie to
Python, so I would be very grateful for a good example.
Thanks
Answer: Try splitting the lines via a regular expression:
line = "Foo bar baz quux"
import re
pieces = re.split("\s+", line)
print pieces
This results in
`['Foo', 'bar', 'baz', 'quux']`
The regular expression used above matches for multiple (+) white space
characters (\s)
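Putting it together with the `csv` module (a sketch only, keeping the Python 2
style and the tab-delimited, quoted output from your own code; file names are
the ones from your question):
    import csv
    import re
    with open("text.txt", "r") as infile, open("output.csv", "wb") as outfile:
        writer = csv.writer(outfile, delimiter='\t', quotechar='"', quoting=csv.QUOTE_ALL)
        for line in infile:
            line = line.strip()
            if not line:
                continue                              # skip blank lines
            writer.writerow(re.split(r"\s+", line))   # one column per word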
|
Text parsing convert equation blocks to Python statements
Question: I have a big group of SAS equation statements, and would like to use Python to
convert those equations to Python statements. They look like the following:
From SAS:
select;
when(X_1 <= 6.7278 ) V_1 =-0.0594 ;
when(X_1 <= 19.5338 ) V_1 =0.0604 ;
when(X_1 <= 45.1458 ) V_1 =0.1755 ;
when(X_1 <= 83.5638 ) V_1 =0.2867 ;
when(X_1 <= 203.0878 ) V_1 =0.395 ;
when(X_1 > 203.0878 ) V_1 =0.5011 ;
end;
label V_1 ="X_1 ";
select;
when(X_2 <= 0.0836 ) V_2 =0.0562 ;
when(X_2 <= 0.1826 ) V_2 =0.07 ;
when(X_2 <= 0.2486 ) V_2 =0.0836 ;
when(X_2 <= 0.3146 ) V_2 =0.0969 ;
when(X_2 <= 0.3806 ) V_2 =0.1095 ;
when(X_2 <= 0.4466 ) V_2 =0.1212 ;
when(X_2 <= 0.5126 ) V_2 =0.132 ;
when(X_2 <= 0.5786 ) V_2 =0.1419 ;
when(X_2 <= 0.6446 ) V_2 =0.1511 ;
when(X_2 <= 0.7106 ) V_2 =0.1596 ;
when(X_2 <= 0.8526 ) V_2 =0.1679 ;
when(X_2 > 0.8526 ) V_2 =0.176 ;
end;
label V_2 ="X_2 ";
...
...
...
To Python:
if X_1 <= 6.7278:
V_1 =-0.0594
elif X_1 <= 19.5338:
V_1 =0.0604
elif X_1 <= 45.1458:
V_1 =0.1755
elif X_1 <= 83.5638:
V_1 =0.2867
elif X_1 <= 203.0878:
V_1 =0.395
else:
V_1 =0.5011
if X_2 <= 0.0836:
....
I don't know where to get started, e.g. whether to use the 're' package or something else.
Any help would be really appreciated!
Answer: If the input is very consistent (as shown), you could probably get by with
`re`.
For anything more complicated, you might want to look at a more robust parser
like `pyparsing`.
* * *
**Edit:** Here is a very simple finite-state-machine parser using regular
expressions; it handles blank lines, unnested `select;` and `end;` statements,
and initial/successive `when`s. I don't handle `label`s because I'm not sure
what they do - rename the V variable back to X?
import re
class SasTranslator:
def __init__(self):
# modes:
# 0 not in START..END
# 1 in START..END, no CASE seen yet
# 2 in START..END, CASE already found
self.mode = 0
self.offset = -1 # input line #
def handle_blank(self, match):
return ""
def handle_start(self, match):
if self.mode == 0:
self.mode = 1
return None
else:
raise ValueError("Found 'select;' in select block, line {}".format(self.offset))
def handle_end(self, match):
if self.mode == 0:
raise ValueError("Found 'end;' with no opening 'select;', line {}".format(self.offset))
elif self.mode == 1:
raise ValueError("Found empty 'select;' .. 'end;', line {}".format(self.offset))
elif self.mode == 2:
self.mode = 0
return None
def handle_case(self, match):
if self.mode == 0:
raise ValueError("Found 'when' clause outside 'select;' .. 'end;', line {}".format(self.offset))
elif self.mode == 1:
test = "if"
self.mode = 2
# note: code continues after if..else block
elif self.mode == 2:
test = "elif"
# note: code continues after if..else block
test_var, op, test_val, assign_var, assign_val = match.groups()
return (
"{test} {test_var} {op} {test_val}:\n"
" {assign_var} = {assign_val}".format(
test = test,
test_var = test_var,
op = op,
test_val = test_val,
assign_var = assign_var,
assign_val = assign_val
)
)
#
# Build a dispatch table for the handlers
#
BLANK = re.compile("\s*$")
START = re.compile("select;\s*$")
END = re.compile("end;\s*$")
CASE = re.compile("\s*when\((\w+)\s*([<>=]+)\s*([\d.-]+)\s*\)\s*(\w+)\s*=\s*([\d.-]+)\s*;\s*$")
dispatch_table = [
(BLANK, handle_blank),
(START, handle_start),
(END, handle_end),
(CASE, handle_case)
]
def __call__(self, line):
"""
Translate a single line of input
"""
self.offset += 1
for test,handler in SasTranslator.dispatch_table:
match = test.match(line)
if match is not None:
return handler(self, match)
# nothing matched!
return None
def main():
with open("my_file.sas") as inf:
trans = SasTranslator()
for line in inf:
result = trans(line)
if result is not None:
print(result)
else:
print("***unknown*** {}".format(line.rstrip()))
if __name__=="__main__":
main()
and run against your sample input it produces
if X_1 <= 6.7278:
V_1 = -0.0594
elif X_1 <= 19.5338:
V_1 = 0.0604
elif X_1 <= 45.1458:
V_1 = 0.1755
elif X_1 <= 83.5638:
V_1 = 0.2867
elif X_1 <= 203.0878:
V_1 = 0.395
elif X_1 > 203.0878:
V_1 = 0.5011
***unknown*** label V_1 ="X_1 ";
if X_2 <= 0.0836:
V_2 = 0.0562
elif X_2 <= 0.1826:
V_2 = 0.07
elif X_2 <= 0.2486:
V_2 = 0.0836
elif X_2 <= 0.3146:
V_2 = 0.0969
elif X_2 <= 0.3806:
V_2 = 0.1095
elif X_2 <= 0.4466:
V_2 = 0.1212
elif X_2 <= 0.5126:
V_2 = 0.132
elif X_2 <= 0.5786:
V_2 = 0.1419
elif X_2 <= 0.6446:
V_2 = 0.1511
elif X_2 <= 0.7106:
V_2 = 0.1596
elif X_2 <= 0.8526:
V_2 = 0.1679
elif X_2 > 0.8526:
V_2 = 0.176
***unknown*** label V_2 ="X_2 ";
* * *
Depending on how often you use this, it might be worth making a bisection-based
lookup function using `bisect` and translating the `select;`..`end;` blocks into
that form instead (although you would want to be very careful that the
comparison operators are what you expect!) - something like
V_1 = index_into(
X_1,
[ 6.7278, 19.5338, 45.1458, 83.5638, 203.0878 ],
[-0.0594, 0.0604, 0.1755, 0.2867, 0.395, 0.5011]
)
It could be significantly faster-running (especially as the number of options
goes up) and much easier to comprehend and maintain.
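A rough sketch of that `index_into` (my own, under the assumption that the
boundaries are sorted ascending and that there is exactly one more output value
than there are boundaries):
    from bisect import bisect_left
    def index_into(x, boundaries, outputs):
        # pick the first output whose "x <= boundary" test passes; if x exceeds
        # every boundary, fall through to the final output (the "> last" branch)
        return outputs[bisect_left(boundaries, x)]
    # quick check against the first select block above
    print(index_into(50.0,
                     [ 6.7278, 19.5338, 45.1458, 83.5638, 203.0878],
                     [-0.0594, 0.0604, 0.1755, 0.2867, 0.395, 0.5011]))   # -> 0.2867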
|
django csv import encoding
Question: I am using the Django csv import (<https://pypi.python.org/pypi/django-
csvimport>) to populate some models. The problem is the csv files I have are
encoded in ANSI (Windows-1252) format and they have words with special
characters e.g. JOSÉ, when I import to my models the word become JOSи.
Could you help me with this?
P.S.:
1 - I have filled in the encoding field of the csv import with many options
(ansi, utf-8...) but it seems to have no effect.
2 - I have tried to convert my csv files to many differents formats (using
vb.net) like utf-8, utf-32, unicode... but all of them cause some error in
Django csv import.
Answer: After some tries I found the solution: While trying to convert my text file
using vb.net I was opening it with OpenText(), which opens the file with UTF8
encoding. So I opened it with something like "Using SR As StreamReader = New
StreamReader(Fl.FullName, System.Text.Encoding.GetEncoding("Windows-1252"),
True)", and wrote it with UTF8. This solved the problem.
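For reference, the same re-encoding can be sketched directly in Python as well
(file names here are just placeholders):
    import codecs
    with codecs.open("input_ansi.csv", "r", encoding="windows-1252") as src, \
         codecs.open("input_utf8.csv", "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line)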
|
Error converting .py to executable using cx_freeze
Question: I am using Python 3.3.3.
The following is my setup.py code:
import sys
from cx_Freeze import setup, Executable
build_exe_options = {"packages": ["os"], "excludes": ["tkinter"]}
base = None
if sys.platform == "win32":
base = "Win32GUI"
setup( name = "send_email",
version = "0.1",
description = "send the email",
options = {"build_exe": build_exe_options},
executables = [Executable("send_email.py", icon="icon.ico", base=base)])
The only import in my send_email.py file is smtplib.
The following error message is what I receive when building the executable in
the command window:
c:\Python33>python.exe setup.py build
running build
running build_exe
copying c:\Python33\lib\site-packages\cx_Freeze\bases\Win32GUI.exe -> build\exe.
win-amd64-3.3\send_email.exe
copying C:\Windows\SYSTEM32\python33.dll -> build\exe.win-amd64-3.3\python33.dll
Traceback (most recent call last):
File "setup.py", line 17, in <module>
executables = [Executable("send_email.py", icon="icon.ico", base=base)])
File "c:\Python33\lib\site-packages\cx_Freeze\dist.py", line 365, in setup
distutils.core.setup(**attrs)
File "c:\Python33\lib\distutils\core.py", line 148, in setup
dist.run_commands()
File "c:\Python33\lib\distutils\dist.py", line 917, in run_commands
self.run_command(cmd)
File "c:\Python33\lib\distutils\dist.py", line 936, in run_command
cmd_obj.run()
File "c:\Python33\lib\distutils\command\build.py", line 126, in run
self.run_command(cmd_name)
File "c:\Python33\lib\distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "c:\Python33\lib\distutils\dist.py", line 936, in run_command
cmd_obj.run()
File "c:\Python33\lib\site-packages\cx_Freeze\dist.py", line 235, in run
freezer.Freeze()
File "c:\Python33\lib\site-packages\cx_Freeze\freezer.py", line 577, in Freeze
self._FreezeExecutable(executable)
File "c:\Python33\lib\site-packages\cx_Freeze\freezer.py", line 169, in _Freez
eExecutable
cx_Freeze.util.AddIcon(exe.targetName, exe.icon)
SystemError: error return without exception set
Answer: I had the same error message and I fixed it by giving the full path of the
icon file. By the way, make sure the icon is in .ico format (at first I
renamed the extension of a .png file to .ico, which caused the process to crash;
finally I converted the .png file to the .ico format and it worked).
|
Horizontal Lines in Tkinter Python
Question: Trying to draw an 8 x 8 grid in Tkinter and I have the vertical lines drawn,
but I can't seem to draw the horizontal lines correctly. Please help!
Here's my code:
from tkinter import *
class CanvasGrid:
def __init__(self):
self.window = Tk()
self.window.title("Grid")
self.canvas = Canvas(self.window, width = 128, height = 128, bg = "white")
self.canvas.pack()
def displayVertical(self):
self.canvas.create_line(16, 0, 16, 128, fill = "red",
tags = "line")
self.canvas.create_line(32, 0, 32, 128, fill = "red",
tags = "line")
self.canvas.create_line(48, 0, 48, 128, fill = "red",
tags = "line")
self.canvas.create_line(64, 0, 64, 128, fill = "red",
tags = "line")
self.canvas.create_line(80, 0, 80, 128, fill = "red",
tags = "line")
self.canvas.create_line(96, 0, 96, 128, fill = "red",
tags = "line")
self.canvas.create_line(112, 0, 112, 128, fill = "red",
tags = "line")
def displayHorizontal(self):
self.canvas.create_line(50, 50, 50, 50, fill = "blue",
tags = "line")
Thanks!
Answer: As it is the same code for either direction, you can pass the start values to
a function that creates either/both directions if you want.
    from tkinter import *
    class CanvasGrid:
def __init__(self):
self.window = Tk()
self.window.title("Grid")
self.canvas = Canvas(self.window, width=128,
height=128, bg = "white")
self.display_lines(16, 0, 16, 128, "red")
self.display_lines(0, 16, 128, 16, "blue")
self.canvas.pack()
self.window.mainloop()
def display_lines(self, x1, y1, x2, y2, color):
x_plus = x1 ## all lines are evenly spaced
y_plus = y1
for ctr in range(7):
self.canvas.create_line(x1, y1, x2, y2, fill = color)
x1 += x_plus
x2 += x_plus
y1 += y_plus
y2 += y_plus
CG = CanvasGrid()
|
Can one define functions like in JavaScript?
Question: In Python, can one define a function (that can have statements in it, thus not
a lambda) in a way similar to the following JavaScript example?
var func = function(param1, param2) {
return param1*param2;
};
I ask, since I'd like to have a dictionary of functions, and I wouldn't want
to first define all the functions, and then put them in a dictionary.
The reason I want a dictionary of functions is because I will have another
function that takes another dictionary as parameter, loops through its keys,
and if it finds a matching key in the first dictionary, calls the associated
function with the value of the second dictionary as parameter. Like this:
def process_dict(d):
for k, v in d.items():
if k in function_dict:
function_dict[k](v)
Maybe there is a more pythonic way to accomplish such a thing?
Answer: Use a class (with static methods) instead of a dictionary to contain your
functions.
class MyFuncs:
@staticmethod
def func(a, b):
return a * b
# ... define other functions
In Python 3, you don't actually need the `@staticmethod` since class methods
are simple functions anyway.
Now to iterate:
def process_dict(d):
for k, v in d.items():
getattr(MyFuncs, k, lambda *x: None)(*v)
N.B. You could also use a module for your functions and use `import`.
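A small self-contained usage sketch of that dispatch (it assumes the dictionary
values are argument tuples, since `*v` unpacks them):
    class MyFuncs:
        @staticmethod
        def func(a, b):
            print(a * b)
    def process_dict(d):
        for k, v in d.items():
            # unknown keys fall through to a do-nothing lambda instead of raising
            getattr(MyFuncs, k, lambda *x: None)(*v)
    process_dict({"func": (3, 4), "other": (1,)})   # prints 12; "other" is ignored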
|
Parsing a PDF via URL with Python using pdfminer
Question: I am trying to parse this file but without downloading it off of the website.
I have run this with the file on my hard drive and I am able to parse it
without issue, but when running this script it trips on:
if not document.is_extractable:
raise PDFTextExtractionNotAllowed
I think I am integrating the url wrong.
import sys
import getopt
import urllib2
import datetime
import re
from pdfminer.pdfparser import PDFParser
from pdfminer.converter import XMLConverter, HTMLConverter, TextConverter, PDFConverter, LTContainer, LTText, LTTextBox, LTImage
from pdfminer.layout import LAParams
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter, process_pdf
from urllib2 import Request
# Define a PDF parser function
def parsePDF(url):
# Open the url provided as an argument to the function and read the content
open = urllib2.urlopen(Request(url)).read()
# Cast to StringIO object
from StringIO import StringIO
memory_file = StringIO(open)
# Create a PDF parser object associated with the StringIO object
parser = PDFParser(memory_file)
# Create a PDF document object that stores the document structure
document = PDFDocument(parser)
# Check if the document allows text extraction. If not, abort.
if not document.is_extractable:
raise PDFTextExtractionNotAllowed
# Define parameters to the PDF device objet
rsrcmgr = PDFResourceManager()
retstr = StringIO()
laparams = LAParams()
codec = 'utf-8'
        # Create a PDF device object
device = PDFDevice(rsrcmgr, retstr, codec = codec, laparams = laparams)
# Create a PDF interpreter object
interpreter = PDFPageInterpreter(rsrcmgr, device)
# Process each page contained in the document
for page in PDFPage.create_pages(document):
interpreter.process_page(page)
# Construct the url
url = 'http://www.city.pittsburgh.pa.us/police/blotter/blotter_monday.pdf'
Answer: Building on your own answer and the function provided
[here](http://stackoverflow.com/questions/5725278/python-help-using-pdfminer-
as-a-library), this should return a string from a pdf in a url without
downloading it:
import urllib2
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage
from cStringIO import StringIO
def pdf_from_url_to_txt(url):
rsrcmgr = PDFResourceManager()
retstr = StringIO()
codec = 'utf-8'
laparams = LAParams()
device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
# Open the url provided as an argument to the function and read the content
f = urllib2.urlopen(urllib2.Request(url)).read()
# Cast to StringIO object
fp = StringIO(f)
interpreter = PDFPageInterpreter(rsrcmgr, device)
password = ""
maxpages = 0
caching = True
pagenos = set()
for page in PDFPage.get_pages(fp,
pagenos,
maxpages=maxpages,
password=password,
caching=caching,
check_extractable=True):
interpreter.process_page(page)
fp.close()
device.close()
str = retstr.getvalue()
retstr.close()
return str
|
Determine which numbers in list add up to specified value
Question: I have a quick (hopefully) accounting problem. I just entered a new job and the
books are a bit of a mess. The books have these lump sums logged, while the
bank account lists each and every individual deposit. I need to determine
which deposits belong to each lump sum in the books. So, I have these four
lump sums:
[6884.41, 14382.14, 2988.11, 8501.60]
I then have this larger list of individual deposits (sorted):
[98.56, 98.56, 98.56, 129.44, 160.0, 242.19, 286.87, 290.0, 351.01, 665.0,
675.0, 675.0, 677.45, 677.45, 695.0, 695.0, 695.0, 695.0, 715.0, 720.0, 725.0,
730.0, 745.0, 745.0, 750.0, 750.0, 750.0, 750.0, 758.93, 758.93, 763.85,
765.0, 780.0, 781.34, 781.7, 813.79, 824.97, 827.05, 856.28, 874.08, 874.44,
1498.11, 1580.0, 1600.0, 1600.0]
In Python, how can I determine which sub-set of the longer list sums to one of
the lump sum values? (NOTE: these numbers have the additional problem that the
sum of the lump sums is $732.70 more than the sum of the individual accounts.
I'm hoping that this doesn't make this problem completely unsolvable)
Answer: Here's a pretty good start at a solution:
import datetime as dt
from itertools import groupby
from math import ceil
def _unique_subsets_which_sum_to(target, value_counts, max_sums, index):
value, count = value_counts[index]
if index:
# more values to be considered; solve recursively
index -= 1
rem = max_sums[index]
# find the minimum amount that this value must provide,
# and the minimum occurrences that will satisfy that value
if target <= rem:
min_k = 0
else:
min_k = (target - rem + value - 1) // value # rounded up to next int
# find the maximum occurrences of this value
# which result in <= target
max_k = min(count, target // value)
# iterate across min..max occurrences
for k in range(min_k, max_k+1):
new_target = target - k*value
if new_target:
# recurse
for solution in _unique_subsets_which_sum_to(new_target, value_counts, max_sums, index):
yield ((solution + [(value, k)]) if k else solution)
else:
# perfect solution, no need to recurse further
yield [(value, k)]
else:
# this must finish the solution
if target % value == 0:
yield [(value, target // value)]
def find_subsets_which_sum_to(target, values):
"""
Find all unique subsets of values which sum to target
target integer >= 0, total to be summed to
values sequence of integer > 0, possible components of sum
"""
# this function is basically a shell which prepares
# the input values for the recursive solution
# turn sequence into sorted list
values = sorted(values)
value_sum = sum(values)
if value_sum >= target:
# count how many times each value appears
value_counts = [(value, len(list(it))) for value,it in groupby(values)]
# running total to each position
total = 0
max_sums = [0]
for val,num in value_counts:
total += val * num
max_sums.append(total)
start = dt.datetime.utcnow()
for sol in _unique_subsets_which_sum_to(target, value_counts, max_sums, len(value_counts) - 1):
yield sol
end = dt.datetime.utcnow()
elapsed = end - start
seconds = elapsed.days * 86400 + elapsed.seconds + elapsed.microseconds * 0.000001
print(" -> took {:0.1f} seconds.".format(seconds))
# I multiplied each value by 100 so that we can operate on integers
# instead of floating-point; this will eliminate any rounding errors.
values = [
9856, 9856, 9856, 12944, 16000, 24219, 28687, 29000, 35101, 66500,
67500, 67500, 67745, 67745, 69500, 69500, 69500, 69500, 71500, 72000,
72500, 73000, 74500, 74500, 75000, 75000, 75000, 75000, 75893, 75893,
76385, 76500, 78000, 78134, 78170, 81379, 82497, 82705, 85628, 87408,
87444, 149811, 158000, 160000, 160000
]
sum_to = [
298811,
688441,
850160 #,
# 1438214
]
def main():
subset_sums_to = []
for target in sum_to:
print("\nSolutions which sum to {}".format(target))
res = list(find_subsets_which_sum_to(target, values))
print(" {} solutions found".format(len(res)))
subset_sums_to.append(res)
return subset_sums_to
if __name__=="__main__":
subsetsA, subsetsB, subsetsC = main()
which on my machine results in
Solutions which sum to 298811
-> took 0.1 seconds.
2 solutions found
Solutions which sum to 688441
-> took 89.8 seconds.
1727 solutions found
Solutions which sum to 850160
-> took 454.0 seconds.
6578 solutions found
# Solutions which sum to 1438214
# -> took 7225.2 seconds.
# 87215 solutions found
The next step is to cross-compare solution subsets and see which ones can
coexist together. I think the fastest approach would be to store subsets for
the smallest three lump sums, iterate through them and (for compatible
combinations) find the remaining values and plug them into the solver for the
last lump sum.
* * *
Continuing from where I left off (+ a few changes to the above code to grab
the return lists for subsums to the first three values).
I wanted a way to easily get the remaining value-coefficients each time;
class NoNegativesDict(dict):
def __sub__(self, other):
if set(other) - set(self):
raise ValueError
else:
res = NoNegativesDict()
for key,sv in self.iteritems():
ov = other.get(key, 0)
if sv < ov:
raise ValueError
# elif sv == ov:
# pass
elif sv > ov:
res[key] = sv - ov
return res
then I apply it as
value_counts = [(value, len(list(it))) for value,it in groupby(values)]
vc = NoNegativesDict(value_counts)
nna = [NoNegativesDict(a) for a in subsetsA]
nnb = [NoNegativesDict(b) for b in subsetsB]
nnc = [NoNegativesDict(c) for c in subsetsC]
# this is kind of ugly; with some more effort
# I could probably make it a recursive call also
b_tries = 0
c_tries = 0
sol_count = 0
start = dt.datetime.utcnow()
for a in nna:
try:
res_a = vc - a
sa = str(a)
for b in nnb:
try:
res_b = res_a - b
b_tries += 1
sb = str(b)
for c in nnc:
try:
res_c = res_b - c
c_tries += 1
#unpack remaining values
res_values = [val for val,num in res_c.items() for i in range(num)]
for sol in find_subsets_which_sum_to(1438214, res_values):
sol_count += 1
print("\n================")
print("a =", sa)
print("b =", sb)
print("c =", str(c))
print("d =", str(sol))
except ValueError:
pass
except ValueError:
pass
except ValueError:
pass
print("{} solutions found in {} b-tries and {} c-tries".format(sol_count, b_tries, c_tries))
end = dt.datetime.utcnow()
elapsed = end - start
seconds = elapsed.days * 86400 + elapsed.seconds + elapsed.microseconds * 0.000001
print(" -> took {:0.1f} seconds.".format(seconds))
and the final output:
0 solutions found in 1678 b-tries and 93098 c-tries
-> took 73.0 seconds.
So the final answer is **there is no solution for your given data**.
Hope that helps ;-)
|
Using python to download a file in google drive that is created by a google app script
Question: Ok so I've got a google script that I'm working on to help out with some stuff
and I need to be able to create a txt file and download it to my local
computer. I've figured out how to create the file as a blob and create a file
in google drive.
The problem is that I for one can't figure out how to delete the old file and
create the new file and I also can't seem to figure out how to download it
locally so I can work my python magic and create a nice looking report and
print it out. I've gone over the documentation and looked at similar questions
but I can't figure out how to actually download a file.
An example program would be a great answer for me, something that uses a dummy
file and downloads it; really, that would be awesome.
My thoughts are that I could go back to the old file that I am trying to
download and just edit it so that way I don't have to actually delete the file
which would make it have the same ID's, meta data, and URL.
Can I use the Google Apps Script to download it directly to my computer? For
a little added info, I run a Linux machine, so some things are a little more
labor intensive for me while other stuff is nice and easy. I feel like
there is an app that can run on my computer locally that stores my google
drive files locally and I could possibly just grab it from there.
Lastly a link to documentation for running the scripts natively would be
helpful as well.
Answer: use
    import urllib
    urllib.urlretrieve(httpaddressofthefile, filenametosaveitto)
or
import urllib2
response = urllib2.urlopen('http://www.wherever.com/')
html = response.read()
f=open(filename, 'w')
f.write(html)
f.close()
|
NameError: name 'install' is not defined when installing packages using pip
Question: I am trying to create virtualenv and install project dependencies using pip.
$ mkvirtualenv --no-site-packages --distribute myenv
(myenv)$ pip install -r requirements.txt
I also set up `export VIRTUALENV_DISTRIBUTE=true` in `~/.bash_profile`
After installing some packages pip shows following error:
.....
Could not find the /Users/me/.virtualenvs/myenv/lib/python2.7/site-packages/site.py element of the Setuptools distribution
Patched done.
Relaunching...
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'install' is not defined
----------------------------------------
Cleaning up...
Command /Users/me/.virtualenvs/myenv/bin/python -c "import setuptools;__file__='/Users/me/.virtualenvs/myenv/build/distribute/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/r0/2b441j6x5rq8y964bhd15gkm0000gn/T/pip-wyn1Ys-record/install-record.txt --single-version-externally-managed --install-headers /Users/me/.virtualenvs/myenv/include/site/python2.7 failed with error code 1 in /Users/me/.virtualenvs/myenv/build/distribute
Exactly the same happens without `--distribute` switch and without `export
VIRTUALENV_DISTRIBUTE=true`
Here is my requirements.txt file:
Django==1.5
Pillow==1.7.6
South==0.7.3
amqplib==1.0.2
anyjson==0.3.1
celery==2.5.3
distribute==0.6.10
django-celery==2.4.2
django-indexer==0.3.0
django-kombu==0.9.4
django-mptt==0.5.2
django-paging==0.2.4
django-picklefield==0.2.1
django-social-auth==0.7.22
django-tagging==0.3.1
django-taggit==0.9.3
django-templated-email==0.4.7
django-templatetag-sugar==0.1
eventlet==0.9.16
greatape==0.3.0
greenlet==0.3.4
html5lib==0.90
httplib2==0.8
kombu==2.1.7
lockfile==0.9.1
oauth2==1.5.211
pycrypto==2.3
python-daemon==1.6
python-dateutil==1.5
python-openid==2.2.5
raven==1.0.4
sentry==2.0.0-RC6
simplejson==2.3.2
ssh==1.7.8
wsgiref==0.1.2
I am using `Mac OS X 10.9.2`. I don't want to change anything in
requirements.txt; I just want to install all the dependencies and run this project.
Answer: Remove the `distribute` package from the list, recreate your environment and
reinstall the requirements.
You may remove the `wsgiref` as well (although that's not as important).
|
How to make Conditional Probability Tables (CPTs) for Bayesian networks with pymc
Question: I would like to build a Bayesian network of discrete (pymc.Categorical)
variables that are dependent on other categorical variables. As a
[simplest](https://raw.githubusercontent.com/shpigi/pieces/master/ab.png)
example, suppose variables _a_ and _b_ are categorical and _b_ depends on _a_
Here is an attempt to code it with pymc (assuming _a_ takes one of three
values and _b_ takes one of four values). The idea being that the CPT
distributions would be learned from data using pymc.
import numpy as np
import pymc as pm
aRange = 3
bRange = 4
#make variable a
a = pm.Categorical('a',pm.Dirichlet('aCPT',np.ones(aRange)/aRange))
#make a CPT table as an array of
CPTLines = np.empty(aRange, dtype=object)
for i in range(aRange):
CPTLines[i] = pm.Dirichlet('CPTLine%i' %i,np.ones(bRange)/bRange)
#make a deterministic node that holds the relevant CPT line (dependent on state1)
@pm.deterministic
def selectedCPTLine(CPTLines=CPTLines,a=a):
return CPTLines[a]
#make a node for variable b
b=pm.Categorical('b', selectedCPTLine)
model = pm.MCMC([a, b, selectedCPTLine])
If we draw this model it looks like
[this](https://raw.githubusercontent.com/shpigi/pieces/master/abD.png)
However, running this code we get an error:
Probabilities in categorical_like sum to [ 0.8603345]
Apparently, pymc can take a Dirichlet variable as the parameter of a
Categorical variable. When the Categorical variable gets a Dirichlet variable
as its parameter, it knows to expect a k-1 vector of probabilities with the
assumption that the kth probability sums the vector to 1. This breaks down,
however, when the Dirichlet variable is the output of a deterministic
variable, which is what I need to make a CPT.
Am I going about this the right way? How can the representation mismatch be
solved? I should mention that I'm relatively new to pymc and Python.
This question is related to a previous question on [making a discrete state
Markov model with pymc](http://stackoverflow.com/questions/22636974/making-a-
discrete-state-markov-model-with-pymc)
Answer: OK, thanks. The problem is that, while PyMC will usually recognize a
Dirichlet as the parent of a Categorical and complete the probability
simplex, here your Categoricals are embedded in a Container, and the
Categorical does not make the automatic adjustment needed. The following code
does this for you:
import numpy as np
import pymc as pm
aRange = 3
bRange = 4
aCPT = pm.Dirichlet('aCPT', np.ones(aRange))
#make variable a
a = pm.Categorical('a', aCPT)
#make a CPT table as an array of
CPTLines = [pm.Dirichlet('CPTLine%i' %i, np.ones(bRange)) for i in range(aRange)]
#make a node for variable b
@pm.stochastic(dtype=int)
def b(value=0, CPT=CPTLines, a=a):
return pm.categorical_like(value, p=pm.extend_dirichlet(CPT[a]))
model = pm.MCMC([a, b, CPTLines])
Hope that helps.
|
Selenium Python webdriver: driver.get() won't accept a variable?
Question: I'm trying to write an automated test script that will perform a set of
actions for multiple URL's. The reason I am trying to do this is because I am
testing a web application with multiple front-end interfaces that are
functionally exactly the same, so if I can use a single test script to run
through all of them and make sure the basics are in order, this saves me a lot
of time in regression testing when the codebase changes.
My current code is as follows:
# initialize the unittest framework
import unittest
# initialize the selenium framework and grab the toolbox for keyboard output
from selenium import selenium, webdriver
# prepare for the usage of remote browsers
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
class Clubmodule(unittest.TestCase):
def setUp(self):
# load up the remote driver and tell it to use Firefox
self.driver = webdriver.Remote(
command_executor="http://127.0.0.1:4444/wd/hub",
desired_capabilities=DesiredCapabilities.FIREFOX)
self.driver.implicitly_wait(3)
def test_010_LoginAdmin(self):
driver = self.driver
# prepare the URL by loading the list from a textfile
with open('urllist.txt', 'r') as f:
urllist = [line.strip() for line in f]
# Go to the /admin url
for url in urllist:
# create the testurl
testurl = str(url) + str("/admin")
# go to the testurl
driver.get("%s" %testurl)
# log in using the admin credentials
def tearDown(self):
# close the browser
self.driver.close()
# make it so!
if __name__ == "__main__":
unittest.main()
When I print the variable `testurl` I get the correct value, but when I try
to run my script with Python it does not seem to convert `driver.get("%s"
%testurl)` into `driver.get("actualurl")`.
I'm hoping it's a syntax issue, but after trying all the variations I can come
up with, I'm starting to think this is a limitation of Webdriver. Can this be
done at all?
Answer: How about
driver.get(testurl)
I do not think string interpolation is required.
|
How do arrow keys work with getch? (they register as 3 keys)
Question: I'm using the `getch` code [from this stack overflow
question](http://stackoverflow.com/questions/510357/python-read-a-single-
character-from-the-user), and I've written the following script:
getch = _Getch()
while(1):
test = getch()
if test == 'm':
break
else:
print ord(test)
as you can see, it prints the ascii value of each input. So a sample output
looks like this, where >>> represents my input:
>>>a
97
>>>ctrl-c
3
>>>Esc
27
So far so good, but when I press an arrow key, it gives me:
>>>(Left arrow)
27
91
66
>>>(right arrow)
27
91
67
So I can see that 27 is the `escape char`, but what is 91? 66 and 67 must be
sub values of this.
I'm looking for the values so that I can implement them to use the arrows to
move a cursor. How would I use this in an `if charvalue ==` scenario?
Answer: You can also use a combination of select, tty and termios.
    import sys, select, tty, termios
    from time import sleep
class NonBlockingConsole(object):
def __enter__(self):
self.old_settings = termios.tcgetattr(sys.stdin)
tty.setcbreak(sys.stdin.fileno())
return self
def __exit__(self, type, value, traceback):
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, self.old_settings)
def get_data(self):
try:
if select.select([sys.stdin], [], [], 0) == ([sys.stdin], [], []):
return sys.stdin.read(1)
except KeyboardInterrupt:
return '[CTRL-C]'
return False
with NonBlockingConsole() as nbc:
while 1:
c = nbc.get_data()
if c:
c = c.decode('latin-1', 'replace')
if c == '\x1b': # x1b is ESC
break
elif c in ('\x7f', '\x08'): # backspace
pass
elif c == '[CTRL-C]':
pass
elif c == '\n': # it's RETURN
pass
else:
print('Pushed:',c)
sleep(0.025)
Haven't tested it in a while and I'm on a Windows machine at the moment, but it
might be able to capture arrow keys as well (it should).
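If you specifically want the arrow keys, a minimal sketch with the `_Getch`
from the question's linked code is to keep reading after the ESC byte; on a
typical xterm/VT100 terminal the sequence is ESC (27), '[' (91), then
'A'/'B'/'C'/'D' (65-68) for up/down/right/left:
    getch = _Getch()   # the _Getch class from the question's linked code
    ARROWS = {65: "up", 66: "down", 67: "right", 68: "left"}
    while True:
        ch = getch()
        if ch == 'm':
            break
        if ord(ch) == 27 and ord(getch()) == 91:   # ESC then '[': an arrow-style sequence
            # (a lone Esc will block here waiting for the rest of a sequence)
            print("arrow: " + ARROWS.get(ord(getch()), "unknown"))
        else:
            print(ord(ch))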
|
What is the best way to generate Pythagorean triples?
Question: I have tried the simple approach where you just check all the combinations
of a and b and then check whether the square root of c is an integer, but that
code is really slow. Then I tried Euclid's formula
a = d*(n^2 - m^2)
b = 2*n*m*d
c = d*(n^2 + m^2)
and I have written a code where you first find n with
trunc(sqrt(max_value))
//this is in pascal
and then you check every combination of 0 < m < n, but I get duplicate results.
For example, if n is 7, m is 5 and d is 1, or n is 6, m is 1 and d is 2, in
both cases you get 24, 70 and 74. So what is a good, fast way to calculate the
number of Pythagorean triples? I can't seem to find one; if I add all the
results to an array and then check the array for duplicates, it just takes too
much time... If anyone can help me with the code, it can be Pascal, C or
Python, I can understand all of them...
Answer: I was curious so I decided to try this. I found that [this
algorithm](http://en.wikipedia.org/wiki/Formulas_for_generating_Pythagorean_triples#Progressions_of_whole_and_fractional_numbers)
was pretty easy to implement in Python and works pretty fast:
import math
def pythagorean_triples(n):
a, b, c = 1, 3, 0
while c < n:
a_ = (a * b) + a
c = math.sqrt(a_**2 + b**2)
if c == int(c):
yield b, a_, int(c)
a += 1
b += 2
if __name__ == '__main__':
import sys
for pt in pythagorean_triples(int(sys.argv[1])):
print(pt)
Try it by copying that script into `pythagorean_triples.py` and running
`python3 pythagorean_triples.py n` where `n` is the maximum `c` you want it to
generate. (You can use Python 2 if you like as well.)
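Alternatively, if you want to stay with Euclid's formula from your question,
the duplicates disappear once you restrict the generator to coprime m < n of
opposite parity (that restriction yields each primitive triple exactly once)
and then scale by d. A rough sketch:
    try:
        from math import gcd        # Python 3.5+
    except ImportError:
        from fractions import gcd   # older Pythons
    def euclid_triples(limit):
        """Yield every triple (a, b, c) with c <= limit, each exactly once."""
        n = 2
        while n * n + 1 <= limit:               # smallest c for this n is n*n + 1
            for m in range(1, n):
                if (n - m) % 2 == 1 and gcd(n, m) == 1:   # primitive generator
                    a, b, c = n*n - m*m, 2*n*m, n*n + m*m
                    d = 1
                    while d * c <= limit:
                        yield tuple(sorted((d*a, d*b))) + (d*c,)
                        d += 1
            n += 1
    for triple in sorted(euclid_triples(30)):
        print(triple)   # (3, 4, 5), (5, 12, 13), (6, 8, 10), ...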
|
How to get the hardest part to take longest in python
Question: I have some simple code as follows.
count =0
iters = 1000000
l=10
k=10
for i in xrange(iters):
t = np.random.choice([-1,1],size=l+k-1)
v = np.random.choice([-1,1], size = l)
if (not np.convolve(v, t, 'valid').any()):
count += 1
print count
When I profile this I was surprised to see that only a small fraction of the
time is spent doing `np.convolve` which in theory should be the most expensive
part. Is there any way to speed up the other parts so that `np.convolve` is
taking the majority of the time?
I would happily use pypy but sadly it doesn't support `np.convolve` yet.
* * *
Is there any way to make it fast enough that iters = 100 million and l = 12,
k= 12 can terminate?
Answer: _Edit_ : using a combination of generating the random numbers in a block and
multiprocessing, I got 100 million trials done in 6 minutes on my laptop,
altogether about 12 times faster than the original code.
**single process with block generation of random numbers**
The following code runs 3x faster on my machine (15s vs 45s). The main change
is to move all random number generation out of the main loop. If `iters` is so
large that you don't have the memory to do that, then you can run a nested
loop and generate as large a block as you can afford and rinse and repeat --
I've put the code for this below following the edit to your question.
import numpy as np
count = 0
iters = 1000000
l=10
k=10
l0=l+k-1
t = np.random.choice([-1,1], size = l0 * iters)
v = np.random.choice([-1,1], size = l * iters)
for i in xrange(iters):
if (not np.convolve(v[(l*i):(l*(i+1))], t[(l0*i):(l0*(i+1))], 'valid').any()):
count += 1
print count
The other minor change, which improved the performance of your original code
by about 2%, was to move the calculation `l+k-1` outside of the loop.
Incidentally, I found that the way you deal with `count` is quite efficient.
So, for example, `count += np.convolve(v[(l*i):(l*(i+1))],
t[(l0*i):(l0*(i+1))], 'valid').any()` and then doing `iters - count` at the
end is slower. This is because the condition `not...any()` is very rare, and
the way you have it now you touch count very rarely.
To run 100 million times set `N=100` in the program below. With the current
value of `N=4` the program took 1 minute to execute (with a count of 26). With
`l=12`, `k=12`, and `N=4` the program took just over a minute to execute (with
a count of 4). So you should be looking at less than half an hour for 100
million.
import numpy as np, time
start = time.clock()
count = 0
iters = 1000000 # 1million
l=10
k=10
l0=l+k-1
N = 4 # number of millions
for n in range(N):
t = np.random.choice([-1,1], size=l0 * iters)
v = np.random.choice([-1,1], size = l * iters)
for i in xrange(iters):
if (not np.convolve(v[(l*i):(l*(i+1))], t[(l0*i):(l0*(i+1))], 'valid').any()):
count += 1
print (time.clock() - start)
print count
**multiple processes**
_Edit_ : this is an "embarrassingly parallel" problem, so you can run the
simulations on multiple processors. An easy way to do this is [using a pool of
workers](https://docs.python.org/2/library/multiprocessing.html#using-a-pool-
of-workers). Note however it's important to [set the random seed in each
process](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.RandomState.html#numpy.random.RandomState).
Otherwise you risk having each process use the same set of random numbers (see
[here](http://stackoverflow.com/questions/9209078/solved-using-python-
multiprocessing-with-different-random-seed-for-each-process), I'm assuming
this applies to numpy random as well as to the base random). The code is
below.
import numpy as np, time
from multiprocessing import Pool
def countconvolve(N):
np.random.seed() # ensure seed is random
count = 0
iters = 1000000 # 1million
l=12
k=12
l0=l+k-1
for n in range(N):
t = np.random.choice([-1,1], size=l0 * iters)
v = np.random.choice([-1,1], size = l * iters)
for i in xrange(iters):
if (not np.convolve(v[(l*i):(l*(i+1))], t[(l0*i):(l0*(i+1))], 'valid').any()):
count += 1
return count
if __name__ == '__main__':
start = time.clock()
num_processes = 8
N = 13
pool = Pool(processes=num_processes)
res = pool.map(countconvolve, [N] * num_processes)
print res, sum(res)
print (time.clock() - start)
It ran 104 million simulations in 370 seconds, and produced the following
output
[4, 9, 10, 6, 7, 8, 11, 9] 64
My laptop is a core-i7 with 8GB of memory, so with 8 cores I got a 4x speedup
by parallel processing. I found the memory usage was about 160MB per process
(with a higher peak). If you have fewer cores or less memory you would want to
adjust the parameters in the program accordingly.
With @moarningsun's suggestion of constraining the array to have 1 byte per
element, the memory usage dropped to 60MB per process.
t = np.random.choice(np.array([-1,1], dtype=np.int8), size=l0 * iters)
v = np.random.choice(np.array([-1,1], dtype=np.int8), size = l * iters)
|
How to create a table in SQLite using python without rowid?
Question: I would like to create an SQLite table in python without the default rowid
field.
For example, I have tried
import sqlite3
conn = sqlite3.connect("mydatabase.db")
cursor = conn.cursor()
sql = "CREATE TABLE tblTest (myid VARCHAR(20) PRIMARY KEY, testField1 VARCHAR(14), testField2 VARCHAR(2500) ) WITHOUT ROWID "
cursor.execute(sql)
And this is the error I get:
sqlite3.OperationalError: near "WITHOUT": syntax error
Any suggestions? Thank you in advance.
Answer: You probably need a newer version of SQLite. The `without rowid` syntax
requires 3.8.2 or later.
[Reference](https://www.sqlite.org/withoutrowid.html)
The version of the sqlite library that's linked to the sqlite3 executable . .
.
$ sqlite3 --version
3.7.9 2011-11-01 00:52:41 c7c6050ef060877ebe77b41d959e9df13f8c9b5e
The version that's incorporated into python's standard library . . .
$ python
[snip]
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.7.9'
For information about how version checking can mislead you, see [this SO
answer](http://stackoverflow.com/a/1546162/562459).
|
Plotting a boxplot in matplotlib without the data array?
Question: Is there a way to create plots in **matplotlib** without having raw data?
I have a tool which reports `Mean`, `Max`, `Min`, `standard_daviation`,
`P_50`, `P_90`, and `P_99`, but does not give all the data points. I am trying
to create a boxplot-like plot which will convey most of the above properties.
But I haven't found any way to generate such a plot in matplotlib without
having all the data points. Is there any way that I can generate such a plot
in matplotlib.
I have preference for matplotlib as most of the infrastructure is in Python
and I am already using matplotlib for some other parts.
Answer: It depends on what
[distribution](http://docs.scipy.org/doc/numpy/reference/routines.random.html#distributions)
you want - you can plot some of these like so:
import numpy as np
import matplotlib.pyplot as plt  # needed for the plt.plot calls below
loc, scale = 0., 1 # like mean and s.d.
x=np.arange(-8., 8., .01)
laplace = np.exp(-abs(x-loc/scale))/(2.*scale)
gumbel = (1/scale)*np.exp(-(x - scale)/scale)* np.exp( -np.exp( -(x - scale) /scale) )
logistic = np.exp((loc-x)/scale)/(scale*(1+np.exp((loc-x)/scale))**2)
normal = 1/(scale * np.sqrt(2 * np.pi))*np.exp( - (x - loc)**2 / (2 * scale**2) )
lognormal = (np.exp(-(np.log(x) - loc)**2 / (2 * scale**2))/ (x * scale * np.sqrt(2 * np.pi)))
rayleigh = (x/(scale*scale))*(np.exp((-x*x)/(2*scale*scale)))
standard_cauchy = 1/(np.pi*(1+(x*x)))
plt.plot(x,gumbel,label='gumbel scale=1')
plt.plot(x,laplace,label='laplace scale=1, loc = 0')
plt.plot(x,normal,label='normal scale=1, loc = 0')
plt.plot(x,logistic,label='logistic scale=1, loc = 0')
plt.plot(x,lognormal,label='lognormal scale=1, loc = 0')
plt.plot(x,rayleigh,label='rayleigh scale=1')
plt.plot(x,standard_cauchy,label='standard_cauchy')

|
Get a file from an ASPX webpage using Python
Question: I'm trying to download a CSV file from this site, but I keep getting an HTML
file when I'm using this piece of code (which used to work until a few weeks
ago), or when I'm using wget.
url = "http://.....aspx"
file_name = "%s.csv" % url.split('/')[3]
u = urllib2.urlopen(url)
f = open(file_name, 'wb')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
file_size_dl = 0
block_sz = 8192
while True:
buffer = u.read(block_sz)
if not buffer:
break
file_size_dl += len(buffer)
f.write(buffer)
status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
status = status + chr(8)*(len(status)+1)
print status,
How can I get this file again with Python?
Thank you
Answer: Solved by using the Requests library instead of urllib2:
import requests
url = "http://www.....aspx?download=1"
file_name = "Data.csv"
u = requests.get(url)
file_size = int(u.headers['content-length'])
print "Downloading: %s Bytes: %s" % (file_name, file_size)
with open(file_name, 'wb') as f:
for chunk in u.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
f.close()
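As an aside (not part of the original answer): as written, `requests.get(url)` downloads
the whole response into memory before `iter_content` hands it back in chunks. Passing
`stream=True` defers the body so large files are genuinely fetched chunk by chunk
(reusing `url` and `file_name` from above):
u = requests.get(url, stream=True)            # don't buffer the whole body up front
with open(file_name, 'wb') as f:
    for chunk in u.iter_content(chunk_size=8192):
        if chunk:                             # skip keep-alive chunks
            f.write(chunk)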
|
TypeError: can only concatenate list (not "tuple") to list ((Python))
Question: When running the following code in a little script, I get the following error:
Traceback (most recent call last):
File "/Users/PeterVanvoorden/Documents/GroepT/Thesis/f_python_standalone/python_files/Working_Files/testcase.py", line 9, in <module>
permutations_01.determin_all_permutations()
File "/Users/PeterVanvoorden/Documents/GroepT/Thesis/f_python_standalone/python_files/Working_Files/permutations_01.py", line 8, in determin_all_permutations
config.permutations = [[1] + x for x in config.permutations]
TypeError: can only concatenate list (not "tuple") to list
Code:
import itertools
import config
def determin_all_permutations():
L = list(xrange(config.number))
List = [x+2 for x in L]
config.permutations = list(itertools.permutations(List,config.number))
config.permutations = [[1] + x for x in config.permutations]
def determin_lowest_price_permutation():
length = len(L)
a = 1
while a <= length:
point_a = L[a]
point_b = L[a+1]
# Array with data cost connecting properties will exist of:
# * first column = ID first property [0]
# * second column = ID second property [1]
# * third column = distance between two properties [2]
# * forth column = estimated cost [3]
position = 0
TF = False
if TF == False:
if point_a == config.properties_array[position][0] and point_b == config.properties_array[position][1] and TF == False:
config.current_price = config.current_price + config.properties_array[position][3]
config.current_path = config.current_path + [config.properties_array[position][2]]
TF = True
else:
position += 1
else:
position = 0
TF = False
I don't get why this error is occurring. When I test line 8
config.permutations = [[1] + x for x in config.permutations]
in a normal situation by making a list in the shell, for example:
>>> List = [1,1,1], [1,1,1]
>>> List
([1, 1, 1], [1, 1, 1])
>>> List = [[0] + x for x in List]
>>> List
[[0, 1, 1, 1], [0, 1, 1, 1]]
it works, but when using the exact same code in a method, I get the error
about adding a tuple... Isn't [1] a list?
Can somebody help me?
Thanks!
Answer: In what you executed on the shell, `List` is made up of lists `[1,1,1]` and
`[1,1,1]`, not tuples. Hence doing `List = [[0] + x for x in List]` works
without errors.
Where as in your code, `list(itertools.permutations(List,config.number))`
returns a list of tuples, like:
[(2, 4, 8, 5, 11, 10, 9, 3, 7, 6), (2, 4, 8, 5, 11, 10, 9, 6, 3, 7),...]
which explains the error.
Doing this:
config.permutations = [[1] + list(x) for x in config.permutations]
fixes the issue.
|
Django: python manage.py not working but python2.7 manage.py is
Question: I have a weird kind of error. When I run manage.py in a virtualenv it shows me
this error:
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
But when I run
> python2.7 manage.py
it gives me the correct results. I can't work out what this error is or how to fix it,
and I don't want to have to type python2.7 every time. Also, when I run
> pip freeze > requirements.txt
it doesn't list all the installed packages, even though they are installed in
site-packages within the virtualenv. What kind of error is this, can anyone
help?
Answer: Activate the virtual env:
. virtual_env_dir/bin/activate
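A quick sanity check (an addition, not part of the original answer): after activating
the virtualenv, run `python manage.py shell` and inspect which interpreter is actually
in use -- it should live inside the virtualenv directory.
import sys
print(sys.executable)   # should point inside the virtualenv once it is activated
print(sys.path)         # the virtualenv's site-packages should appear in this list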
|
why cv2.imshow() results in error in my python compiler?
Question: Hi friends, I just installed OpenCV and am checking the basic code, but it
results in an error. The code is:
import numpy as np
import cv2
img=cv2.imread('C:\Users\Pravin\Desktop\a.jpeg',1)
cv2.namedWindow('img',cv2.WINDOW_NORMAL)
cv2.Waitkey(10000)
cv2.imshow('cv2.WINDOW_NORMAL',img)
cv2.destoryAllWindows()
The error for cv2.imshow() is
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
cv2.imshow('image',img)
error: ..\..\..\src\opencv\modules\highgui\src\window.cpp:261: error: (-215)
size.width>0 && size.height>0
Any help with this would be appreciated. Thanks in advance.
Answer: Most likely, the imread call didn't succeed. Make sure the image
"C:\Users\Pravin\Desktop\a.jpeg" exists. (The extension .jpeg seems unusual,
maybe it has to be .jpg?)
Also, as Hyperboreus suggests, please, try using forward slashes in the
filename "C:/Users/Pravin/Desktop/a.jpg", or escape backslashes
"C:\\Users\\Pravin\\Desktop\\a.jpg"
|
An explanation of randint(2) < 1 in python, I cannot understand what the < 1 does
Question: In python, given **randint(2) < 1**, could someone please explain what the **<
1** means or does?
I am using IP[y]: Notebook
Please excuse the previous lack of information, this is the code
%pylab inline
from __future__ import division
import pandas as pd
c = randint(2 ,size=100) < 1
s1 = pd.Series(c)
s1.head()
s1.value_counts()
Answer: That line produces an array of boolean (truth) values.
The [`numpy.random.randint()`
function](http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randint.html),
with a `size` argument, produces a new `numpy.ndarray` object with `size`
elements randomly picked between 0 and 2 (_exclusive_), so in this case 100 0
or 1 values:
>>> numpy.random.randint(2, size=100)
array([0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1,
0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1,
0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 0,
1, 1, 1, 1, 1, 1, 1, 0])
The `< 1` then produces an array of boolean values (`True` or `False`):
>>> numpy.random.randint(2, size=100) < 1
array([False, False, False, False, True, False, True, True, True,
False, False, True, True, True, False, False, True, True,
True, False, False, True, True, True, False, False, True,
False, False, False, False, False, True, True, False, True,
False, False, False, True, False, True, False, True, False,
False, True, True, True, False, True, True, False, False,
False, True, True, False, False, False, True, False, True,
True, True, False, False, False, True, False, False, False,
False, False, True, True, True, True, False, True, False,
True, True, False, True, False, True, False, True, False,
False, True, False, False, False, True, True, False, True, False], dtype=bool)
This array is then converted to a Pandas `Series` object.
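Since the draws are only ever 0 or 1, `< 1` is just a readable way of writing `== 0`,
so roughly half of the entries end up `True`; `s1.value_counts()` then simply counts
how many `True` and `False` values were produced. A minimal illustration (exact counts
vary from run to run):
import numpy as np
import pandas as pd

c = np.random.randint(2, size=100) < 1     # same mask as (np.random.randint(2, size=100) == 0)
print(pd.Series(c).value_counts())         # e.g. False 53, True 47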
|
How to forward a websocket server in localhost with ngrok
Question: I'm trying to run a websocket server on localhost and forward it to the web
using ngrok, but I couldn't figure out how. This is the original code from the
AutobahnPython git repository <https://github.com/tavendo/AutobahnPython>.
Server code:
from autobahn.twisted.websocket import WebSocketServerProtocol, \
WebSocketServerFactory
class MyServerProtocol(WebSocketServerProtocol):
def onConnect(self, request):
print("Client connecting: {0}".format(request.peer))
def onOpen(self):
print("WebSocket connection open.")
def onMessage(self, payload, isBinary):
if isBinary:
print("Binary message received: {0} bytes".format(len(payload)))
else:
print("Text message received: {0}".format(payload.decode('utf8')))
## echo back message verbatim
self.sendMessage(payload, isBinary)
def onClose(self, wasClean, code, reason):
print("WebSocket connection closed: {0}".format(reason))
if __name__ == '__main__':
import sys
from twisted.python import log
from twisted.internet import reactor
log.startLogging(sys.stdout)
factory = WebSocketServerFactory("ws://localhost:9000", debug = False)
factory.protocol = MyServerProtocol
reactor.listenTCP(9000, factory)
reactor.run()
Client Code:
from autobahn.twisted.websocket import WebSocketClientProtocol, \
WebSocketClientFactory
class MyClientProtocol(WebSocketClientProtocol):
def onConnect(self, response):
print("Server connected: {0}".format(response.peer))
def onOpen(self):
print("WebSocket connection open.")
def hello():
self.sendMessage(u"Hello, world!".encode('utf8'))
self.sendMessage(b"\x00\x01\x03\x04", isBinary = True)
self.factory.reactor.callLater(1, hello)
## start sending messages every second ..
hello()
def onMessage(self, payload, isBinary):
if isBinary:
print("Binary message received: {0} bytes".format(len(payload)))
else:
print("Text message received: {0}".format(payload.decode('utf8')))
def onClose(self, wasClean, code, reason):
print("WebSocket connection closed: {0}".format(reason))
if __name__ == '__main__':
import sys
from twisted.python import log
from twisted.internet import reactor
log.startLogging(sys.stdout)
factory = WebSocketClientFactory("ws://localhost:9000", debug = False)
factory.protocol = MyClientProtocol
reactor.connectTCP("127.0.0.1", 9000, factory)
reactor.run()
This is the changed code:
from autobahn.twisted.websocket import WebSocketClientProtocol, \
WebSocketClientFactory
class MyClientProtocol(WebSocketClientProtocol):
def onConnect(self, response):
print("Server connected: {0}".format(response.peer))
def onOpen(self):
print("WebSocket connection open.")
def hello():
self.sendMessage(u"Hello, world!".encode('utf8'))
self.sendMessage(b"\x00\x01\x03\x04", isBinary = True)
self.factory.reactor.callLater(1, hello)
## start sending messages every second ..
hello()
def onMessage(self, payload, isBinary):
if isBinary:
print("Binary message received: {0} bytes".format(len(payload)))
else:
print("Text message received: {0}".format(payload.decode('utf8')))
def onClose(self, wasClean, code, reason):
print("WebSocket connection closed: {0}".format(reason))
if __name__ == '__main__':
import sys
from twisted.python import log
from twisted.internet import reactor
log.startLogging(sys.stdout)
#~ factory = WebSocketClientFactory("ws://localhost:9000", debug = False)
factory = WebSocketClientFactory("ws://ngrok.com:xxxxx", debug = False)
factory.protocol = MyClientProtocol
reactor.connectTCP("ws://ngrok.com", xxxxx, factory)
reactor.run()
This is the ngrok command: **./ngrok -proto=tcp 9000**
What am I doing wrong here?
Answer: I tried using
`ngrok http 8091`
(where 8091 is the port where my WebSocket server is running, using Ratchet IO)
and it works. I can still connect to the websocket by replacing the `http` with
`ws` in the connection URL.
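On the Twisted side of the question, `reactor.connectTCP` expects a bare hostname and a
numeric port, not a `ws://` URL. With a TCP tunnel, ngrok prints a forwarding address;
a sketch of the client wiring, assuming a hypothetical forwarded address of
`0.tcp.ngrok.io:12345` (substitute whatever your ngrok output actually shows):
ngrok_host, ngrok_port = "0.tcp.ngrok.io", 12345    # hypothetical values from ngrok's output

factory = WebSocketClientFactory("ws://%s:%d" % (ngrok_host, ngrok_port), debug=False)
factory.protocol = MyClientProtocol
reactor.connectTCP(ngrok_host, ngrok_port, factory)
reactor.run()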
|
Counting similar list element in a list
Question: I have a list of lists
list = [[1,0],[1,2],[1,1],[1,0]]
Now I want to count the number of occurrences of similar list elements in the
above list, e.g. `[1,0]:2, [1,2]:1, [1,1]:1`
I did
from collections import Counter
print Counter(list)
It throws error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/collections.py", line 352, in __init__
self.update(iterable, **kwds)
File "/usr/local/lib/python2.7/collections.py", line 434, in update
self[elem] = self_get(elem, 0) + 1
TypeError: unhashable type: 'list'
What am I doing wrong?
Answer: `Counter` is internally a dictionary. Dictionary keys must be hashable, but
lists are mutable and therefore not hashable, so you cannot use `Counter` on the
lists directly.
One option is to convert the nested list elements to tuples
my_list = [[1,0],[1,2],[1,1],[1,0]]
from collections import Counter
print Counter(map(tuple, my_list))
# Counter({(1, 0): 2, (1, 2): 1, (1, 1): 1})
If the list is too big, then you can use a generator expression like this
print Counter(tuple(item) for item in my_list)
this will avoid creating a new list in memory, unlike `map`. And using tuples
work because tuples are immutable and are hashable.
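If you then want the counts back with list-style keys, `Counter.most_common()` gives
(element, count) pairs in descending order of frequency; a small follow-up example:
from collections import Counter

my_list = [[1, 0], [1, 2], [1, 1], [1, 0]]
counts = Counter(map(tuple, my_list))
for elem, n in counts.most_common():    # most frequent first
    print list(elem), n                 # e.g. [1, 0] 2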
|
creating a table in sqlite3 python
Question: I apologize in advance for asking such a basic question but I am new to
SQlite3 and having trouble starting. I am trying to build a database with one
table. I used the following code to build a table.
import sqlite3
conn = sqlite3.connect('example.db')
c = conn.cursor()
c.execute('''CREATE TABLE mytable
(start, end, score)''')
but whenever I try to update or access the table it seems that it doesn't exist,
or maybe it exists in a different database. I also tried creating a table
called example.mytable but I got the error `sqlite3.OperationalError: unknown
database example`.
What am I missing? Thanks
Answer: I think that a commit is needed after inserts (schema changes such as new
tables should automatically commit). I would suggest adding the full path to
your database as well to make sure you are accessing the same location next
time round.
Here is an extension on your code:
import sqlite3
def create():
try:
c.execute("""CREATE TABLE mytable
(start, end, score)""")
except:
pass
def insert():
c.execute("""INSERT INTO mytable (start, end, score)
values(1, 99, 123)""")
def select(verbose=True):
sql = "SELECT * FROM mytable"
recs = c.execute(sql)
if verbose:
for row in recs:
print row
db_path = r'C:\Users\Prosserc\Documents\Geocoding\test.db'
conn = sqlite3.connect(db_path)
c = conn.cursor()
create()
insert()
conn.commit() #commit needed
select()
c.close()
Output:
(1, 99, 123)
After closing the program if I log onto the SQLite database the data is still
there.
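As an aside (not part of the original answer), the bare try/except around the CREATE
can hide real errors; SQLite's `CREATE TABLE IF NOT EXISTS` achieves the same "create
it only once" behaviour more safely:
import sqlite3

conn = sqlite3.connect(r'C:\Users\Prosserc\Documents\Geocoding\test.db')  # same path as above
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS mytable
             (start, end, score)""")
conn.commit()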
|
How to view file handling functions documentation using help function in Python 3?
Question: I'm trying to view the documentation for file-handling functions inside
Python IDLE using the `help()` function.
So far, I've done and got the following:
>>> fh = open('file.txt', 'w')
>>> help(fh.seek)
Help on built-in function seek:
seek(...)
How can I get the documentation?
Answer:
>>> import io
>>> help(io.FileIO.seek)
Returns:
Help on method_descriptor:
seek(...)
seek(offset: int[, whence: int]) -> None. Move to new file position.
Argument offset is a byte count. Optional argument whence defaults to
0 (offset from start of file, offset should be >= 0); other values are 1
(move relative to current position, positive or negative), and 2 (move
relative to end of file, usually negative, although many platforms allow
seeking beyond the end of a file).
Note that not all file objects are seekable.
(END)
(Python 3.4.0)
|
Python Oauth Request to Twitter Yields Response 403 - SSL is Required
Question: Using Python's oauth2 library I've been attempting to implement a three-legged
oauth request (as outlined here - <http://www.linuxuser.co.uk/news/handling-
twitters-three-legged-oauth-process>) to Twitter only to obtain a 403 response
- "SSL is required" error.
I've been using this guide
(<https://bitbucket.org/david/django-oauth-plus/wiki/consumer_example>) to help
set up my Django 1.5 app to make these requests, but to no avail.
Version of Python I'm currently using is 2.7.
(I have tried playing with adding the verify parameter for requests.get call
and using add_certificate on my client)
The specific area I am looking to pinpoint is how I can add SSL to my request.
Here is a snapshot of the code as it currently stands --
import oauth2 as oauth
import requests
import urlparse
from django.shortcuts import render_to_response
from django.http import HttpResponseRedirect
from django.conf import settings
from twitterapp.models import User
consumer = oauth.Consumer(settings.TWITTER_CONSUMER_KEY, settings.TWITTER_CONSUMER_SEC)
client = oauth.Client(consumer)
client.add_certificate
request_token_url = 'http://api.twitter.com/oauth/request_token'
access_token_url = 'http://api.twitter.com/oauth/access_token'
authorize_url='https://api.twitter.com/oauth/authorize'
def signin(request):
oauth_request = oauth.Request.from_consumer_and_token(consumer, http_url=request_token_url, parameters={'oauth_callback':callback_url})
oauth_request.sign_request(oauth.SignatureMethod_HMAC_SHA1(), consumer, None)
response = requests.get(request_token_url, headers=oauth_request.to_header(), verify=True)
request_token = dict(urlparse.parse_qsl(response.content))
url = 'https://api.twitter.com/oauth/authorize?oauth_token=%s' % request_token['oauth_token']
return HttpResponseRedirect(url)
Answer: Here:
request_token_url = 'http://api.twitter.com/oauth/request_token'
access_token_url = 'http://api.twitter.com/oauth/access_token'
These requests need to be via `https://`
So
request_token_url = 'https://api.twitter.com/oauth/request_token'
access_token_url = 'https://api.twitter.com/oauth/access_token'
Should do the trick.
|
Hard time finding Python-Numpy deg2rad function
Question: Title says it all, I somehow can not find that function. Obviously it's inside
the Numpy package (numpy.core.umath.deg2rad) and I've tried importing it but
to no avail. Anyone care to chime in?
* import numpy as np - np.deg2rad doesn't even show up
* from numpy import* - umath.deg2rad shows up, but it raises an error, ''name 'umath' is not defined''
Answer:
from numpy.core.umath import deg2rad
# then
deg2rad(...)
Or
import numpy as np
np.core.umath.deg2rad(...)
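For what it's worth, `deg2rad` is also exposed at the top level of any reasonably
recent NumPy (with `np.radians` as an equivalent spelling), so the plain form should
normally work:
import numpy as np

print(np.deg2rad(180.0))   # 3.141592653589793
print(np.radians(90.0))    # 1.5707963267948966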
|
can write to rs232 serial instrument but can't read from it
Question: I have an RS232 serial device and I am trying to read from and write to it
using Python with PyVISA. I am able to write commands to it using "write", but
if I try to use "read" or "ask" I get a timeout error.
I can read and write to it easily through LabVIEW or Tera Term, but I can't
read from it using Python.
Here is the Python code that is not working:
import visa as v
si = v.SerialInstrument("COM1", delay = 0.1)
si.clear()
si.timeout = 3
si.baud_rate = 9600
si.data_bits = 8
si.stop_bits = 1
command = '0'
while command != 'end':
rorw = raw_input('ask, read, or write? >>')
command = raw_input('enter command code >>')
if rorw == 'write':
write1 = si.write(command)
print write1
elif rorw == 'read':
read1 = si.read()
print read1
else:
ask1 = si.ask(command)
print ask1
Answer: My guess is that you have a termination-character issue: try setting term_chars
to \n or \r\n. If that doesn't work, serial communication is very easy with
pyserial. You would define a custom method equivalent to PyVISA's ask() using
write and readline, and probably a __del__ method to close the port if anything
goes wrong. Good luck.
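A minimal pyserial sketch of such an `ask()` helper (an illustration, not the original
answer's code), assuming COM1 at 9600 baud and a CR/LF-terminated instrument -- adjust
the terminator to whatever your device actually expects:
import serial

class SimpleInstrument(object):
    def __init__(self, port='COM1', baudrate=9600, timeout=3):
        self.ser = serial.Serial(port, baudrate=baudrate, timeout=timeout)

    def write(self, command, terminator='\r\n'):
        self.ser.write(command + terminator)    # many instruments require a terminator

    def ask(self, command, terminator='\r\n'):
        self.write(command, terminator)
        return self.ser.readline().strip()      # readline stops at '\n' or at the timeout

    def __del__(self):
        try:
            self.ser.close()                    # make sure the port is released
        except Exception:
            pass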
|