Do python list comprehensions get converted to pure C?
Question: I've been told multiple times that Python list comprehensions are better than
nested `for`/`if` blocks simply because they are converted to pure C and compiled.
However, I cannot find any documentation to support this; is it true?
For example, the following code actually does result in a 1/3 performance
gain (on my machine):
import time
start = time.time()
a = []
for i in range(0, 100000000):
a.append(i)
end = time.time()
print("Time it took: " + str((end - start)))
start = time.time()
b = [i for i in range(0, 100000000)]
end = time.time()
print("Time it took: " + str((end - start)))
CPython results:
Time it took: 12.077988863
Time it took: 8.65817594528
PyPy results:
Time it took: 4.9356508255
Time it took: 0.686870098114
Answer: This is implementation specific; in CPython it's not compiled to C, but
it is compiled to what is called "bytecode".
We can inspect it with the `dis` module, which disassembles bytecode.
>>> import dis
>>> def foo():
... return [i//2 for i in range(20)]
...
>>> dis.dis(foo)
2 0 BUILD_LIST 0
3 LOAD_GLOBAL 0 (range)
6 LOAD_CONST 1 (20)
9 CALL_FUNCTION 1
12 GET_ITER
>> 13 FOR_ITER 16 (to 32)
16 STORE_FAST 0 (i)
19 LOAD_FAST 0 (i)
22 LOAD_CONST 2 (2)
25 BINARY_FLOOR_DIVIDE
26 LIST_APPEND 2
29 JUMP_ABSOLUTE 13
>> 32 RETURN_VALUE
In other implementations, it could be compiled to C. I am much less familiar
with those other implementations, so I'll withhold commentary on them.
Regarding your statement:
> list comprehensions are better than nested for, if ...
In some cases, it makes more sense to use plain `for` loops and `if` conditions.
The best approach is to use whatever is most readable to you, and then profile
your code if you need to improve performance. Sometimes you don't actually need
the list or generator created by the list comprehension or generator expression,
and when that's the case, using the for loop with the if condition is
generally more performant.
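As a small illustration (not from the original question): if you only need to
test a condition, building the list first is wasted work, while a generator
expression can exit early:
# Materializes the entire list before testing truthiness:
if [x for x in range(10**6) if x == 123]:
    pass
# Early-exit alternative that never builds a list:
if any(x == 123 for x in range(10**6)):
    pass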
### speculation:
However, in the example you give, you specifically are intending to create a
list. Appending to the list in a loop is the canonical way to do it, but as
you know from checking your runtimes, the list comprehension is faster. The
reason it is faster is because the designers of Python know a list
comprehension will be populating a list, so there are opportunities to
optimize it. In a for loop, optimizing it would be more difficult, because you
have to be able to process much more complex statements there.
I imagine your huge improvement in PyPy is due to an optimization that
recognizes the `i for i` part and perhaps simply calls `list` on your
iterator.
|
creating python lists on the fly and comparing them
Question: I need to open multiple files and compare the contents of them. The way I am
doing in now is dirty. I want to know an elegant way of doing it. I need to
open multiple files and see the common elements between them.
My code looks like:
sample_1=[]
sample_3=[]
sample_2=[]
sample_4=[]
for line in open("sample_EC1.Regions","r"):
line=line.strip()
sample_1.append(line)
for line in open("sample_EC2.Regions","r"):
line=line.strip()
sample_2.append(line)
for line in open("sample_EC3.Regions","r"):
line=line.strip()
sample_3.append(line)
for line in open("sample_EC4.Regions","r"):
line=line.strip()
sample_4.append(line)
CommonRegions = list(set(sample_2)&set(sample_3)&set(sample_4)&set(sample_1))
print CommonRegions
This code is dirty: as the number of files increases, I need to change the
code every time, and if the number of files goes beyond 50, it becomes very
difficult to edit.
Answer:
total=4
with open("sample_EC1.Regions","r") as f:
commonregions = {line.strip() for line in f}
for i in range(2,total+1):
with open("sample_EC"+i+".Regions","r") as f:
#set comprehension
sample = {line.strip() for line in f}
commonregions = commonregions & sample
print commonregions
Instead of writing a separate loop for each file, place them all in one loop
and perform the intersection as you go.
improvements:
1. Two important improvements: firstly, make sure you close files again, and secondly, don't redundantly convert lists to sets: `with open(...) as f: s = {l.strip() for l in f}`. The `{l.strip() for l in f}` construct is called a set comprehension: an expression that generates a set on the fly.
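A hedged sketch of the same idea generalized with `glob`, so adding files needs
no code changes (the wildcard pattern is an assumption based on the file names
in the question):
import glob
common = None
for path in sorted(glob.glob("sample_EC*.Regions")):
    with open(path) as f:
        regions = {line.strip() for line in f}
    common = regions if common is None else common & regions
print common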
|
sklearn.cross_validation.cross_val_score multiple cpu?
Question: I am trying to get a score for a model through cross validation with
sklearn.cross_validation.cross_val_score. According to its
[documentation](http://scikit-
learn.org/stable/modules/generated/sklearn.cross_validation.cross_val_score.html),
the parameter n_jobs sets the number of cpus that you can utilize. However,
when I set it to -1 (or other values not equal to 1), the program complains
that:
> AttributeError: '_MainProcess' object has no attribute '_daemonic'
Attached below is a minimal example, and the corresponding error message.
import sklearn.datasets
import sklearn.cross_validation
import sklearn.linear_model
d = sklearn.datasets.load_iris()
X = d.data
y = d.target
sklearn.cross_validation.cross_val_score(sklearn.linear_model.LogisticRegression(), X, y, n_jobs=-1)
* * *
AttributeError
Traceback (most recent call last)
<ipython-input-57-3b5f62e97b0d> in <module>()
----> 1 sklearn.cross_validation.cross_val_score(gb_clf, train, train_label, n_jobs=2)
/usr/lib/python3.4/site-packages/sklearn/cross_validation.py in cross_val_score(estimator, X, y, scoring, cv, n_jobs, verbose, fit_params, score_func, pre_dispatch)
1150 delayed(_cross_val_score)(clone(estimator), X, y, scorer, train, test,
1151 verbose, fit_params)
-> 1152 for train, test in cv)
1153 return np.array(scores)
1154
/usr/lib/python3.4/site-packages/sklearn/externals/joblib/parallel.py in __call__(self, iterable)
468 self._pool = None
469 else:
--> 470 if multiprocessing.current_process()._daemonic:
471 # Daemonic processes cannot have children
472 n_jobs = 1
AttributeError: '_MainProcess' object has no attribute '_daemonic'
Additional information: I am running this script in IPython notebook mode. The
error also occurs under console mode, or under the normal Python interpreter
(per @larsmans' comment).
Answer: The combination of IPython notebook, NumPy-heavy code (like scikit-learn) and
joblib/multiprocessing (used when `n_jobs != 1`) is problematic and can cause
all kinds of crashes, freezes and strange error messages. The NumPy/SciPy
community is aware of this, but has AFAIK not yet diagnosed what exactly is
going wrong, let alone produced a fix.(*) I advise you to run this code
outside the IPython notebook.
(*) Be sure to search the mailing lists for the various projects if you're
interested. The problem probably stems from IPython's use of ZeroMQ, a
multithreaded C library, in conjunction with Python `multiprocessing`'s habit
of calling `fork` without `exec` in violation of
[POSIX](http://pubs.opengroup.org/onlinepubs/9699919799/functions/fork.html#tag_16_156_08).
Similar problems occur when NumPy calls multithreaded linear algebra libraries
in a `multiprocessing` context.
|
use cx_freeze with mysql-connector
Question: I'm trying to make an exe program from a fully functional Python 3.4 script,
but I can't embed the dependencies for the official MySQL connector. This is a
code sample with the problem:
import mysql.connector
from settings import *
connLocal = mysql.connector.connect( host = DB_CRM_HOST,
user = DB_CRM_USER,
passwd = DB_CRM_PASS,
db = DB_CRM_DB )
cursorLocal = connLocal.cursor ()
sqlStr = "SELECT * FROM Users"
cursorLocal.execute( sqlStr)
for row in cursorLocal.fetchall():
print(row)
and this is my setup script:
'''setup script'''
import sys
from cx_Freeze import setup, Executable
EXCLUDES = ['_ssl', # Exclude _ssl
'pyreadline', 'difflib', 'doctest', 'locale',
'optparse', 'pickle', 'calendar'] # Exclude standard library
PACKAGES = []
INCLUDES = []
SCRIPT_NAME = "sync_crm2web.py"
EXE_NAME = "sync_crm2web.exe"
PRJ_NAME = "sync_crm2web"
VERSION = 1.0
AUTHOR = "Antonio"
DESCRIPTION = "Sincronizzazione crm sito internet"
BASE = "Console" #"Win32GUI"
#------------------------------------------------------------------------------
BUILD_EXE_OPTIONS = {"packages": PACKAGES,
"excludes": EXCLUDES,
"includes": INCLUDES,
"path": sys.path,
'append_script_to_exe':False,
'build_exe':"dist/bin",
'compressed':True,
'copy_dependent_files':True,
'create_shared_zip':True,
'include_in_shared_zip':True,
'optimize':2,}
EXE = Executable(script=SCRIPT_NAME,
base=BASE,
compress=True,
targetDir="dist",
targetName=EXE_NAME,
initScript=None,
copyDependentFiles=True,
appendScriptToExe=True,
appendScriptToLibrary=False,
)
setup(name=PRJ_NAME,
version=VERSION,
author=AUTHOR,
description=DESCRIPTION,
options={"build_exe": BUILD_EXE_OPTIONS},
executables=[EXE])
Also, forcing PACKAGES and INCLUDES with combinations of mysql,
mysql-connector, and mysql.connector does not seem to work.
I always obtain:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\cx_Freeze\initscripts\Console.py", line 27, in <module>
exec(code, m.__dict__)
File "sync_crm2web.py", line 1, in <module>
File "c:\python\32-bit\3.4\lib\importlib\_bootstrap.py", line 2214, in _find_and_load
File "c:\python\32-bit\3.4\lib\importlib\_bootstrap.py", line 2189, in _find_and_load_unlocked
File "c:\python\32-bit\3.4\lib\importlib\_bootstrap.py", line 321, in _call_with_frames_removed
File "c:\python\32-bit\3.4\lib\importlib\_bootstrap.py", line 2214, in _find_and_load
File "c:\python\32-bit\3.4\lib\importlib\_bootstrap.py", line 2201, in _find_and_load_unlocked
ImportError: No module named 'mysql'
Can someone help me? full cx_freeze log here <http://pastebin.com/S3TMzAnB>
Answer: Solved using input from [Cx-Freeze Error - Python
34](http://stackoverflow.com/questions/23920073/cx-freeze-error-python-34) \-
[www.lfd.uci.edu/~gohlke/pythonlibs](http://www.lfd.uci.edu/~gohlke/pythonlibs/#cx_freeze)
so I've installed cx_Freeze from this installer and not from PIP.
After that I commented out the exclusion of the needed **_ssl** module and
placed all libraries (the 'build_exe' property) in the same folder as the
executable.
'''setup script'''
import sys
from cx_Freeze import setup, Executable
EXCLUDES = [#'_ssl', # !!!! COMMENTED !!!!
'pyreadline', 'difflib', 'doctest', 'locale',
'optparse', 'pickle', 'calendar'] # Exclude standard library
PACKAGES = []
INCLUDES = []
SCRIPT_NAME = "sync_crm2web.py"
EXE_NAME = "sync_crm2web.exe"
PRJ_NAME = "sync_crm2web"
VERSION = 1.0
AUTHOR = "Antonio"
DESCRIPTION = "Sincronizzazione crm sito internet"
BASE = "Console" #"Win32GUI"
#------------------------------------------------------------------------------
BUILD_EXE_OPTIONS = {"packages": PACKAGES,
"excludes": EXCLUDES,
"includes": INCLUDES,
"path": sys.path,
'append_script_to_exe':False,
'build_exe':"dist", # !!!! BEFORE WAS dist/bin !!!!
'compressed':True,
'copy_dependent_files':True,
'create_shared_zip':True,
'include_in_shared_zip':True,
'optimize':2,}
EXE = Executable(script=SCRIPT_NAME,
base=BASE,
compress=True,
targetDir="dist",
targetName=EXE_NAME,
initScript=None,
copyDependentFiles=True,
appendScriptToExe=True,
appendScriptToLibrary=False,
)
setup(name=PRJ_NAME,
version=VERSION,
author=AUTHOR,
description=DESCRIPTION,
options={"build_exe": BUILD_EXE_OPTIONS},
executables=[EXE])
|
Cookie handling with Scrapy during login
Question: I'm trying to crawl some data from Amazon Mechanical Turk, where I can
only view the first few pages of the results without logging in. It turned out
that Amazon requires cookies to record sessions, so the simplest way of just
submitting a FormRequest, as many examples do, won't work.
I've tried to pass some cookies around, though I thought Scrapy would handle
that automatically, but it's not working. If I do open_in_browser after
submitting the form, I get the Amazon page saying that I should enable cookies
in order to log in.
Then I came to another post where he uses selenium to get the cookies. I've
also tried it and the same occurs.
Here's what I've got right now. I've add `COOKIES_ENABLED = True` to
`settings.py`
By adding COOKIES_DEBUG to settings and looking at the log, I think cookies are
received and set even with a plain InitSpider, without Selenium. But it just
won't work.
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request, FormRequest
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.utils.response import open_in_browser
from mturk.items import MturkItem
from selenium import webdriver
class MturkSpider(Spider):
name = "AMT"
allowed_domains = ["mturk.com","amazon.com"]
start_url='https://www.mturk.com/mturk/viewhits?searchWords=&selectedSearchType=hitgroups&sortType=Title%3A1&pageNumber=1&searchSpec=HITGroupSearch%23T%231%2310%23-1%23T%23%21%23%21Title%211%21%23%21'
login_page = "https://www.mturk.com/mturk/beginsignin"
formdata ={'create':'0','email': '[email protected]', 'password': '1234'}
def get_cookies(self):
driver = webdriver.Firefox()
driver.implicitly_wait(30)
base_url = "https://www.mturk.com/mturk/beginsignin"
driver.get(base_url)
driver.find_element_by_name("email").clear()
driver.find_element_by_name("email").send_keys("[email protected]")
driver.find_element_by_name("password").clear()
driver.find_element_by_name("password").send_keys("1234")
driver.find_element_by_id("signInSubmit-input").click()
cookies = driver.get_cookies()
driver.close()
return cookies
def start_requests(self):
self.my_cookies = self.get_cookies()
yield Request(self.login_page,
cookies = self.get_cookies(),
callback = self.login,
)
def login(self, response):
yield FormRequest.from_response(response,
formdata = self.formdata,
# cookies=self.my_cookies,
callback = self.after_login,
)
def after_login(self,response):
open_in_browser(response) # where it says I need to enable cookies
yield Request(self.start_url,
# cookies=self.my_cookies,
callback = self.parse_page,
)
def parse_page(self, response):
# do the parsing, where I can successfully crawl the first few pages
I'm fairly new to Python and this community. I have to say my knowledge is quite
limited in this area and I can only learn from others' work. Does anybody have
suggestions to make it work?
I've found solutions with urllib2/mechanize regarding Amazon login, but no
solution using Request. I thought the case was similar here?
UPDATE: I've solved it myself. It seems there is no need to use Selenium at all.
The problem was that I had to specify proper headers when submitting requests.
I manually set all the headers grabbed from my browser and it worked.
Answer: I had the exact same problem, but rather than manually setting the Request
headers, I found that one of the header values was for the User Agent. Setting
this value in my project's settings file did the trick, and now I do not have to
hard-code the header values:
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; rv:32.0) Gecko/20100101
Firefox/32.0'
or whatever value for the browser you're using.
Now, the cookies are working properly for me--automatically.
|
Using command line arguments to launch url in system web browser
Question: I would like to execute a script with parameters; these parameters must be
sent into the URL.
So, the main problem is how to do this task.
That's my test script to do this, I used sys.argv ...
#!/usr/bin/python
import sys, os
print "The name is :", sys.argv[0]
print "number of arguments :", len(sys.argv)
print "arguments are :", str(sys.argv)
Answer: The arguments of the script can be accessed via the sys.argv array. For
example:
import sys
if len(sys.argv) == 3:
print "http://www.google.de?q=%s&%s" % (sys.argv[1], sys.argv[2])
else:
print "Usage: <Explain params here>"
Every argument that is separated with a space on the command line can be
accessed via its index in the sys.argv array.
Is this what you were looking for?
EDIT:
If you have a more complex use case where you need a more elaborate argument
parser, you can use the
[argparse](https://docs.python.org/2.7/library/argparse.html) module of
Python.
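For example, a minimal sketch (the parameter names are made up for
illustration):
import argparse
parser = argparse.ArgumentParser(description="Build a search URL")
parser.add_argument("query", help="first URL parameter")
parser.add_argument("extra", help="second URL parameter")
args = parser.parse_args()
print "http://www.google.de?q=%s&%s" % (args.query, args.extra)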
|
Multiplying very large 2D-array in Python
Question: I have to multiply very large 2D arrays in Python around 100 times. Each
matrix consists of `32000x32000` elements.
I'm using `np.dot(X,Y)`, but it takes a very long time for each
multiplication... Below is an instance of my code:
import numpy as np
X = None
for i in range(100):
    multiplying = True
    if X is None:
        X = generate_large_2darray()
        multiplying = False
    else:
        Y = generate_large_2darray()
    if multiplying:
        X = np.dot(X, Y)
Is there any other method much faster?
**Update**
Here is a screenshot showing the htop interface. My python script is using
only one core. Also, after 3h25m only 4 multiplications have been done.

**Update 2**
I've tried to execute:
import numpy.distutils.system_info as info
info.get_info('atlas')
but I've received:
/home/francescof/.local/lib/python2.7/site-packages/numpy/distutils/system_info.py:564: UserWarning: Specified path /home/apy/atlas/lib is invalid. warnings.warn('Specified path %s is invalid.' % d) {}
So, I think it's not well-configured.
Vice versa, regarding `blas` I just receive `{}`, with no warnings or errors.
Answer: As suggested by **ali_m**, using a BLAS library can speed up the
operations. However, the problem on my system was a bad configuration of
numpy. Here is the solution:
1) make sure you have all required libraries (you can use ATLAS, OpenBLAS,
etc.). I've chosen ATLAS in my case since it is directly supported in Ubuntu.
`sudo apt-get install libatlas3gf-base libatlas-base-dev libatlas-dev`
2) remove any previous numpy installations, e.g., `pypm uninstall numpy` (if
you installed it using ActivePython)
3) install again numpy using pip: `pip install numpy`
4) make sure your atlas is correctly linked:
import numpy.distutils.system_info as info
info.get_info('atlas')
ATLAS version 3.8.4 built by buildd on Sat Sep 10 23:12:12 UTC 2011:
UNAME : Linux crested 2.6.24-29-server #1 SMP Wed Aug 10 15:58:57 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
INSTFLG : -1 0 -a 1
ARCHDEFS : -DATL_OS_Linux -DATL_ARCH_HAMMER -DATL_CPUMHZ=1993 -DATL_USE64BITS -DATL_GAS_x8664
F2CDEFS : -DAdd_ -DF77_INTEGER=int -DStringSunStyle
CACHEEDGE: 393216
F77 : gfortran, version GNU Fortran (Ubuntu/Linaro 4.6.1-9ubuntu2) 4.6.1
F77FLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -Wa,--noexecstack -fPIC -m64
SMC : gcc, version gcc (Ubuntu/Linaro 4.6.1-9ubuntu2) 4.6.1
SMCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -Wa,--noexecstack -fPIC -m64
SKC : gcc, version gcc (Ubuntu/Linaro 4.6.1-9ubuntu2) 4.6.1
SKCFLAGS : -fomit-frame-pointer -mfpmath=387 -O2 -falign-loops=4 -Wa,--noexecstack -fPIC -m64
{'libraries': ['lapack', 'f77blas', 'cblas', 'atlas'], 'library_dirs': ['/usr/lib/atlas-base/atlas', '/usr/lib/atlas-base'], 'define_macros': [('ATLAS_INFO', '"\\"3.8.4\\""')], 'language': 'f77', 'include_dirs': ['/usr/include/atlas']}
|
How to run a bat file from Python 3.3.2
Question: I am developing a program in Python. However, I have a small problem: the
data that my program needs comes from the result of a bat file, so I would like
something that lets me run just my Python script without using the bat file
directly.
Right now, I have to execute the following steps:
1. I create a txt file where I enter the input data (for example: A.txt)
2. I run the BAT file using the file A.txt
3. It creates a new txt file which has the result of running the BAT file (for example: B.txt)
4. I open the file B.txt, copy all the data, and paste it into a new txt file, where it will serve as input data for my program in Python
5. I run my program in Python
6. It creates a new txt file which has the final results
7. End
How can I improve this? I mean, what scripts do I need so that I can run my
program in Python without going through the BAT file and the steps described
above?
Answer: I'm pretty sure whatever you're doing in the `bat` file you can do in a
Python module. So:
**Write a python module substitute for your *.bat**
Write a Python module containing a function that reads the file `A.txt` and
returns the results. Then use that module from your program.
For instance:
# calculations.py, the bat substitute.
def calculate_from_input(file_name):
    input_ = open(file_name, 'rb')  # The mode you read the file in depends on your purposes.
    # .... Do your calculations
    return result  # Return a list of velocities, for example.
Then in your main program
from calculations import calculate_from_input
calculated_data = calculate_from_input("A.txt") # Then, you have the data ready to use in your program.
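If the BAT file has to stay around for now, a hedged interim sketch is to drive
it from Python with `subprocess` and read its output file directly (the .bat
name is a placeholder, and whether it takes A.txt as an argument is an
assumption):
import subprocess
# Run the batch file (steps 2-3 from the question) without leaving Python.
subprocess.call(["cmd", "/c", "process.bat", "A.txt"])
with open("B.txt") as f:
    data = f.read()  # step 4: use B.txt directly as the program's input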
|
In Python is there a function for the "in" operator
Question: Is there any Python function for the "in" operator, like what we have for
operator.lt, operator.gt, etc.? I want to use this function to do something
like:
operator.in(5, [1,2,3,4,5,6])
>> True
operator.in(10, [1,2,3,4,5,6])
>> False
Answer: Yes, use
[`operator.contains()`](https://docs.python.org/2/library/operator.html#operator.contains);
note that the order of operands is reversed:
>>> import operator
>>> operator.contains([1,2,3,4,5,6], 5)
True
>>> operator.contains([1,2,3,4,5,6], 10)
False
You may have missed the handy [mapping
table](https://docs.python.org/2/library/operator.html#mapping-operators-to-
functions) at the bottom of the documentation.
|
What am I doing wrong in this QBO v3 API (IPP) Attachments upload python request?
Question: Intuit offers [these
instructions](https://developer.intuit.com/docs/0025_quickbooksapi/0050_data_services/020_key_concepts/attachments#Request_Body)
for uploading attachments (which become [Attachable
objects](https://developer.intuit.com/docs/0025_quickbooksapi/0050_data_services/030_entity_services_reference/attachable)
that can be associated with one or more transactions).
I **believe** I'm using python's requests module (via [rauth's OAuth1Session
module](http://rauth.readthedocs.org/en/latest/api/#oauth-1-0-sessions)—see
below for how I'm creating the session object) to generate these requests.
Here's the code leading up to the request:
print request_type
print url
print headers
print request_body
r = session.request(request_type, url, header_auth,
self.company_id, headers = headers,
data = request_body, **req_kwargs)
result = r.json()
print json.dumps(result, indent=4)
and the output of these things:
POST
https://quickbooks.api.intuit.com/v3/company/0123456789/upload
{'Accept': 'application/json'}
Content-Disposition: form-data; name="Invoice 003"; filename="Invoice 003.pdf"
Content-Type: application/pdf
<@INCLUDE */MyDir/Invoice 003.pdf*@>
{
"Fault": {
"type": "SystemFault",
"Error": [
{
"Message": "An application error has occurred while processing your request",
"code": "10000",
"Detail": "System Failure Error: Cannot consume content type"
}
]
},
"time": "[timestamp]"
}
I have confirmed (by uploading an attachment through the QBO web UI and then
querying the Attachable object through the API) that application/pdf is
included in the list of acceptable file types.
At sigmavirus24's suggestion, I tried removing the Content-Type line from the
headers, but I got the same result.
Here's how I'm creating the session object (which, again, is working fine for
other QBO v3 API requests of every type you see in Intuit's API Explorer):
from rauth import OAuth1Session
def create_session(self):
if self.consumer_secret and self.consumer_key and self.access_token_secret and self.access_token:
session = OAuth1Session(self.consumer_key,
self.consumer_secret,
self.access_token,
self.access_token_secret,
)
self.session = session
else:
raise Exception("Need four creds for Quickbooks.create_session.")
return self.session
What might I be missing here?
EDIT: current area of exploration is
[here](https://github.com/litl/rauth/issues/109); I just formed the header you
see (that has the "INCLUDE" string there) directly. Perhaps I should be using
rauth to attach the file...
Answer: Without being able to see what code you're using with requests, I'm going to
take a shot in the dark and tell you to remove setting your own `Content-Type`.
You probably don't want that. It looks like you want `multipart/form-data`, and
requests will set that on its own if you stop fighting it.
* * *
It looks like you're missing the boundaries that QuickBooks is expecting
(based on what you linked).
---------------------------acebdf13572468
Content-Disposition: form-data; name="file_content_01"; filename="IMG_0771.jpg"
Content-Type: image/jpeg
<@INCLUDE *Y:\Documents\IMG_0771.jpg*@>
---------------------------acebdf13572468--
The first and last line above seem to be what you're missing.
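For instance, a hedged sketch of letting requests build the multipart body via
its `files` argument (the field name and the call shape mirror the question's
code, not a confirmed Intuit schema):
files = {'file_content_01': ('Invoice 003.pdf',
                             open('/MyDir/Invoice 003.pdf', 'rb'),
                             'application/pdf')}
r = session.request('POST', url, header_auth, self.company_id,
                    headers={'Accept': 'application/json'}, files=files)
requests then generates the boundary and the multipart `Content-Type` header
for you.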
|
How to go through all possibilities efficiently?
Question: I have run into this problem before, but it hasn't been too important until
now: going through all combinations given 3 or 4 variables. My current project
is in Python, so here is an example:
def function(var1, var2, var3):
if var1:
if var2:
if var3:
foo(bar)
else:
bar(foo)
else:
if var3:
...
Even this example is a bit simpler than the code I am working with because
there are 3 to 4 possibilities for each variable.
I am unfamiliar with many programming concepts and I have a feeling that there
is already a good answer to this question. Any help is appreciated. Thanks in
advance!
Answer: The canonical Python replacement for lots of `if`s is a dictionary:
from functools import partial
def function(var1, var2, var3):
choices = {(True, True, True): partial(foo, bar),
(True, True, False): partial(bar, foo),
...}
choices[tuple(map(bool, (var1, var2, var3)))]()
(In simple cases like this you could use `lambda` rather than
[`functools.partial`](https://docs.python.org/2/library/functools.html#functools.partial)).
Or, in your case:
choices = {("past", "simple", 1, False): ...,
...}
|
Bundle sqldrivers into .exe using py2exe
Question: In my first attempts, my pyQt application bundled with py2exe refused to
connect to the sqlite database although it was working in its python version.
I guessed that it was a problem of libraries not loaded into the .exe
application. I solved that problem by including the full path to the sqlite
DLL into the `setup.py` file and thus copying this DLL to the executable
folder.
Now I would like to include this DLL in the .exe file in order to "hide"
this DLL from my users. **Do you have a clue how to do that?**
my current setup.py:
from distutils.core import setup
import py2exe
setup(
windows=[{
"script": 'myscript.py'
}],
options={
'py2exe': {
"dll_excludes": [
"MSVCP90.dll",
"MSWSOCK.dll",
"mswsock.dll",
"powrprof.dll",
],
'includes': [
'sip',
'PyQt4.QtNetwork',
],
'bundle_files': 1,
}
},
data_files = [
'config.ini',
'template.htm',
# This is the File that I wish to be "hidden"
('sqldrivers', ('C:\Python27\Lib\site-packages\PyQt4\plugins\sqldrivers\qsqlite4.dll',)),
],
zipfile=None,
)
Answer: I ran into the same problem and you are half way to solving the issue. The
first part of the problem is as you identified, getting the file into the EXE.
I can't speak to the correctness of your py2exe solution as I am using
pyinstaller, but that is the general idea. You need to get the qsqlite4.dll
into a sqldrivers directory within your single file app.
The second part is that your main .py needs to have the path added to its
running directory which will now contain that sqldrivers folder. What you will
need to do is get the relative path to where your main .py is running and set
that directory as your library path in your QT application. I use the standard
resource_path() function for pyinstaller, but using something like this should
work for py2exe:
import os, sys

def resource_path(relative_path):
    if getattr(sys, 'frozen', False):  # set by py2exe/pyinstaller when frozen
        base_path = os.path.dirname(sys.executable)
    else:
        base_path = os.path.dirname(__file__)
    return os.path.join(base_path, relative_path)
Then you can use this code in the main function of your application
app = QApplication(sys.argv)
new_lib_path = app.libraryPaths()
new_lib_path.append(resource_path(''))
app.setLibraryPaths(new_lib_path)
. . .
With logging added, here is my app.libraryPaths() before and after:
08/25/2014 01:33:24 AM CRITICAL: Before[u'C:/dev/WORKSP~1/db/dist']
08/25/2014 01:33:24 AM CRITICAL: After[u'C:/dev/WORKSP~1/db/dist', u'C:\\Users\\jeff\\AppData\\Local\\Temp\\_MEI2042\\']
You could replace the '\' with '/' but I didn't bother, it still works with
windows separators.
|
nested loop for matplotlib graph of financial time series
Question: I am trying to print graphs for the selected tickers as I am learning Python
and matplotlib. I have written the following code and it works fine, except
for the legend, which prints the entire list of tickers. I understand why
it is doing that, but I don't understand how to get it to print only the
ticker related to that graph. I also feel that, as a beginner, I might
have written too many lines of code; if someone can guide me on how to
reduce this code, that will help me learn the right way.
#!/usr/bin/env python3
import numpy as np
import datetime
import pandas as pd
import pandas.io.data
from pandas import Series, DataFrame
import matplotlib.pyplot as plt
import matplotlib as mpl
tickers=["ADBE","AAPL","GME","SNDK"]
roll_pd=30
num_std=2
flist=[]
ts_cutoff_days=-220
flag=1
start=datetime.datetime(2008,7,1)
end=datetime.date.today()
for ticker in tickers:
time_srs=pd.io.data.get_data_yahoo(ticker,start,end)
#time_srs.head()
fname=ticker+"_ohlc.csv"
print (fname)
time_srs.to_csv(fname)
flist.append(fname)
for fn in flist:
#print(fn)
df=pd.read_csv(fn,index_col='Date',parse_dates=True)
df.head()
df.index
close_px=df['Adj Close']
ts=df['Adj Close'][ts_cutoff_days:]
if flag==0:
px_srs=close_px
else:
px_srs=ts
mavg=pd.rolling_mean(px_srs,roll_pd)
mstd=pd.rolling_std(px_srs,roll_pd)
rets=px_srs / px_srs.shift(1) - 1
rets.head()
plt.figure()
plt.plot(px_srs.index, px_srs,color='k',label=tickers)# this is where I am making the error
plt.plot(mavg.index, mavg,label='mavg',color='red')
plt.fill_between(mstd.index,mavg-num_std*mstd,mavg+num_std*mstd,color='b',alpha=0.2)
plt.legend(loc='best')
plt.axes().yaxis.grid(True)
plt.show()
if flag==0:
df1 = pd.io.data.get_data_yahoo(tickers,start, end)['Adj Close']
else:
df1 = pd.io.data.get_data_yahoo(tickers,start, end)['Adj Close'][ts_cutoff_days:]# I tried writing [ts] instead of ['Adj Close'][ts_cutoff_days:], but it gave me an error, so I had to write this above if-else condition to create df1 correctly
Answer: I think you should toss away your `flist`, as all the information is
already in `tickers`. Then you can do:
...
for ticker in tickers:
df=pd.read_csv(ticker + '_ohlc.csv',index_col='Date',parse_dates=True)
...
plt.plot(px_srs.index, px_srs,color='k',label=ticker)
If you are worried about changing the file names, then you can create a simple
function for that and use it in both reading and writing the file.
Another possibility would be to put a tuple into flist:
for ticker in tickers:
...
flist.append((ticker, fname))
And then use this tuple in your next loop:
for ticker, fn in flist:
...
This also enables you to use `ticker` as the plot label.
|
Layout for Client/Server project with common code
Question: I'm working on a client/server application in Python, where client and server
share a lot of code. How should the folder structure look like?
My idea is to have three folders with the code files in it
* server
* server.py
* etc.
* client
* client.py
* etc.
* common
* common.py
* etc.
But how can I import from common.py in server.py when server.py has to be
executable (can't be a package)?
Currently we have all files in the same folder but since the project got more
complex this isn't manageable anymore.
Answer: One solution is to have the executable scripts all at the top folder like
this:
* server
* server specific code
* client
* client specific code
* common
* common code
* server.py (executable script that imports from server and common)
* client.py (executable script that imports from client and common)
When deploying the server I just copy server.py, the server and the common
folder. Similar for the client.
It's not the ideal solution and I'd be thankful if someone comes up with a
better one but that is how I use it now.
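As a hedged sketch of what such a top-level script can look like (the module
and function names are made up, and in Python 2 each folder also needs an
`__init__.py` so it can be imported as a package):
# server.py at the project root
from common.helpers import load_config   # hypothetical module and function
from server.core import run_server       # hypothetical module and function
if __name__ == '__main__':
    run_server(load_config())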
|
Python output readable in Matlab
Question: I am a Python newbie and am trying to write a numpy array in a format
readable by Matlab: rows of the form [xi, yi, ti], separated by a
semi-colon.
In Python, I am currently able to write it in the following form, which is a
numpy array printed on screen/written to file as [[xi yi ti]].
Here is the code:
import math
import random
import numpy as np
SPOT = []
f = open('data_dump.txt', 'a')
for i in range(10):
X = random.randrange(6)
Y = random.randrange(10)
T = random.randrange(5)
SPOT.append([X,Y,T])
SPOT = np.array(SPOT)
f.write(str(SPOT[:]))
f.close()
Please suggest how I should proceed to be able to write this data in Matlab
readable format as mentioned above. Thanks in advance!
Sree.
Answer: It is not really necessary to write your `array` in a special format. Write it
to a normal `csv` and use
[`dlmread`](http://www.mathworks.com/help/matlab/ref/dlmread.html) to open it
in `matlab`.
On the `numpy` side, write your `array` using `np.savetxt('some_name.txt', arr,
delimiter=' ')`
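Applied to the question's code, a minimal sketch would be:
import numpy as np
SPOT = np.array(SPOT)
# One row per [xi, yi, ti] triple; fmt='%d' because the values are ints.
np.savetxt('data_dump.txt', SPOT, delimiter=',', fmt='%d')
Then, in Matlab, `M = dlmread('data_dump.txt', ',')` reads it back as a matrix.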
|
Python/matplotlib : getting rid of matplotlib.mpl warning
Question: I am using matplotlib using python 3.4. When I start my program, I have the
following warning message:
C:\Python34-32bits\lib\site-packages\matplotlib\cbook.py:123: MatplotlibDeprecationWarning: The matplotlib.mpl module was deprecated in version 1.3. Use `import matplotlib as mpl` instead. warnings.warn(message, mplDeprecation, stacklevel=1)
As far as I know I do not use mpl, and all my imports concerning matplotlib
are:
import matplotlib.pyplot as plt
import matplotlib.animation as animation
Anything I should do?
Answer: You can suppress that particular warning, which is probably the preferred way:
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
|
Algorithm for finding multiset permutation given lexicographic index
Question: I am trying to find an efficient algorithm to find permutation of a multiset,
given an index.
Ex: given `{1, 3, 3}`. All permutations in an ascending lexicographic order
are `{133, 313, 331}`. These elements are indexed as `{0, 1, 2}`. Given
`index=2`, the result is 331.
[I found an algorithm](http://stackoverflow.com/questions/8940470/algorithm-
for-finding-numerical-permutation-given-lexicographic-index) to find
permutation of a set given a lexicographic index. His algorithm is efficient:
O(n^2).
However, the algorithm was tested on a proper set (e.g. `{1, 2, 3}`), and is not
correct in my test with a multiset. I reproduce his Python code here so that you
can easily follow.
from math import factorial, floor #// python library
i=5 #// i is the lexicographic index (counting starts from 0)
n=3 #// n is the length of the permutation
p = range(1,n+1) #// p is a list from 1 to n
for k in range(1,n+1): #// k goes from 1 to n
d = i//factorial(n-k) #// use integer division (like division+floor)
print(p[d]),
p.remove(p[d]) #//delete p[d] from p
i = i % factorial(n-k) #// reduce i to its remainder
Answer:
# Python 2
from collections import Counter
from math import factorial
def count_permutations(counter):
values = counter.values()
return (
factorial(sum(values))/reduce(lambda a, v: a * factorial(v), values, 1)
)
def permutation(l, index):
l = sorted(l)
if not index:
return l
counter = Counter(l)
total_count = count_permutations(counter)
acc = 0
for i, v in enumerate(l):
if i > 0 and v == l[i-1]:
continue
count = total_count * counter[v] / len(l)
if acc + count > index:
return [v] + permutation(l[:i] + l[i + 1:], index - acc)
acc += count
raise ValueError("Not enough permutations")
Seems to work as expected
In [17]: for x in range(50): print x, permutation([1, 1, 2, 2, 2], x)
0 [1, 1, 2, 2, 2]
1 [1, 2, 1, 2, 2]
2 [1, 2, 2, 1, 2]
3 [1, 2, 2, 2, 1]
4 [2, 1, 1, 2, 2]
5 [2, 1, 2, 1, 2]
6 [2, 1, 2, 2, 1]
7 [2, 2, 1, 1, 2]
8 [2, 2, 1, 2, 1]
9 [2, 2, 2, 1, 1]
10---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[...]
ValueError: Not enough permutations
Time complexity: `O(n^2)`.
|
l10n support in ruby
Question: I am able to parse localized dates using [python locale
module](https://docs.python.org/2/library/locale.html) and posix localization
database:
import locale, datetime
locale.setlocale(locale.LC_TIME, 'tr_TR.UTF-8')
print datetime.datetime.strptime("1 Haziran 2014", "%d %B %Y")
===
Edit
This example loads the locale and datetime modules and parses the localized date
to create an instance of Python's datetime class. I'm looking specifically for
Ruby code that can parse localized dates using the POSIX database.
===
Is there any equivalent of this in ruby? If there is a ruby library like
python's locale module or
[Boost.Locale](http://www.boost.org/doc/libs/1_55_0/libs/locale/doc/html/dates_times_timezones.html)
in C++, can you give example code? I tried the [gettext
gem](https://rubygems.org/gems/gettext) and [locale
gem](http://rubydoc.info/gems/locale/) (I set current locale and tried
[Time.strptime](http://www.ruby-
doc.org/stdlib-2.1.2/libdoc/date/rdoc/DateTime.html#method-c-strptime), which
failed).
I do not expect to do custom `gsub` or i18n config-file parsing. I am
asking for code that uses the POSIX database to parse dates.
Answer: You will need 2 custom gems in your `Gemfile`
**Chronic:**
$ git clone git://github.com/mojombo/chronic.git
$ cd chronic && gem build chronic.gemspec
$ gem install chronic-*.gem
**Chronic-l10n:**
$ git clone git://github.com/luan/chronic-l10n.git
$ cd chronic-l10n && gem build chronic-l10n.gemspec
$ gem install chronic-l10n-*.gem
**Usage:**
require 'chronic'
require 'chronic-l10n'
Time.now #=> Sun Aug 27 23:18:25 PDT 2006
Chronic.locale = :'pt-BR'
Chronic.parse('amanhã')
#=> Mon Aug 28 12:00:00 PDT 2006
Chronic.parse('segunda', :context => :past)
#=> Mon Aug 21 12:00:00 PDT 2006
Chronic.parse('essa terça 5:00')
#=> Tue Aug 29 17:00:00 PDT 2006
Chronic.parse('essa terça 5:00', :ambiguous_time_range => :none)
#=> Tue Aug 29 05:00:00 PDT 2006
Chronic.parse('27 de maio', :now => Time.local(2000, 1, 1))
#=> Sat May 27 12:00:00 PDT 2000
Chronic.parse('27 de maio', :guess => false)
#=> Sun May 27 00:00:00 PDT 2007..Mon May 28 00:00:00 PDT 2007
Chronic.parse('6/4/2012', :endian_precedence => :little)
#=> Fri Apr 06 00:00:00 PDT 2012
|
Python unittest: TestSuite running only first TestCase
Question: Running `first_TestCase` and `second_TestCase` separately works fine. But
when I created a TestSuite, it runs only `first_TestCase`. Why is this
happening?
import unittest
from first_TestCase import first_TestCase
from second_TestCase import second_TestCase
def suite():
suite = unittest.TestSuite()
suite.addTest(first_TestCase())
suite.addTest(second_TestCase())
return suite
if __name__ == "__main__":
suite = unittest.defaultTestLoader.loadTestsFromTestCase(first_TestCase)
unittest.TextTestRunner().run(suite)
Answer: You're saying:
if __name__ == "__main__":
suite = unittest.defaultTestLoader.loadTestsFromTestCase(first_TestCase)
unittest.TextTestRunner().run(suite)
You're loading tests from only `first_TestCase` right before you run via the
`TextTestRunner`. You're never hitting that suite() function.
You should do:
if __name__ == "__main__":
unittest.TextTestRunner().run(suite())
Because you're not calling the suite() function in your current
implementation.
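If you do want to keep both test cases, a hedged sketch is to let the loader
collect tests from both classes and run the combined suite:
if __name__ == "__main__":
    loader = unittest.defaultTestLoader
    combined = unittest.TestSuite([
        loader.loadTestsFromTestCase(first_TestCase),
        loader.loadTestsFromTestCase(second_TestCase),
    ])
    unittest.TextTestRunner().run(combined)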
|
Finding network (external) IP addresses using Python
Question: I want to know my internet provider's (external) IP address (broadband or
something else) with Python.
There are multiple machines connected to that network. I tried different ways,
but I got only the local and public IP of my machine. How do I find my
external IP address through Python?
Thanks in advance.
Answer: Use this script:
import urllib, json
data = json.loads(urllib.urlopen("http://ip.jsontest.com/").read())
print data["ip"]
Without json:
import urllib, re
data = re.search('"([0-9.]*)"', urllib.urlopen("http://ip.jsontest.com/").read()).group(1)
print data
|
Deadlock with logging multiprocess/multithread python script
Question: I am facing a problem with collecting logs from the following script. Once I
set the `SLEEP_TIME` to too "small" a value, the LoggingThread threads
somehow block the logging module. The script freezes on a logging request in
the `action` function. If the `SLEEP_TIME` is about 0.1, the script collects
all log messages as I expect.
I tried to follow [this answer](http://stackoverflow.com/a/894284/533618) but
it does not solve my problem.
import multiprocessing
import threading
import logging
import time
SLEEP_TIME = 0.000001
logger = logging.getLogger()
ch = logging.StreamHandler()
ch.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(funcName)s(): %(message)s'))
ch.setLevel(logging.DEBUG)
logger.setLevel(logging.DEBUG)
logger.addHandler(ch)
class LoggingThread(threading.Thread):
def __init__(self):
threading.Thread.__init__(self)
def run(self):
while True:
logger.debug('LoggingThread: {}'.format(self))
time.sleep(SLEEP_TIME)
def action(i):
logger.debug('action: {}'.format(i))
def do_parallel_job():
processes = multiprocessing.cpu_count()
pool = multiprocessing.Pool(processes=processes)
for i in range(20):
pool.apply_async(action, args=(i,))
pool.close()
pool.join()
if __name__ == '__main__':
logger.debug('START')
#
# multithread part
#
for _ in range(10):
lt = LoggingThread()
lt.setDaemon(True)
lt.start()
#
# multiprocess part
#
do_parallel_job()
logger.debug('FINISH')
How to use logging module in multiprocess and multithread scripts?
Answer: This is probably [bug 6721](http://bugs.python.org/issue6721).
The problem is common in any situation where you have locks, threads and
forks. If thread 1 had a lock while thread 2 calls fork, in the forked
process, there will only be thread 2 and the lock will be held forever. In
your case, that is `logging.StreamHandler.lock`.
A fix can be found [here](https://github.com/google/python-
atfork/blob/master/atfork/stdlib_fixer.py) for the `logging` module. Note that
you need to take care of any other locks, too.
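One hedged workaround that follows from that explanation: make the fork happen
before any thread can hold the logging lock, i.e. create the pool before
starting the LoggingThreads (a sketch of the reordered `__main__` block):
if __name__ == '__main__':
    logger.debug('START')
    # Fork the worker processes first, while no other threads exist and
    # therefore no logging lock can be held at fork time.
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    for _ in range(10):
        lt = LoggingThread()
        lt.setDaemon(True)
        lt.start()
    for i in range(20):
        pool.apply_async(action, args=(i,))
    pool.close()
    pool.join()
    logger.debug('FINISH')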
|
Consume multiple queues in python / pika
Question: I am trying to create a consumer that would subscribe to multiple queues and
then process messages as they arrive.
The problem is that when there is some data already present in the first
queue, it consumes the first queue and never goes on to consume the second
queue. However, when the first queue is empty, it does go to the next queue,
and then consumes both queues simultaneously.
I had first implemented threading but want to steer clear of it, since the pika
library handles it for me without much complexity. Below is my code:
import pika
mq_connection = pika.BlockingConnection(pika.ConnectionParameters('x.x.x.x'))
mq_channel = mq_connection.channel()
mq_channel.basic_qos(prefetch_count=1)
def callback(ch, method, properties, body):
print body
mq_channel.basic_ack(delivery_tag=method.delivery_tag)
mq_channel.basic_consume(callback, queue='queue1', consumer_tag="ctag1.0")
mq_channel.basic_consume(callback, queue='queue2', consumer_tag="ctag2.0")
mq_channel.start_consuming()
Answer: The issue is most likely that the first call has issued a Basic.Consume and
has already received messages from a pre-populated queue before the second
call is issued. You might want to try setting the QoS prefetch count to 1,
which will limit RabbitMQ from sending you more than one message at a time.
|
Python: How to order a list of float
Question: Having a list of floats:
a = [465.12, 405.85714285714278, 407.4285714285715, 408.0, 408.1874999999996, 409.875, 411.0, 411.75000000000063, 413.43749999999972, 414.0, 414.66666666666652, 416.33333333333201, 418.0, 417.33333333333252, 419.666666666666, 420.0, 420.74999999999966, 422.70000000000067, 423.0, 423.35714285714295, 425.49999999999994, 426.0, 426.37500000000011, 428.02499999999992, 429.0, 430.03125000000006, 431.8125, 432.0, 432.1874999999996, 434.0625000000004, 435.0, 435.32432432432432, 437.43243243243262, 438.0, 438.39999999999992, 440.40000000000009, 441.0, 441.32432432432427, 442.9459459459456, 444.0, 444.42857142857122, 445.97142857142865, 447.0, 447.92857142857116, 450.0, 450.0, 450.60000000000002, 452.72727272727263, 453.0, 453.19148936170194, 454.78723404255322, 456.0, 456.5, 458.5, 459.0, 459.44999999999959, 461.40000000000015, 462.0, 462.50000000000023, 464.16666666666725]
I am trying to sort that list by descending values (notice I need `ind`
because later I will be sorting other lists, such as `sorted_b`, with it):
ind = sorted(range(len(a)), key = lambda i:a[i], reverse = True)
sorted_a = [x for (i, x) in sorted(zip(ind, a))]
# sorted_b = [x for (i, x) in sorted(zip(ind, b))]
But the code does not work as I would expect:
>>> sorted_a
[465.12, 464.16666666666725, 462.5000000000002, 462.0, 461.40000000000015, 459.4499999999996, 459.0, 458.5, 456.5, 456.0, 454.7872340425532, 453.19148936170194, 452.72727272727263, 453.0, 450.6, 450.0, 450.0, 447.92857142857116, 447.0, 445.97142857142865, 444.4285714285712, 444.0, 442.9459459459456, 441.32432432432427, 441.0, 440.4000000000001, 438.3999999999999, 438.0, 437.4324324324326, 435.3243243243243, 435.0, 434.0625000000004, 432.1874999999996, 432.0, 431.8125, 430.03125000000006, 429.0, 428.0249999999999, 426.3750000000001, 426.0, 425.49999999999994, 423.35714285714295, 423.0, 422.70000000000067, 420.0, 420.74999999999966, 419.666666666666, 417.3333333333325, 418.0, 416.333333333332, 414.6666666666665, 414.0, 413.4374999999997, 411.7500000000006, 411.0, 409.875, 408.1874999999996, 408.0, 407.4285714285715, 405.8571428571428]
Some values are not correctly sorted as we can easily see in the figure:
import matplotlib.pyplot as mpl
mpl.plot(sorted_a)
mpl.show()

What am I doing wrong?
Answer:
sorted(zip(ind, a))
does a lexicographic sort on the `(index, value)` pairs coming from `zip`;
since the integer indices are all unique by construction, the sort never even
looks at the values and just sorts the indices. What you meant is just
[a[i] for i in ind]
or better, if NumPy is an option:
a = np.array(a)
ind = np.argsort(a)
sorted_a = a[ind]
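The same index list then reorders any parallel list, which is what `ind` was
for in the question (a sketch, assuming `b` has the same length as `a`):
sorted_b = [b[i] for i in ind]
# or, with the NumPy variant above: sorted_b = np.array(b)[ind]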
|
Is it possible to install a django package without pip?
Question: I am trying to install django-dash to run one of the dashboard examples and
see what it's like.
I am on Windows running Python 2.7 and Django 1.6.5. I know the usual approach
is to download pip then install the package using pip. However, I am on a work
computer with no administrative rights so I can't access my Internet Option
Settings to find my proxy URL to follow the instructions below:
Proxy problems
If you work in an office, you might be behind a HTTP proxy. If so, set the environment variables http_proxy and https_proxy. Most Python applications (and other free software) respect these. Example syntax:
http://proxy_url:port
http://username:password@proxy_url:port
I had the same issue when trying to install Django but was able to get it to
work by moving the django directory under Python27/Lib/site-packages. Is there
something similar I can do with django-dash?
I also tried downloading the sources and running python setup.py install. I
received the following error:
File "setup.py", line 3, in <module> from setuptools import setup, find_packages ImportError: No module named setuptools
Link to django-dash: <http://django-dash.readthedocs.org/en/latest/>
Answer: Yes, you can probably get the sources from [The Python Package
Index](https://pypi.python.org/pypi/)
Once you have them, uncompress the files and install them manually (this will
depend on your OS).
On Linux systems:
python setup.py build
python setup.py install
Here's [the full reference](https://docs.python.org/2/install/)
**EDIT:** Note that when manually installing those packages, you must also
install any missing dependencies, e.g. setuptools in your case.
|
Manipulating curves with python
Question: So I am using this Python script to create the curves you see in the image.
Without going into detail about the real usage of this, my questions are:
- Is there a way to create the blue and yellow curves without the linear values (red curve), based on the blue line formula ( **sin(x*90*pi/180)** )?
- Also, do I really need to create a for loop and append values like I do in the script, or is there a cleaner way of creating the array of values in one line?
In the end I just want to make sure that with my limited knowledge of maths
and Python I am not making it more complex than it needs to be. Thank you.

### RED CURVE ###
linearValues = []
for i in range(0,91):
linearValues.append( float( float(i)/float(90) ) )
linearValues.reverse()
### BLUE CURVE ###
blueCurveValues = []
for i in range(0,91):
blueCurveValues.append( math.sin(linearValues[i]*90*math.pi/180) )
### YELLOW CURVE ###
yellowCurveValues = []
for i in range(0,91):
yellowCurveValues.append( float( linearValues[i]-(blueCurveValues[i]-linearValues[i]) ) )
Answer: In Python 2, you should cast `int`s to `float`s before dividing (if you don’t
want integer division). And then you only need the cast when you’re dividing
two `int`s, which only happens for `linearValues` (and you’d only need
`float(i)/90` or `i/float(90)`, not three casts). But in Python 3 (or using
`from __future__ import division`), this is done automatically, and you don’t
need any calls to `float`. I’ve removed them.
The Pythonic way for creating your lists is called “list comprehension”.
Translating exactly what you’ve written, this would be:
linearValues = [ i/90 for i in range(90,-1,-1) ]
blueCurveValues = [ math.sin(linearValues[i]*90*math.pi/180) for i in range(91) ]
yellowCurveValues = [ linearValues[i]-(blueCurveValues[i]-linearValues[i]) for i in range(91) ]
If you want to not have the linearValues calculated, the obvious thing to do
is simply replace `linearValues[i]` with `i/90` every time. This would
simplify your blue values formula, but only marginally so your yellow. At the
least, you may want to consider changing `90*math.pi/180` to `math.pi/2` (or
`(i/90*90*math.pi/180)` to `(i*math.pi/180)`). But this is still using the red
curve, just under the hood.
Another option for creating your lists would be `map`:
linearValues = map(lambda x:x/90,range(90,-1,-1))
blueCurveValues = map(lambda x:math.sin(x*90*math.pi/180),linearValues)
yellowCurveValues = map(lambda x,y:x-(y-x),linearValues,blueCurveValues)
`map` and `lambda` is generally harder to read than a list comprehension, but
a shorter formula is generally easier. So red should probably be a list
comprehension. Blue and yellow would depend on your preference.
|
Groupby like Python's itertools.groupby
Question: In Python I'm able to group consecutive elements with the same key by using
[`itertools.groupby`](https://docs.python.org/2/library/itertools.html#itertools.groupby):
>>> items = [(1, 2), (1, 5), (1, 3), (2, 9), (3, 7), (1, 5), (1, 4)]
>>> import itertools
>>> list(key for key,it in itertools.groupby(items, lambda tup: tup[0]))
[1, 2, 3, 1]
Scala has `groupBy` as well, but it produces different result - a map pointing
from key to all the values found in the iterable with the specified key (not
the consecutive runs with the same key):
scala> val items = List((1, 2), (1, 5), (1, 3), (2, 9), (3, 7), (1, 5), (1, 4))
items: List[(Int, Int)] = List((1,2), (1,5), (1,3), (2,9), (3,7), (1,5), (1,4))
scala> items.groupBy {case (key, value) => key}
res0: scala.collection.immutable.Map[Int,List[(Int, Int)]] = Map(2 -> List((2,9)), 1 -> List((1,2), (1,5), (1,3), (1,5), (1,4)), 3 -> List((3,7)))
What is the most elegant way of achieving the same as Python's
`itertools.groupby`?
Answer: If you just want to throw out sequential duplicates, you can do something like
this:
def unchain[A](items: Seq[A]) = if (items.isEmpty) items else {
items.head +: (items zip items.drop(1)).collect{ case (l,r) if r != l => r }
}
That is, just compare the list to a version of itself shifted by one place,
and only keep the items which are different. It's easy to add a `same: (A,
A) => Boolean` parameter to the method and use `!same(l,r)` if you
want custom behavior for what counts as the same (e.g. do it just by key).
If you want to keep the duplicates, you can use Scala's `groupBy` to get a
very compact (but inefficient) solution:
def groupSequential[A](items: Seq[A])(same: (A, A) => Boolean) = {
val ns = (items zip items.drop(1)).
scanLeft(0){ (n,cc) => if (same(cc._1, cc._2)) n+1 else n }
(ns zip items).groupBy(_._1).toSeq.sortBy(_._1).map(_._2)
}
|
Creating a large dictionary in pyspark
Question: I am trying to solve the following problem using pyspark. I have a file on
HDFS which is a dump of a lookup table, in this format:
key1, value1
key2, value2
...
I want to load this into python dictionary in pyspark and use it for some
other purpose. So I tried to do:
table = {}
def populateDict(line):
(k,v) = line.split(",", 1)
table[k] = v
kvfile = sc.textFile("pathtofile")
kvfile.foreach(populateDict)
I found that the table variable is not modified. So, is there a way to create a
large in-memory hashtable in Spark?
Answer: `foreach` is a distributed computation, so you can't expect it to modify a
data structure only visible in the driver. What you want is:
kv.map(line => line.split(",") match {
  case Array(k, v) => (k, v)
  case _ => ("", "")
}).collectAsMap()
This is in Scala, but you get the idea; the important function is
`collectAsMap()`, which returns a map to the driver.
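Since the question is in pyspark, here is a hedged Python equivalent, mirroring
the `populateDict` split from the question:
def to_pair(line):
    k, v = line.split(",", 1)
    return (k, v.strip())
table = kvfile.map(to_pair).collectAsMap()  # a plain dict on the driver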
If your data is very large you can use a PairRDD as a map. First map to
pairs:
kv.map(line => line.split(",") match {
  case Array(k, v) => (k, v)
  case _ => ("", "")
})
then you can access with `rdd.lookup("key")` which returns a sequence of
values associated with the key, though this definitely will not be as
efficient as other distributed KV stores, as spark isn't really built for
that.
|
Create MySQL unique ID while inserting list eliments in a DB
Question: I'm downloading feeds from a homepage and trying to write them into a
MySQL DB. The feeds are published as RSS. Everything works fine without
the creation of the unique ID, so the INSERT command must be wrong!
Here is my code:
import feedparser
import urllib2
import cookielib
import MySQLdb
import time
import datetime
from cookielib import CookieJar
from urllib2 import urlopen
db = MySQLdb.connect(host="localhost", # your host, usually localhost
user="root", # your username - SELECT * FROM mysql.user
passwd="****", # your password
db="sentiment_analysis") # name of the data base
cur = db.cursor()
cur.execute("DROP TABLE IF EXISTS feeddata_lse")
sql = """CREATE TABLE feeddata_lse (LSE_ID INT NOT NULL AUTO_INCREMENT,PRIMARY KEY(LSE_ID),Unix_Timesstamp integer, III_Timestamp varchar(255), Source varchar(255), Title varchar(255), Text TEXT, Link varchar(255), Epic varchar(255), CommentNr integer, Author varchar(255))"""
cur.execute(sql)
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(CookieJar()))
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
def feed_load(feed):
return [(time.time(),
entry.published,
'lse',
entry.title,
entry.summary,
entry.link,
(entry.link.split('?ShareTicker=')[1]).split('&post=')[0],
entry.link.split('&post=')[1],
entry.author)
for entry
in feedparser.parse(feed).entries]
def main():
FEED_URL = "http://www.lse.co.uk/chat/recent/"
feed = feed_load(FEED_URL)
print feed[1][1]
for item in feed:
cur.execute("""INSERT INTO feeddata_lse VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)""",item)
db.commit()
If I delete the `LSE_ID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(LSE_ID)` part
from the CREATE statement, everything works fine. My major problem with changing
the insert part is that an RSS feed contains a list of feeds and every feed is
another list.
**Thank you for helping me.** Please explain your arguments with code examples
because I am very inexperienced. If you have any questions feel free to ask!
Here is the error report:
Traceback (most recent call last):
File "C:/Python27/MySQL_finalversion/rss_db_connection_mysql_v1.py", line 54, in <module>
main()
File "C:/Python27/MySQL_finalversion/rss_db_connection_mysql_v1.py", line 48, in main
cur.execute("""INSERT INTO feeddata_lse VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)""",item)
File "C:\Python27\lib\site-packages\MySQLdb\cursors.py", line 205, in execute
self.errorhandler(self, exc, value)
File "C:\Python27\lib\site-packages\MySQLdb\connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
OperationalError: (1136, "Column count doesn't match value count at row 1")
Answer: The `LSE_ID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(LSE_ID)` line occurs in
the table creation statement, not the insert statement. With that extra ID
column in the table but no explicit column list in your INSERT, you will indeed
get a "Column count doesn't match value count" error, because you're supplying
one value fewer than the table defines.
I think you'll find you actually want to keep that primary key. Rather than
remove `LSE_ID INT NOT NULL AUTO_INCREMENT, PRIMARY KEY(LSE_ID)` from your
table creation statement, maybe try explicitly naming the columns in your
INSERT statement like so?
`INSERT INTO feeddata_lse(Unix_Timesstamp, III_Timestamp, Source, Title, Text,
Link, Epic, CommentNr, Author) VALUES (%s,%s,` ...
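For completeness, here is a sketch of how the full loop might then look (the
column names are taken from the CREATE TABLE statement above, and the nine
`%s` placeholders simply continue the pattern the line above truncates):
    sql = """INSERT INTO feeddata_lse
             (Unix_Timesstamp, III_Timestamp, Source, Title, Text,
              Link, Epic, CommentNr, Author)
             VALUES (%s,%s,%s,%s,%s,%s,%s,%s,%s)"""
    for item in feed:
        cur.execute(sql, item)  # LSE_ID is filled in automatically by AUTO_INCREMENT
    db.commit()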
|
How to re.sub() a optional matching group using regex in Python?
Question: My problem is quite simple.
I have a URL, sometimes it ends with specific characters. If they are present,
I would like to add them to my new URL.
test1 = "url#123"
test2 = "url"
r = re.sub(r"url(#[0-9]+)?", r"new_url\1", test1)
# Expected result: "new_url#123"
# Actual result: "new_url#123"
r = re.sub(r"url(#[0-9]+)?", r"new_url\1", test2)
# Expected result: "new_url"
# Actual result: "error: unmatched group"
Of course, I can not just do `re.sub("url", "new_url", test)`, because for
example it could be "url/123" and in this case I do not wish to make
amendments.
Answer: You cannot reference an optional matching group in the replacement
string: if the group did not participate in the match, `\1` is undefined and
`re` raises an error.
How about the following approach? Here the outer capturing group always
matches (possibly an empty string), because only the inner non-capturing group
is optional, so `\1` is always defined:
>>> import re
>>> test1 = "url#123"
>>> test2 = "url"
>>> re.sub(r"url((?:#[0-9]+)?)", r"new_url\1", test1)
'new_url#123'
>>> re.sub(r"url((?:#[0-9]+)?)", r"new_url\1", test2)
'new_url'
BTW, if you use [`regex`](https://pypi.python.org/pypi/regex), you can use
optional matching group:
>>> import regex
>>> test1 = "url#123"
>>> test2 = "url"
>>> regex.sub(r"url(#[0-9]+)?", r"new_url\1", test1)
'new_url#123'
>>> regex.sub(r"url(#[0-9]+)?", r"new_url\1", test2)
'new_url'
|
How can I create NEW Listboxes with a Button and collect all the data with a final submit Button in TKinter?
Question: Okay, so here is my problem. I am trying to create a very open-ended, user-
friendly GUI out of Tkinter. In short, I made a button with a function STAGE
that creates a Listbox with choosable indexes. Then I can press the SUBMIT
button, which will print the selected keys.
BUT if I press ADD STAGE to make another listbox of the same kind, I CANNOT go
back and edit or retrieve the selected values of the old listbox.
I understand that this is because the listboxes will have the same name,
so.....
from Tkinter import *
import tkMessageBox
class Insert_page(Frame):
global i0
global listbox
global Tech_option
Tech_option=['Rolled Plate','extrude bar','as-cast','wire','Other','USER Details']
i0=-1
def __init__(self,parent):
Frame.__init__(self,parent,background="white")
self.parent=parent
self.initUI()
def initUI(self):
self.parent.title("material Gui v2")
self.grid(row=1,column=1)
mButton=Button(self,text='Start Stages',command=self.stage).grid(row=2,column=5,sticky=W)
mButton3=Button(self,text='submit',command=self.submit).grid(row=9,column=1,sticky=W)
def stage(self): ######################HERE IS THE PROBLEM#########
global i0
i0+=1
stageFrame=Frame(self,bd=1,bg='red',relief=SUNKEN)
stageFrame.grid(row = 1+5*i0, column = 1, rowspan = 5, columnspan = 7, sticky = W+E+N+S)
stageVar = StringVar()
OPTIONS = [""]+range(0,10)
w = apply(OptionMenu, (stageFrame, stageVar) + tuple(OPTIONS))
w.grid(row=1,column=1)
stageLabel=Label(stageFrame,text='Stage')
stageLabel.grid(row=1+5*i0,column=0,sticky=W)
mButton=Button(stageFrame,text='add Stage',command=self.stage).grid(row=9,column=1,sticky=W)
listbox = Listbox(stageFrame,selectmode= MULTIPLE,exportselection=False) ######REPLACED CODE######
listbox.grid(row=3+5*i0,column=2)
for item in Tech_option :
listbox.insert(END, item)
mButton3=Button(self,text='submit',command=lambda: self.submit(listbox,Tech_option)).grid(row=9+5*i0,column=1,sticky=W)
######REPLACED CODE######
def submit(self,lb,option_list):
Tech_select=[]
for i in list(lb.curselection()):
Tech_select.append(Tech_option[int(i)])
print Tech_select
def main():
mGui= Tk()
mGui.geometry('800x600+200+200')
menubar=Menu(mGui)
filemenu=Menu(menubar,tearoff=0)
filemenu.add_command(label="New")
filemenu.add_command(label="Open")
filemenu.add_command(label="SaveAs...")
filemenu.add_command(label="Close")
menubar.add_cascade(label='File',menu=filemenu)
mGui.config(menu=menubar)
app=Insert_page(mGui)
mGui.mainloop()
main()
I replaced it with this...
exec ('listbox_%s = Listbox(stageFrame,selectmode= MULTIPLE,exportselection=False)' % (i0)) in globals(), locals()
exec ('listbox_%s.grid(row=3+5*i0,column=2)' % (i0)) in globals(), locals()
for item in Tech_option :
exec ('listbox_%s.insert(END, item)' % (i0)) in globals(), locals()
mButton3=Button(self,text='submit',command=lambda: self.submit(eval('listbox_%s'%(i0)),Tech_option)).grid(row=9+5*i0,column=1,sticky=W)
What it should do is create NEW listbox variables but all I get back when
pressing submit is.
Traceback (most recent call last):
File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 1413, in __call__
return self.func(*args)
File "/home/xxxxxx/pythons/xxxxxxxx.py", line 78, in <lambda>
mButton3=Button(self,text='submit',command=lambda: self.submit(eval('listbox_%s'%(i0)),Tech_option)).grid(row=9+5*i0,column=1,sticky=W)
File "<string>", line 1, in <module>
NameError: name 'listbox_1' is not defined
If someone could help me with this that would be great.
Answer: You need a list of listboxes to keep the listboxes from all stages.
Right now `listbox` only holds the listbox from the last created stage.
You will have to use lines like these (in different places):
self.all_listboxes = []
#---
self.all_listboxes.append( listbox )
#---
for one_list in self.all_listboxes:
for x in one_list.curselection():
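Putting those pieces together, here is a stripped-down, runnable sketch of the
idea (the stage frames and option menus from the question are left out to keep
it short):
    from Tkinter import *

    Tech_option = ['Rolled Plate', 'extrude bar', 'as-cast',
                   'wire', 'Other', 'USER Details']

    class Insert_page(Frame):
        def __init__(self, parent):
            Frame.__init__(self, parent, background="white")
            self.all_listboxes = []   # one entry per created stage
            self.grid(row=1, column=1)
            Button(self, text='add Stage', command=self.stage).grid(row=0, column=0)
            Button(self, text='submit', command=self.submit).grid(row=0, column=1)
        def stage(self):
            # every call creates a fresh Listbox and remembers it
            listbox = Listbox(self, selectmode=MULTIPLE, exportselection=False)
            listbox.grid(row=1, column=len(self.all_listboxes))
            for item in Tech_option:
                listbox.insert(END, item)
            self.all_listboxes.append(listbox)
        def submit(self):
            Tech_select = []
            for one_list in self.all_listboxes:
                for i in one_list.curselection():
                    Tech_select.append(Tech_option[int(i)])
            print Tech_select

    root = Tk()
    Insert_page(root)
    root.mainloop()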
* * *
BTW: use `self.listbox` (in `__init__`) in place of `global listbox`.
The same goes for `i0` and `Tech_option`.
We use `class` and `self` precisely so that we don't have to use `global`.
|
Searching for lines in a file and giving users flexible context
Question: The short(ish) version of this question is: When you open a file using a text
editor and search for a term you can, after locating the term, move around in
the file showing flexible context. So, as a direct example, if you have a Log
file you could open it using less mylog.log and search /SALLY. This would take
you to the first occurrence of 'SALLY' in the log file. Then, using normal
navigation keys(up and down arrow keys, pg up/dwn, etc) you can see what
happened before and after the word 'SALLY' appeared. I would like to leverage
a tool to give this same behavior but none of the tools I've looked into seem
quite right. It currently looks as though the only option is to write my own
methods for doing this, but surely that's not right.
Long version of this question:I have a bunch of log files scattered all over
the place. There is a part of my normal workflow that involves searching for
values in these log files and getting information from the context around
those values(it is worth noting that I cannot assume context is within a
specific set of lines nor do I know until I see it what the important context
is.) Manually going everywhere to get these log files is gross, I want to tell
my code 'look for SALLY' the code should give me a list of places(from a list
of known places where log files reside) where 'SALLY' appears. I then select
the logfile I want and it opens to the first occurrence of 'SALLY' with the
ability to navigate in the file from that point.
I know how to do most of this and, in fact, I can and have implemented
everything but the last bit. Using basic IO operations I can:
* Find and access all the potential log files
* Find log files with 'SALLY' in them
* Give the user a list with all the log files with 'SALLY' in them
* Given a selected logfile display the line(s) that contain 'SALLY'
What I can't do is figure out how to give the use the ability to smoothly
navigate the log file. Allowing them to move up and down the file so they can
see context. I could, and have, placed a call to 'less'(assuming it's on a
*nix system) and used it's search behavior but that's really not the behavior
I'd like. I'd like to do this all using Python.
I've looked at Elasticsearch (which seems to be way beyond what I want),
several log parsing libraries (parsing the logs is pretty straightforward),
and just tried to find others' solutions to a similar problem. I've been
unable to find anyone with a similar problem let alone a solution which, given
the python community, seems unlikely.
I'm currently considering implementing some sort of custom file viewer. This
seems silly. What can I leverage to implement this sort of functionality?
Answer: So, after playing around a bit I found something that worked pretty well for
me; hope it will work for you too. The basic idea is that we have some kind
of iterator (not a real one, but for lack of a better word I called it an
iterator) that keeps track of the range you are looking at and returns the
current section you are looking at.
It is just a quick and dirty solution, but I hope it does the job:
from subprocess import call
def main():
fp = open('path/to/your/file')
f = fp.readlines()
fp.close()
myIter = MyIterator(f,12)
# ^replace with the actual index the line you want to look at
print myIter.current()
cmd = raw_input()
#Input is no optimal, but this is beyond the scope of your question
while cmd != "quit":
call(["clear"])
if cmd == "u":
myIter.previous()
elif cmd == "d":
myIter.next()
for line in myIter.current():
print line
cmd = raw_input()
class MyIterator():
def __init__(self,f,index):
self.f = []
for line in f:
#Otherwise you would have a blank line between every line
self.f.append(line.replace('\n',''))
        self.lower_index = index
        self.upper_index = index + 1  # start by showing just the target line (f[lower:upper])
    def hasNext(self):
        return self.upper_index < len(self.f)
    def hasPrevious(self):
        return self.lower_index > 0
    def next(self):
        if self.hasNext():          # guard so the window never runs past the end
            self.upper_index += 1
        return self.current()
    def previous(self):
        if self.hasPrevious():      # guard so lower_index never goes negative
            self.lower_index -= 1
        return self.current()
def current(self):
return self.f[self.lower_index:self.upper_index]
if __name__ == "__main__":
main()
Note that with 'u' you go up one line and with 'd' you go down one line. The
problem is that you also have to press enter afterwards. Look
[here](http://stackoverflow.com/questions/510357/python-read-a-single-character-from-the-user)
for an implementation of getch() in Python.
|
Refining fnmatch pattern for more specific results
Question: Brand new to Python, coming from MATLAB. Essentially no UNIX or regexp
knowledge.
I have some data for processing sorted into folders. I'd like to get a list of
files to process, so I prompt for a top level folder and search everything in
that folder and subfolders for a match. Between the documentation for Python
and various things here on SO I've gotten most of the way there:
from Tkinter import Tk
import tkFileDialog
import os
import fnmatch
def recursivedecodeprompt():
root = Tk()
root.withdraw()
toplevel = tkFileDialog.askdirectory(title='Select Top Level Directory')
filelist = []
for root, dirnames, filenames in os.walk(toplevel):
for filename in fnmatch.filter(filenames, 'LOG.*'):
filelist.append(os.path.join(root, filename))
return filelist
My question is in relation to the pattern string. My folders could have just a
`LOG.001` file in them, or they could have `LOG.001`, `LOG.001.csv`,
`LOG.001.gps`, etc., which my current pattern also matches. I thought I could
be clever and use `'LOG.???'` but it returns the same list.
Is there a simple way to have `fnmatch` ignore files with anything appended
after the 3 digit ID? Is there a more appropriate tool for the job?
Semi-related side question: Is there a way to allow the
`tkFileDialog.askdirectory()` dialog to be resizable?
EDIT: To clarify, the numeric part of the filename can and will change, so I
can have `LOG.001`, `LOG.002`, `LOG.003`, etc. I wish it was a less annoying
naming convention but that's how it comes out of the device.
Answer: Using `re` (note the escaped dot and the `$` anchor, so nothing may follow the
digits):
    import re
    filenames = ["LOG.001","LOG.002","LOG.001.csv","LOG.003.csv","LOG.1002"]
    print [x for x in filenames if re.search(r"LOG\.\d+$", x)]
    ['LOG.001', 'LOG.002', 'LOG.1002']
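Folded back into the original `os.walk()` loop, that might look like this (a
sketch reusing the names from the question; the `find_logs` name is mine):
    import os
    import re

    pattern = re.compile(r'LOG\.\d+$')   # "LOG." then digits, then end of name

    def find_logs(toplevel):
        filelist = []
        for root, dirnames, filenames in os.walk(toplevel):
            for filename in filenames:
                if pattern.search(filename):
                    filelist.append(os.path.join(root, filename))
        return filelist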
|
Where is the correct place to enable CORS?
Question: I'm using [Spyne](http://spyne.io) (the example ["hello world"
code](https://github.com/arskom/spyne/blob/master/examples/helloworld_http.py))
to make a webservice that produces some `json` data and then I'm trying to
consume this data in javascript code in client's browser.
When I go to the address `http://localhost:8000/say_hello?name=Dave&times=3` I
get the following output:
`["Hello, Dave", "Hello, Dave", "Hello, Dave"]`
That's why I think there is nothing to do with the server (it works as
expected).
I use the following code to get the data from this webservice:
<html>
<head>
<meta charset="utf-8">
</head>
<body>
<script src="jquery-1.11.1.min.js" ></script>
<script>
    var request_url = 'http://localhost:8000/say_hello?name=Dave&times=3';
$.ajax( {
type:'Get',
url:request_url,
dataType: "jsonp",
crossDomain : true,
success:function(data) {
alert(data);
},
error: function()
{
alert("fail");
},
});
</script>
</body>
</html>
Then I get the "fail" popup.
As I searched the net, all I could find was a setting to be made on the server
side as follows:
Add following header in the server:
Header set Access-Control-Allow-Origin *
1. If any header setting must be changed on server side, how can I do that?
2. If there is no need to change any server side settings, what should be the correct client side code?
## **EDIT**
Here is the last version of both python and javascript code:
HTML:
<html>
<head>
<meta charset="utf-8">
</head>
<body>
<script src="jquery-1.11.1.min.js" ></script>
<script>
    var request_url = 'http://localhost:8000/say_hello?name=Dave&times=3';
var jdata = 'none'
$.ajax( {
type:'Get',
url:request_url,
dataType: "html",
crossDomain : true,
success:function(data) {
alert(data);
},
error: function()
{
alert("fail");
},
});
</script>
</body>
</html>
Python:
#!/usr/bin/env python
# encoding: utf8
'''
This is a simple HelloWorld example to show the basics of writing a Http api
using Spyne. Here's a sample:
    $ curl http://localhost:8000/say_hello?name=Dave\&times=3
["Hello, Dave", "Hello, Dave", "Hello, Dave"]
'''
import logging
from spyne.application import Application
from spyne.decorator import srpc
from spyne.protocol.json import JsonDocument
from spyne.protocol.http import HttpRpc
from spyne.service import ServiceBase
from spyne.model.complex import Iterable
from spyne.model.primitive import UnsignedInteger
from spyne.model.primitive import String
from spyne.server.wsgi import WsgiApplication
class CorsService(ServiceBase):
origin = '*'
def _on_method_return_object(ctx):
ctx.transport.resp_headers['Access-Control-Allow-Origin'] = \
ctx.descriptor.service_class.origin
CorsService.event_manager.add_listener('method_return_object',
_on_method_return_object)
class HelloWorldService(CorsService):
@srpc(String, UnsignedInteger, _returns=Iterable(String))
def say_hello(name, times):
for i in range(times):
#yield '%s("Hello, %s")' % (callback, name)
yield {"name": 'Hello (%d): %s' % (i, name), "address": "%d + %d" % (i, i)}
if __name__=='__main__':
from wsgiref.simple_server import make_server
logging.basicConfig(level=logging.DEBUG)
application = Application([HelloWorldService], 'spyne.examples.hello.http',
in_protocol=HttpRpc(validator='soft'),
out_protocol=JsonDocument(ignore_wrappers=True),
)
wsgi_application = WsgiApplication(application)
server = make_server('0.0.0.0', 8000, wsgi_application)
logging.info("listening to http://127.0.0.1:8000")
logging.info("wsdl is at: http://localhost:8000/?wsdl")
server.serve_forever()
Answer: You need add this as the first line of your service implementation:
ctx.transport.resp_headers['Access-Control-Allow-Origin'] = '*'
However, that can get very annoying very fast, so here's a way to properly
implement it:
class CorsService(ServiceBase):
origin = '*'
def _on_method_return_object(ctx):
ctx.transport.resp_headers['Access-Control-Allow-Origin'] = \
ctx.descriptor.service_class.origin
CorsService.event_manager.add_listener('method_return_object',
_on_method_return_object)
So instead of using `ServiceBase`, you can now use `CorsService` as parent
class to your services to get the CORS header automatically.
Also note that it's more secure to set the header value only to the domain
that hosts the Spyne service instead of using a wildcard.
|
Aggregate CSV file with python
Question: My cross-tabulated CSV file looks like this:
Country,Age,All,M,F
UK,Under65,30987,15000,15987
UK,65andOver,12345,6345,6000
Germany,Under65,32646,15642,17004
Germany,65andOver,14747,7192,7555
France,Under65,31587,16286,15301
France,65andOver,13741,6187,7554
I would like to amend it so that it looks like this:
Country,Under65_All,Under65_M,Under65_F,65andOver_All,65andOver_M,65andOver_F
UK,30987,15000,15987,12345,6345,6000
Germany,32646,15642,17004,14747,7192,7555
France,31587,16286,15301,13741,6187,7554
Each country now sits on one row and the number of columns has been expanded
(no cross-tab).
I'm trying to do this in Python 3. Excel VBA is out because I was hitting the
row limit with some of the larger CSV files.
I suppose what I'm trying to do is an "aggregate" with an additional "group
by" step. I've got as far as reading in the CSV file and calculating various
values which may prove useful: number of unique countries(3), number of unique
age groups(2),names and number of columns required for final output file(7).
I'm looking to make the code as flexible as possible so that it can read in a
file with x number of unique countries and y number of unique age groupings
and z number of column variables. And the final file would contain a header
row with y*z+1 columns and below this x number of rows.
Hope this makes sense, any help/pointers would be appreciated.
Answer: I'm going to propose a [`pandas`](http://pandas.pydata.org) solution because
otherwise you're reinventing the wheel, but there's no way around the fact
that it takes a bit of getting used to. The upside is that once you've picked
it up operations like this become relatively straightforward.
import pandas as pd
df = pd.read_csv("c.dat")
df = pd.melt(df, id_vars=["Country", "Age"], var_name="Other")
df["Column"] = df.pop("Age") + "_" + df.pop("Other")
df = df.pivot(index="Country", columns="Column")
df.columns = df.columns.droplevel(0)
df.to_csv("out.csv")
produces
>>> !cat out.csv
Country,65andOver_All,65andOver_F,65andOver_M,Under65_All,Under65_F,Under65_M
France,13741,7554,6187,31587,15301,16286
Germany,14747,7555,7192,32646,17004,15642
UK,12345,6000,6345,30987,15987,15000
(where we could sort the columns if we really wanted to.)
* * *
There's no point in copying out an entire tutorial here -- although you can
read the reshaping tutorial [here](http://pandas.pydata.org/pandas-docs/stable/reshaping.html)
-- but I can at least give an overview of how this works.
Step by step. First, we read the csv file into a `DataFrame` (kind of like an
excel sheet):
>>> df = pd.read_csv("c.dat")
>>> df
Country Age All M F
0 UK Under65 30987 15000 15987
1 UK 65andOver 12345 6345 6000
2 Germany Under65 32646 15642 17004
3 Germany 65andOver 14747 7192 7555
4 France Under65 31587 16286 15301
5 France 65andOver 13741 6187 7554
where you can access the frame by rows, columns, etc. For your purposes we can
melt (unpivot) this data:
>>> df = pd.melt(df, id_vars=["Country", "Age"], var_name="Other")
>>> df
Country Age Other value
0 UK Under65 All 30987
1 UK 65andOver All 12345
2 Germany Under65 All 32646
3 Germany 65andOver All 14747
4 France Under65 All 31587
5 France 65andOver All 13741
6 UK Under65 M 15000
7 UK 65andOver M 6345
8 Germany Under65 M 15642
9 Germany 65andOver M 7192
10 France Under65 M 16286
11 France 65andOver M 6187
12 UK Under65 F 15987
13 UK 65andOver F 6000
14 Germany Under65 F 17004
15 Germany 65andOver F 7555
16 France Under65 F 15301
17 France 65andOver F 7554
So now we have the row labels we want (the countries) and information about
the other columns, whatever they are, and the values. You wanted the "Age" and
whatever's in "Other" combined, so:
>>> df["Column"] = df.pop("Age") + "_" + df.pop("Other")
>>> df
Country value Column
0 UK 30987 Under65_All
1 UK 12345 65andOver_All
2 Germany 32646 Under65_All
3 Germany 14747 65andOver_All
4 France 31587 Under65_All
5 France 13741 65andOver_All
6 UK 15000 Under65_M
7 UK 6345 65andOver_M
8 Germany 15642 Under65_M
9 Germany 7192 65andOver_M
10 France 16286 Under65_M
11 France 6187 65andOver_M
12 UK 15987 Under65_F
13 UK 6000 65andOver_F
14 Germany 17004 Under65_F
15 Germany 7555 65andOver_F
16 France 15301 Under65_F
17 France 7554 65andOver_F
and now all the hard work is done. We simply have to call `pivot` to turn it:
>>> df = df.pivot(index="Country", columns="Column")
>>> df
value \
Column 65andOver_All 65andOver_F 65andOver_M Under65_All Under65_F
Country
France 13741 7554 6187 31587 15301
Germany 14747 7555 7192 32646 17004
UK 12345 6000 6345 30987 15987
Column Under65_M
Country
France 16286
Germany 15642
UK 15000
(Looks better on the screen.) It's given us the extra "value" level, which you
don't want, so let's drop that:
>>> df.columns = df.columns.droplevel(0)
>>> df
Column 65andOver_All 65andOver_F 65andOver_M Under65_All Under65_F \
Country
France 13741 7554 6187 31587 15301
Germany 14747 7555 7192 32646 17004
UK 12345 6000 6345 30987 15987
Column Under65_M
Country
France 16286
Germany 15642
UK 15000
And then we write it to csv:
>>> df.to_csv("out.csv")
|
Downloading all links on a webpage using Mechanize in Python
Question: I was trying to follow the following thread which seemed to answer my
question. It serves as a great example that shows how to download all links on
a webpage using Mechanize:
[Download all the links(related documents) on a webpage using
Python](http://stackoverflow.com/questions/5974595/download-all-the-
linksrelated-documents-on-a-webpage-using-python)
I followed the code that was posted (i.e.):
import mechanize
from time import sleep
#Make a Browser (think of this as chrome or firefox etc)
br = mechanize.Browser()
#visit http://stockrt.github.com/p/emulating-a-browser-in-python-with-mechanize/
#for more ways to set up your br browser object e.g. so it look like mozilla
#and if you need to fill out forms with passwords.
# Open your site
br.open('http://pypi.python.org/pypi/xlwt')
f=open("source.html","w")
f.write(br.response().read()) #can be helpful for debugging maybe
filetypes=[".zip",".exe",".tar.gz"] #you will need to do some kind of pattern matching on your files
myfiles=[]
for l in br.links(): #you can also iterate through br.forms() to print forms on the page!
for t in filetypes:
if t in str(l): #check if this link has the file extension we want (you may choose to use reg expressions or something)
myfiles.append(l)
def downloadlink(l):
f=open(l.text,"w") #perhaps you should ensure that file doesn't already exist.
br.click_link(l)
f.write(br.response().read())
print l.text," has been downloaded"
#br.back()
for l in myfiles:
sleep(1) #throttle so you dont hammer the site
downloadlink(l)
I only changed:
f=open(l.text,"w") #perhaps you should open in a better way & ensure that file doesn't already exist.
To:
f=open('C:\\l.text',"w") #perhaps you should open in a better way & ensure that file doesn't already exist.
That made the code work for me; otherwise it was giving me an error. When I
run the code, I get the following output:
Download> xlwt-0.7.5.tar.gz has been downloaded
xlwt-0.7.5.tar.gz has been downloaded
So it worked. But I have no idea where this file was downloaded to? Any ideas?
I have searched my C drive and could not find it.
If the code is run as:
f=open(l.text,"w")
It raises the following exception:
Traceback (most recent call last):
File "C:\Python27\mech.py", line 33, in <module>
downloadlink(l)
File "C:\Python27\mech.py", line 25, in downloadlink
f=open(l.text,"w") #perhaps you should ensure that file doesn't already exist.
IOError: [Errno 22] invalid mode ('w') or filename: 'Download> <span style="font-size: 75%">xlwt-0.7.5.tar.gz<span>'
Answer: The Python code you quoted uses the `text` attribute of the link `l` (hence
the expression `l.text`) as the filename. Consequently (since each link should
hopefully have a different `text` attribute value) the code should produce a
number of files, one for each link.
Your change replaces a variable expression (one which has a different value
for each link) with a constant. So each file is being written to the `C:\`
directory as `l.text`. Consequently, when you look at this file you should see
the contents of the last link on the page.
(By the way, not your fault I know, but `l` is a very bad name for a variable
due to its potential for confusion with the digit one).
The correct way to run this program is inside an empty directory (otherwise
the individual files will be hard to track down) on which you have write
permission. If any of the filenames contain slashes then you will have to take
special pains to either create the necessary directory structure or transform
them somehow into acceptable Windows filenames.
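One hypothetical way to do that transformation (the `safe_filename` helper is
my own, not part of the original code):
    import re

    def safe_filename(text):
        # replace the characters Windows forbids in filenames with underscores
        return re.sub(r'[\\/:*?"<>|]', '_', text)

    # then, inside downloadlink(l):
    f = open(safe_filename(l.text), "w")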
You may also wish to replace the detection code with something a little more
colloquial.
for l in br.links(): #you can also iterate through br.forms() to print forms on the page!
s = str(l)
if any(s.endswith(t) for t in filetypes):
myfiles.append(l)
|
How do I use colorbar with hist2d in matplotlib.pyplot?
Question: I want to do something similar to
<http://matplotlib.org/examples/pylab_examples/hist2d_log_demo.html> but I've
read that using pylab for code other than in python interactive mode is bad
practice so I'd like to do this with matplotlib.pyplot. However, I can't
figure out how to make this code work using pyplot. Using, pylab, the example
given is
from matplotlib.colors import LogNorm
from pylab import *
#normal distribution center at x=0 and y=5
x = randn(100000)
y = randn(100000)+5
hist2d(x, y, bins=40, norm=LogNorm())
colorbar()
show()
I've tried a lot like
import matplotlib.pyplot as plt
fig = plt.figure()
ax1 = fig.add_subplot(1,1,1)
h1 = ax1.hist2d([1,2],[3,4])
and from here I've tried everything from `plt.colorbar(h1)`,
`plt.colorbar(ax1)`, `plt.colorbar(fig)`, `ax.colorbar()`, etc., and I can't
get anything to work.
In general, I'm honestly not really clear on the relationship between pylab
and pyplot, even after reading <http://matplotlib.org/faq/usage_faq.html>. For
example, `show()` in pylab seems to become `plt.show()` in pyplot, but for
some reason `colorbar()` doesn't simply become `plt.colorbar()`?
Answer: This should do it:
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
from numpy.random import randn
#normal distribution center at x=0 and y=5
x = randn(100000)
y = randn(100000)+5
H, xedges, yedges, img = plt.hist2d(x, y, norm=LogNorm())
extent = [yedges[0], yedges[-1], xedges[0], xedges[-1]]
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
im = ax.imshow(H, cmap=plt.cm.jet, extent=extent, norm=LogNorm())
fig.colorbar(im, ax=ax)
plt.show()
Notice how colorbar is attached to "fig", not "sub_plot". There are some other
examples of this
[here](http://stackoverflow.com/questions/13784201/matplotlib-2-subplots-1-colorbar).
Notice how you also need to generate a ScalarMappable with `imshow`, as
explained in the API [here](http://matplotlib.org/api/colorbar_api.html).
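As an alternative approach: `plt.hist2d` already returns the image it draws
(the `img` in the answer's code above), and since that object is itself a
ScalarMappable, you can hand it straight to `colorbar` and skip the `imshow`
round-trip entirely:
    from matplotlib.colors import LogNorm
    import matplotlib.pyplot as plt
    from numpy.random import randn

    x = randn(100000)
    y = randn(100000) + 5
    counts, xedges, yedges, img = plt.hist2d(x, y, bins=40, norm=LogNorm())
    plt.colorbar(img)   # img is the mappable returned by hist2d
    plt.show()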
|
In vs re.search python
Question: I have a python 2.7 script which usually runs for hours, and I am now trying
to optimize it. It has a lot of searches for strings, which represents the
heavy part of computation. At the moment I am currently using
`re.search('stringToFind', haystack)` to find substrings in longer strings. I
am considering rewriting all the expressions that do not actually contain
regular expressions to use `in` instead. Almost all strings that are searched
for are plain strings, i.e. they contain no regex metacharacters.
I know that `in` uses the `__contains__` method to check substrings, but I
have no clue how re.search works for patterns that are not really regular
expressions. How does re.search actually work? Would it really be more
efficient to use `in` rather than re.search for substring-in-string comparison?
EDIT: To clarify I can give an example of currently running code:
if re.search('extern', variable):
# Do something...
Could be replaced by:
if 'extern' in variable:
# Do something...
Answer: As others have commented, executing a regular expression search is inherently
more complex than a simple substring search. Here's a very rough benchmark:
import timeit
import re
s = '0' * 1000 + 'foo' + '0' * 1000
rgx = re.compile(r'foo')
setup = 'from __main__ import s, rgx'
print timeit.timeit('"foo" in s', setup = setup) # 0.745168924332
print timeit.timeit('rgx.search(s).group(0)', setup = setup) # 1.14249396324
But if your program runs for hours, I doubt that the source of the problem
lies in such areas.
|
Combine effects to menu in pygame
Question: Hey guys, I am developing a game with pygame. The idea of the game is that
when a user clicks on the `start` button on the menu (which appears first,
before starting the game), he must see the two balls bouncing on the pygame
window.
For this I have two python files.
# bounceball.py
This python file makes the two balls bounce on the pygame window, which is made
by me. The code of bounceball.py is [here](http://pastebin.com/M4VA6dH9).
(Sorry for pasting it on pastebin, since it is a long piece of code.)
# menu.py
This python file creates a menu which i have found from the internet and this
also works fine.The code of menu.py is [here](http://pastebin.com/UuTYjFpN)
But the problem is that when a user clicks on the `Start` button from the menu,
it doesn't do anything. What I want is that when a user clicks on the `start`
button from the menu, he must see the balls bouncing, as coded in
`bounceball.py`, on the pygame window. How can I link my `bounceball.py` to
the menu?
I have tried to achieve this by many methods but nothing helped me.
Hope you guys can help me achieve this. Any help would be appreciated.
Thanks in advance.
Answer: It could be done better but at least it works.
**menu.py**
#!/usr/bin/python
import sys
import pygame
import bounceball
#----------------------------------------------------------------------
WHITE = (255, 255, 255)
RED = (255, 0, 0)
BLACK = ( 0, 0, 0)
#----------------------------------------------------------------------
class MenuItem(pygame.font.Font):
def __init__(self, text, font=None, font_size=30,
font_color=WHITE, (pos_x, pos_y)=(0, 0)):
pygame.font.Font.__init__(self, font, font_size)
self.text = text
self.font_size = font_size
self.font_color = font_color
self.label = self.render(self.text, 1, self.font_color)
self.width = self.label.get_rect().width
self.height = self.label.get_rect().height
self.dimensions = (self.width, self.height)
self.pos_x = pos_x
self.pos_y = pos_y
self.position = pos_x, pos_y
def is_mouse_selection(self, (posx, posy)):
if (posx >= self.pos_x and posx <= self.pos_x + self.width) and \
(posy >= self.pos_y and posy <= self.pos_y + self.height):
return True
return False
def set_position(self, x, y):
self.position = (x, y)
self.pos_x = x
self.pos_y = y
def set_font_color(self, rgb_tuple):
self.font_color = rgb_tuple
self.label = self.render(self.text, 1, self.font_color)
#----------------------------------------------------------------------
class GameMenu():
def __init__(self, screen, items, funcs, bg_color=BLACK, font=None, font_size=30,
font_color=WHITE):
self.screen = screen
self.scr_width = self.screen.get_rect().width
self.scr_height = self.screen.get_rect().height
self.bg_color = bg_color
self.clock = pygame.time.Clock()
self.funcs = funcs
self.items = []
for index, item in enumerate(items):
menu_item = MenuItem(item, font, font_size, font_color)
# t_h: total height of text block
t_h = len(items) * menu_item.height
pos_x = (self.scr_width / 2) - (menu_item.width / 2)
pos_y = (self.scr_height / 2) - (t_h / 2) + (index * menu_item.height)
menu_item.set_position(pos_x, pos_y)
self.items.append(menu_item)
self.mouse_is_visible = True
self.cur_item = None
def set_mouse_visibility(self):
if self.mouse_is_visible:
pygame.mouse.set_visible(True)
else:
pygame.mouse.set_visible(False)
def set_keyboard_selection(self, key):
"""
Marks the MenuItem chosen via up and down keys.
"""
for item in self.items:
# Return all to neutral
item.set_italic(False)
item.set_font_color(WHITE)
if self.cur_item is None:
self.cur_item = 0
else:
# Find the chosen item
if key == pygame.K_UP and \
self.cur_item > 0:
self.cur_item -= 1
elif key == pygame.K_UP and \
self.cur_item == 0:
self.cur_item = len(self.items) - 1
elif key == pygame.K_DOWN and \
self.cur_item < len(self.items) - 1:
self.cur_item += 1
elif key == pygame.K_DOWN and \
self.cur_item == len(self.items) - 1:
self.cur_item = 0
self.items[self.cur_item].set_italic(True)
self.items[self.cur_item].set_font_color(RED)
# Finally check if Enter or Space is pressed
if key == pygame.K_SPACE or key == pygame.K_RETURN:
text = self.items[self.cur_item].text
self.funcs[text]()
def set_mouse_selection(self, item, mpos):
"""Marks the MenuItem the mouse cursor hovers on."""
if item.is_mouse_selection(mpos):
item.set_font_color(RED)
item.set_italic(True)
else:
item.set_font_color(WHITE)
item.set_italic(False)
def run(self):
mainloop = True
while mainloop:
# Limit frame speed to 50 FPS
self.clock.tick(50)
mpos = pygame.mouse.get_pos()
for event in pygame.event.get():
if event.type == pygame.QUIT:
mainloop = False
if event.type == pygame.KEYDOWN:
self.mouse_is_visible = False
self.set_keyboard_selection(event.key)
if event.type == pygame.MOUSEBUTTONDOWN:
for item in self.items:
if item.is_mouse_selection(mpos):
self.funcs[item.text]()
if pygame.mouse.get_rel() != (0, 0):
self.mouse_is_visible = True
self.cur_item = None
self.set_mouse_visibility()
# Redraw the background
self.screen.fill(self.bg_color)
for item in self.items:
if self.mouse_is_visible:
self.set_mouse_selection(item, mpos)
self.screen.blit(item.label, item.position)
pygame.display.flip()
#----------------------------------------------------------------------
def run_bounceball():
print "run bounceball"
bounceball.run(screen)
#----------------------------------------------------------------------
if __name__ == "__main__":
pygame.init()
# Creating the screen
screen = pygame.display.set_mode((300, 300), 0, 32)
menu_items = ('Start', 'Quit')
funcs = {'Start': run_bounceball,
'Quit': sys.exit}
pygame.display.set_caption('Game Menu')
gm = GameMenu(screen, funcs.keys(), funcs)
gm.run()
**bounceball.py**
import pygame
import math
from itertools import cycle
#----------------------------------------------------------------------
# some simple vector helper functions, stolen from http://stackoverflow.com/a/4114962/142637
def magnitude(v):
return math.sqrt(sum(v[i]*v[i] for i in range(len(v))))
def add(u, v):
return [ u[i]+v[i] for i in range(len(u)) ]
def sub(u, v):
return [ u[i]-v[i] for i in range(len(u)) ]
def dot(u, v):
return sum(u[i]*v[i] for i in range(len(u)))
def normalize(v):
vmag = magnitude(v)
return [ v[i]/vmag for i in range(len(v)) ]
#----------------------------------------------------------------------
class Ball(object):
def __init__(self, path, screen):
self.x, self.y = (0, 0)
self.speed = 2.5
self.color = (200, 200, 200)
self.path = cycle(path)
self.set_target(next(self.path))
self.screen = screen
@property
def pos(self):
return self.x, self.y
# for drawing, we need the position as tuple of ints
# so lets create a helper property
@property
def int_pos(self):
return map(int, self.pos)
@property
def target(self):
return self.t_x, self.t_y
@property
def int_target(self):
return map(int, self.target)
def next_target(self):
self.set_target(self.pos)
self.set_target(next(self.path))
def set_target(self, pos):
self.t_x, self.t_y = pos
def update(self):
# if we won't move, don't calculate new vectors
if self.int_pos == self.int_target:
return self.next_target()
target_vector = sub(self.target, self.pos)
# a threshold to stop moving if the distance is to small.
# it prevents a 'flickering' between two points
if magnitude(target_vector) < 2:
return self.next_target()
# apply the balls's speed to the vector
move_vector = [c * self.speed for c in normalize(target_vector)]
# update position
self.x, self.y = add(self.pos, move_vector)
def draw(self):
pygame.draw.circle(self.screen, self.color, self.int_pos, 4)
#----------------------------------------------------------------------
def run(screen):
#pygame.init() # no need it - inited in menu.py
#screen = pygame.display.set_mode((300, 300)) # no need it - created in menu.py
clock = pygame.time.Clock()
quit = False
path = [(26, 43),
(105, 110),
(45, 225),
(145, 295),
(266, 211),
(178, 134),
(250, 56),
(147, 12)]
path2 = [(26, 43),
(105, 10),
(45, 125),
(150, 134),
(150, 26),
(107, 12)]
ball = Ball(path, screen)
ball.speed = 1.9
ball2 = Ball(path2, screen)
ball2.color = (200, 200, 0)
balls = [ball, ball2]
while not quit:
quit = pygame.event.get(pygame.QUIT)
pygame.event.poll()
map(Ball.update, balls)
screen.fill((0, 0, 0))
map(Ball.draw, balls)
pygame.display.flip()
clock.tick(60)
|
How to retrieve useful result from subprocess?
Question: # Summary
With which statement can :
b'10.0.3.15'
be converted into :
'10.0.3.15'
[What does a b prefix before a python string
mean?](http://stackoverflow.com/questions/2592764/what-does-a-b-prefix-before-
a-python-string-mean) explains the meaning of 'b' but not answer this
question.
A working answer to this question will of course be accepted as the solution.
# Full
From shell can find my IP address with the complex statement :
ip addr show eth0 | awk '$1 == "inet" {gsub(/\/.*$/, "", $2); print $2}'
(source : [Bash parse ifconfig to get only my IP
address](http://stackoverflow.com/questions/23934425/bash-parse-ifconfig-to-
get-only-my-ip-address/23934702#23934702) )
Which returns for me: `10.0.3.15`
That is Ok.
Now I want this statement, or any statement, to be executed in Python. Because
the statement has all kind of specific tokens, I just write to file, and read
the file into variable `cmd`
I checked the content of `cmd` is identical to the above statement. Now I
execute this command in Python :
p = subprocess.Popen(cmd, shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
b = p.communicate()
The result is a tuple, I only need the content:
value = b[0]
The value is `b'10.0.3.15\n'`
In the [Format Specification Mini-Language](https://docs.python.org/3/library/string.html)
I see 'b' is a type which stands for binary.
I only need `'10.0.3.15'`. How to achieve that?
Answer: Use `out, _ = p.communicate()`. Then your output is saved in the variable
`out`. Note that under Python 3 (which the `b` prefix suggests you are using),
`communicate()` returns bytes, so decode it to get a string; to remove the
trailing `\n` you can use `strip()`:
    import subprocess
    device = 'em1'
    cmd = "ip addr show %s | awk '$1 == \"inet\" {gsub(/\/.*$/, \"\", $2); print $2}'" % device
    p = subprocess.Popen(cmd, shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT)
    out, _ = p.communicate()
    print(out.decode('utf-8').strip())
But you should have a look at the Python module
[netifaces](https://pypi.python.org/pypi/netifaces/).
import netifaces
device = 'em1'
print(netifaces.ifaddresses(device)[netifaces.AF_INET])
This will output something like this:
[{'broadcast': '192.168.23.255', 'netmask': '255.255.255.0', 'addr': '192.168.23.110'}]
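To pull just the address string out of that structure (a small addition of
mine, using the same `device` variable as above):
    addr = netifaces.ifaddresses(device)[netifaces.AF_INET][0]['addr']
    print(addr)   # e.g. '192.168.23.110'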
|
Why my hadoop output is many parts of file?
Question: I tried to count the frequency of word, and write the file:
`mapper.py`:
#!/usr/bin/env python
import sys
# input comes from STDIN (standard input)
for line in sys.stdin:
# remove leading and trailing whitespace
line = line.strip()
# split the line into words
words = line.split()
# increase counters
for word in words:
# write the results to STDOUT (standard output);
# what we output here will be the input for the
# Reduce step, i.e. the input for reducer.py
#
# tab-delimited; the trivial word count is 1
print '%s\t%s' % (word, 1)
use the following hadoop streaming command:
    hadoop streaming \
        -input "/app/hadoop_learn_test/book.txt" \
        -mapper "python mapper.py" \
        -reducer "cat" \
        -output "/app/hadoop_learn_test/book_out" \
        -file "mapper.py"
The `book.txt` is:
foo foo quux labs foo bar quux
but I got 400 files named like `part-00000.gz`, and when I use `hadoop dfs -cat
path` to get the contents, I got nothing.
Why can't I get the result?
I used `cat book.txt | python mapper.py | sort` in the local terminal and got the following:
bar 1
foo 1
foo 1
foo 1
labs 1
quux 1
quux 1
Answer: I think you need to use `Counter`:
    #!/usr/bin/env python
    import sys
    from collections import Counter
# input comes from STDIN (standard input)
for line in sys.stdin:
# remove leading and trailing whitespace
line = line.strip()
# split the line into words
words = line.split()
# increase counters
wordcount=Counter(words)
for word,count in wordcount.items():
print '%s\t%s' % (word, count)
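As a quick sanity check, you can reuse the same local pipeline from the
question; with the sample `book.txt`, the fixed mapper should print:
    $ cat book.txt | python mapper.py | sort
    bar	1
    foo	3
    labs	1
    quux	2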
|
How to create proper xml attribute with python parser
Question:
from xml.dom.minidom import parse
dom = parse('abc.xml')
for node in dom.getElementsBy('addr'):
print node.toxml()
What do I need to add to print only the addr attribute values (IP addresses
from an nmap XML file)?
<host starttime="1404053959" endtime="1404054014"><status state="up" reason="reset" reason_ttl="254"/> <address addr="ip" addrtype="ipv4"/>
Answer: I add `root` tag and `127.0.0.1`, `192.168.0.1` to show something more:
from xml.dom.minidom import parse, parseString
data = '''<root>
<host starttime="1404053959" endtime="1404054014">
<status state="up" reason="reset" reason_ttl="254"/>
<address addr="ip" addrtype="ipv4">127.0.0.1</address>
</host>
<host starttime="1404053959" endtime="1404054014">
<status state="up" reason="reset" reason_ttl="254"/>
<address addr="ip" addrtype="ipv4">192.168.0.1</address>
</host>
</root>'''
#dom = parse('abc.xml')
dom = parseString(data)
for node in dom.getElementsByTagName('address'):
print ' xml:', node.toxml()
print 'type:', node.getAttribute('addrtype')
print 'addr:', node.childNodes[0].data
result:
xml: <address addr="ip" addrtype="ipv4">127.0.0.1</address>
type: ipv4
addr: 127.0.0.1
xml: <address addr="ip" addrtype="ipv4">192.168.0.1</address>
type: ipv4
addr: 192.168.0.1
|
Error copying files
Question: I'm trying to write a short Python script that will copy all files from a
directory with a certain extension and place them in a different folder. Here
is the script:
import os, shutil
source = "C:\\TCR_Calgary\\projects\\"
destination = "C:\\TCR_Calgary\\reference\\my_code\\city_of_calgary_scripts\\"
extension = ".py"
for root, dirs, files in os.walk(source):
for file in files:
if file.lower().endswith(extension):
shutil.copy2(file, destination)
This is the exception I get:
Traceback (most recent call last):
File "C:/TCR_Calgary/reference/my_code/city_of_calgary_scripts/python_script_copier.py", line 13, in <module>
shutil.copy2(file, destination)
File "C:\Program Files (x86)\Python26\lib\shutil.py", line 99, in copy2
copyfile(src, dst)
File "C:\Program Files (x86)\Python26\lib\shutil.py", line 47, in copyfile
raise Error, "`%s` and `%s` are the same file" % (src, dst)
shutil.Error: `AnnoMover.py` and `C:\TCR_Calgary\reference\my_code\city_of_calgary_scripts\AnnoMover.py` are the same file
I'm confused because AnnoMover.py is not in the destination folder. In my
mind, the original file and its copy would be "the same" although their paths
would not. There's clearly something I'm not understanding here. Any help is
much appreciated!
Version: Python 2.6; IDE: PyCharm Community Edition 3.4; OS: Windows 7
Answer: The exception message could be a little clearer. `file` is just a bare
filename, so `shutil.copy2` resolves it relative to the current working
directory -- which here is the destination folder itself -- and then sees
source and destination as the same absolute path, even though the file isn't
actually there yet. The last line should be
`shutil.copy2(os.path.join(root, file), destination)` so that the source is
fully qualified (note `root` rather than `source`, since `os.walk` may find
files in subdirectories).
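Applied to the question's script, the corrected loop would look like this:
    import os, shutil

    source = "C:\\TCR_Calgary\\projects\\"
    destination = "C:\\TCR_Calgary\\reference\\my_code\\city_of_calgary_scripts\\"
    extension = ".py"

    for root, dirs, files in os.walk(source):
        for file in files:
            if file.lower().endswith(extension):
                # qualify the source with the directory os.walk found it in
                shutil.copy2(os.path.join(root, file), destination)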
|
Are "Field" and "Fields" reserved words in Django or Python?
Question: I'm developing a django project for agriculture. I want to name an app
"fields" and inside the app "fields" I want to name a model "Field" (referring
to a farmer field).
I tried it and it works, so I assume that "fields" and "Field" are not
reserved words in Django or Python. But I was just wondering if using these
words can be problematic in the future or it's just fine?
And the general question: **Is there any way to check if a word is reserved in
Django or Python?**
Answer: Strictly speaking, "reserved word" is the wrong term here. Reserved words are
properties of a programming language, and Django is not a language.
There is a class called `Field` in Django in the `django.db.models.fields`
package, and another (similar but different) class with the same name in the
`django.forms.fields` package.
However, these `Field` classes are such low-level classes that it's unlikely
you will ever need to import them, so they won't interfere with your own
`Field` class.
In any case, and as you can see, within Django itself the same class can exist
in multiple packages with the same name. It's fine, because the package name
serves as a namespace, so you can always fully qualify a class by using its
complete package name.
Another useful thing in Python is importing classes with a different name:
from django.db.models.fields import Field as DjangoField
help(DjangoField)
Similarly, importing packages with a different name:
from django.db.models import fields as djangofields
help(djangofields.Field)
This way you can always avoid collisions in class and package names.
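As for the general question about Python itself: the standard library's
`keyword` module can tell you whether a word is actually reserved in the
language (a small addition, not covered by the answer above):
    import keyword

    print keyword.iskeyword('field')   # False - not reserved, safe to use
    print keyword.iskeyword('class')   # True - reserved
    print keyword.kwlist               # the full list of Python's reserved words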
|
Python struct.error: unpack requires a string argument of length 2
Question: I have written some data using C++ in byte format. I am now trying to read
that data again using Python, but I run into an error;
Traceback (most recent call last):
File "binary-reader.py", line 61, in <module>
interaction_types.append(struct.unpack('<H',fp.read(2))[0]);
struct.error: unpack requires a string argument of length 2
I don't really understand since it looks like I am giving a string of length
2, right? Furthermore, I do the same thing at line 32
There is [another question like mine](http://stackoverflow.com/questions/17579444/struct-error-unpack-requires-a-string-argument-of-length-2),
but it is unanswered and targeted at Python 3.
Here is my code
import sys
import struct
import os
print "Arguments : "
print str(sys.argv)
#N = #isects
# 2 2 3*4 2 3*4*N 4N 4N 3*4N 2N 2N
#imageX,imageY,throughput,#isects,isect_positions,primitive_ids,shape_ids,spectra,interaction_types,light_ids
file_path = str(sys.argv[1]);
byte_count = 0;
line_number = 1;
fp = open(file_path, "rb");
output = open('output.txt',"w");
file_size = os.path.getsize(file_path)
print "(input) file size = " + str(file_size);
while byte_count < file_size:
print "Line number = " + str(line_number)
print "Current byte count = " + str(byte_count)
# Do stuff with byte.
x = struct.unpack('<H', fp.read(2))[0]
y = struct.unpack('<H', fp.read(2))[0]
throughputOne = struct.unpack('<f', fp.read(4))[0]
throughputTwo = struct.unpack('<f', fp.read(4))[0]
throughputThree = struct.unpack('<f', fp.read(4))[0]
nrIsects = struct.unpack('<H',fp.read(2))[0]
# print "x = " + str(x)
# print "y = " + str(y)
# print "throughputOne = " + str(throughputOne)
# print "throughputTwo = " + str(throughputTwo)
# print "throughputThree = " + str(throughputThree)
print "nrIsects = " + str(nrIsects)
isect_positions = []
for i in range(nrIsects*3):
value = struct.unpack('<f',fp.read(4))[0]
isect_positions.append(value);
primitive_ids = []
for i in range(nrIsects):
value = struct.unpack('<I',fp.read(4))[0]
primitive_ids.append(value);
shape_ids = []
for i in range(nrIsects):
shape_ids.append(struct.unpack('<I',fp.read(4))[0]);
spectra = []
for i in range(nrIsects*3):
spectra.append(struct.unpack('<f',fp.read(4))[0]);
interaction_types = []
for i in range(nrIsects):
interaction_types.append(struct.unpack('<H',fp.read(2))[0]);
light_ids = []
for i in range(nrIsects):
light_ids.append(struct.unpack('<H',fp.read(2))[0]);
output_vars = [x,y,throughputOne,throughputTwo,throughputThree,nrIsects]
line_string = ""
for i in range(len(output_vars)):
output.write(str(output_vars[i]))
line_string += str(output_vars[i])
if i is not len(output_vars) - 1:
output.write(',')
line_string += ','
print line_string
#Update counters
byte_count += 18 + 36*nrIsects
line_number+=1
# raw_input('Press any key to continue.');
# print byte
And here is a link to an input file to use. You can run the code by passing a
command-line argument specifying the path of the binary file. I have also
written the same data out in ASCII, which reads
0,0,[0.127076,0.127076,0.127076],1,{[0.144978,-0.294863,2.991749]},{3917},{3916},{[1.375603,1.375603,1.375603]},{5},{0}
<https://www.dropbox.com/s/tu1anqo5k0ygtd6/writetest.bin>
EDIT: The layout of my file can be found as a comment in the code
Answer: 50 bytes have already been read before the `fp.read(2)` that raises the
error -- which is the entire file. Thus, `fp.read(2)` returns an empty string
instead of the two bytes requested, and `struct.unpack` raises an exception:
In [83]: 2+2+4+4+4+2+12+4+4+12
Out[83]: 50
* * *
x = struct.unpack('<H', fp.read(2))[0] # 2 bytes read
y = struct.unpack('<H', fp.read(2))[0] # 2 bytes
throughputOne = struct.unpack('<f', fp.read(4))[0] # 4 bytes
throughputTwo = struct.unpack('<f', fp.read(4))[0] # 4 bytes
throughputThree = struct.unpack('<f', fp.read(4))[0] # 4 bytes
nrIsects = struct.unpack('<H',fp.read(2))[0] # 2 bytes
print "nrIsects = " + str(nrIsects)
isect_positions = []
for i in range(nrIsects*3):
value = struct.unpack('<f',fp.read(4))[0] # 12 bytes
isect_positions.append(value)
primitive_ids = []
for i in range(nrIsects):
value = struct.unpack('<I',fp.read(4))[0] # 4 bytes
primitive_ids.append(value)
shape_ids = []
for i in range(nrIsects):
shape_ids.append(struct.unpack('<I',fp.read(4))[0]) # 4 bytes
spectra = []
for i in range(nrIsects*3):
spectra.append(struct.unpack('<f',fp.read(4))[0]) # 12 bytes
interaction_types = []
for i in range(nrIsects):
interaction_types.append(struct.unpack('<H', fp.read(2))[0]) # error!
* * *
|
What would happen if all languages began doing strings in UTF-8?
Question: Unicode is awesome. There aren't too many people who disagree with this.
Apart from Python 3 ([which did it
wrong](http://lucumr.pocoo.org/2014/5/12/everything-about-unicode/)), what
would be the negative impact (if any) of the next major version of all
programming languages defaulting to using Unicode/UTF-8 strings?
I'm talking specifically about the many cases which require workarounds to get
UTF-8. For example, running a Java program:
java ... -Dfile.encoding=UTF-8
Or working with strings in Python 2:
# -*- coding: utf8 -*-
...
unicode_string = u"This is Unicode Text"
Certain MySQL databases default to a different character encoding by default:
[server]
collation_server=utf8_unicode_ci
character_set_server=utf8
etc. etc.
Why don't we all just default to using Unicode/UTF-8 and allow users to use
the workarounds if they need support for other character encodings? What would
be the problems with doing this?
Answer: UTF-8 is a variable-length encoding, which is slower to parse than fixed-
length encodings. Example: the 7th character of an ASCII string is always the
7th byte. We don't know exactly where the 7th character of a UTF-8 string is
in memory without starting from the beginning of the string and parsing the
whole thing. For long strings this can be expensive.
So for string operations where finding specific substrings based on
character/byte position is important (SQL databases are a great example of
this) other encodings can often be preferable.
Additionally, UTF-8 encodes non-English text (outside the ASCII range) as two
or more bytes, while a lot of character encodings (koi8-r for Russian, as an
example) encode all of the commonly used characters of other languages in a
single byte, which is handy for mediums such as email where all the data must
be sent over the network.
GB2312 is the primary Chinese character set; it encodes the commonly used
Chinese characters in two bytes each, while all of these characters would be 3
bytes in UTF-8 (a 50% increase).
UTF-8 is amazing for compatibility, but in terms of how it represents
characters in memory, other encodings outcompete it in a lot of scenarios.
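A concrete illustration of that size difference, in Python 2 (my example;
u'\u6c49' is a common Chinese character):
    s = u'\u6c49'
    print len(s.encode('utf-8'))    # 3 bytes in UTF-8
    print len(s.encode('gb2312'))   # 2 bytes in GB2312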
|
Python read Router IP from file and telnet into
Question: I would like to read a router IP from a text file, then telnet into it:
import sys
import telnetlib
f = open("C:\\MyIP.txt","r")
line = f.readline()
user = "username"
password = "password"
for line in f:
tn = telnetlib.Telnet(line)
tn.read_until("Login: ")
tn.write(user + "\n")
if password:
tn.read_until("Password: ")
tn.write(password + "\n")
f.close()
However, the above code, which tries to read the IP directly from a file, is
not working.
It only works like this:
import getpass
import sys
import telnetlib
hostserver = "192.168.2.1"
newline = "\n"
username = "username" + newline
password = "password" + newline
telnet = telnetlib.Telnet(hostserver)
telnet.read_until("Login: ")
telnet.write(username+ "\n")
telnet.read_until("Password: ")
telnet.write(password+ "\n")
while 1:
command = raw_input("> ")
telnet.write(command+ "\n")
if command == "exit":
break
telnet.read_all()
Any help is appreciated
Answer: You have the statement
line = f.readline()
and then later, you have:
for line in f:
The `for` loop will loop over each line of the file _that has not already been
read_. It doesn't really look like you want to loop at all. Remove the `for`
statement since your `line` variable has already been set above.
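One more detail worth mentioning: `readline()` keeps the trailing newline,
which can confuse the connection attempt, so it is safer to strip it before
handing the value to `Telnet` (a small sketch):
    f = open("C:\\MyIP.txt", "r")
    line = f.readline().strip()   # drop the trailing '\n'
    f.close()

    tn = telnetlib.Telnet(line)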
|
Python object validation thanks to a Schema
Question: I want to validate a Python object against a schema. For this I found the
[`schema` framework](https://pypi.python.org/pypi/schema).
I would like to validate a numeric string:
a = {
'phone_number': '12233'
}
Do you know how I can validate this string with a regex?
So far, I only know how to perform a plain string validation:
Schema(str).validate('12')
Answer: `Schema` will call any callables; simply provide a function that uses a
regular expression:
import re
pattern = re.compile(r'^12\d+$')
Schema(And(str, lambda x: pattern.match(x) is not None))
Demo:
>>> import re
>>> from schema import Schema, And
>>> pattern = re.compile(r'^12\d+$')
>>> s = Schema(And(str, lambda x: pattern.match(x) is not None))
>>> s.validate('123234')
'123234'
>>> s.validate('42')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mj/Development/venvs/stackoverflow-2.7/lib/python2.7/site-packages/schema.py", line 153, in validate
raise SchemaError([None] + x.autos, [e] + x.errors)
schema.SchemaError: <lambda>('42') should evaluate to True
|
How to install 3. Party library into anaconda if it is not in conda list
Question: I have a general problem about module installation. Thank you very much.
The situation is the following:
1. I have a compressed Python package *****.tar.gz
2. This package cannot be found in `conda list`
3. If I uncompress it and run 'python setup.py install', the package gets installed into the system Python (namely /usr/local/lib/python2.7/site-packages) rather than the Anaconda distribution. This causes a problem: if I start Python in the Anaconda distribution, the installed package cannot be accessed.
So is there any direct solution to this problem?
Secondly, I am confused about the difference between ~anaconda/env and
virtualenv.
thank you very much
Answer: You should use pip to do this: `pip install <path_to_file>`.
Alternatively, if your package is available on PyPI, you can just do a `pip
install <packagename>` (do a `pip search <packagename>` to see if it's on
PyPI). For instance, I wanted to install pymongo - it was easy: `pip install pymongo`.
Caveat:
I installed Anaconda as root into /opt/anaconda - so I had to `sudo su`, then
add /opt/anaconda/bin to the start of PATH, then run `pip install pymongo`, so
that it would install the package into the Anaconda dist and not the existing
Ubuntu Python dist.
|
Calling R script from python using rpy2
Question: I'm very new to rpy2, as well as R.
I basically have a R script, script.R, which contains functions, such as
rfunc(folder). It is located in the same directory as my python script. I want
to call it from Python, and then launch one of its functions. I do not need
any output from this R function. I know it must be very basic, but I cannot
find examples of R script-calling python codes. What I am currently doing, in
Python:
import rpy2.robjects as robjects
def pyFunction(folder):
#do python stuff
r=robjects.r
r[r.source("script.R")]
r["rfunc(folder)"]
#do python stuff
pyFunction(folder)
I am getting an error on the line with source:
    r[r.source("script.R")]
      File "/usr/lib/python2.7/dist-packages/rpy2/robjects/__init__.py", line 226, in __getitem__
        res = _globalenv.get(item)
    TypeError: argument 1 must be string, not ListVector
I don't quite understand how the argument I give it is not a string, and I
guess the same problem will then happen on the next line, with _folder_ being
a Python string and not an R object.
So, how can I properly call my script?
Answer: `source` is a `r` function, which runs a `r` source file. Therefore in `rpy2`,
we have two ways to call it, either:
r['source']("script.R")
or
r.source("script.R")
`r[r.source("script.R")]` is the wrong way to do it.
The same idea applies to the next line: look the function up by name first, then call it.
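Putting both fixes together, a minimal sketch (assuming `rfunc` takes a single string argument; rpy2 converts the Python string to an R character vector for you):
    import rpy2.robjects as robjects
    def pyFunction(folder):
        # do python stuff
        r = robjects.r
        r.source("script.R")   # runs the script, defining rfunc in R's global environment
        r['rfunc'](folder)     # look rfunc up by name, then call it
        # do python stuff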
|
How to search for a substring's value inside of a string?
Question: I am trying to find the value of a property, inside of a string. In `<img
src="invalidURL.com">` if the property/sub-string were to be `src`, I would
want to receive `invalidURL.com`.
In Violent Python it uses the line `imgSrc = imgTag['src']`, which does not
give an error, and the script runs fine. (Full script can be found in
[this Github repo.](https://github.com/shadow-box/Violent-Python-
Examples/blob/master/Chapter-3/4-exifFetch.py)) However, when I tried to write
my own script it gives a compiler error.
htmlImgTags = ['<img src="/images/icons/product/chrome-48.png"/>', '<img src="asdasd">']
for tag in htmlImgTags:
print tag
tagSrc = tag['src']
print tagSrc
The error complains about using a string as an index instead of an int.
<img src="/images/icons/product/chrome-48.png"/>
Traceback (most recent call last):
File "looking in an array.py", line 4, in <module>
tagSrc = tag['src']
TypeError: string indices must be integers, not str
What exactly is going wrong in my code, but not in the book's?
Answer: The code you link to appears to be using a library called [Beautiful
Soup](http://www.crummy.com/software/BeautifulSoup/) to parse the HTML. That
loop is over a list of tag objects created by Beautiful Soup, not a list of
the actual tag text.
Here's an example using [Beautiful Soup
v3](http://www.crummy.com/software/BeautifulSoup/bs3/documentation.html):
from BeautifulSoup import BeautifulSoup
html_doc = """
<img src="/images/icons/product/chrome-48.png"/>
<img src="/images/icons/product/chrome-49.png"/>
"""
soup = BeautifulSoup(html_doc)
html_img_tags = soup.findAll("img")
for tag in html_img_tags:
print tag['src']
The output is:
/images/icons/product/chrome-48.png
/images/icons/product/chrome-49.png
Note that `tag` is **not** just a string, it's a BeautifulSoup tag object:
>>> type(html_img_tags[0])
<class 'BeautifulSoup.Tag'>
If you print it, it will display as a nicely formatted tag:
>>> print html_img_tags[0]
<img src="/images/icons/product/chrome-48.png" />
But that's only because BeautifulSoup makes sure that the object converts
itself to that string for easy inspection.
* * *
Note: if you happen to have BS4 on your machine instead, the import line
should be:
from bs4 import BeautifulSoup
...and the `findAll()` function is now `find_all()`.
|
Python exception because of invalid environment setup?
Question: Disclaimer: .net developer trying to setup python environment.
I have `.py` files trying to call the following line:
from paramiko import SSHClient, SSHConfig
However I get error saying
ImportError: No module named paramiko
I believe this is because I have missed to install something? I start python
command prompt and type
import paramiko without any problem.
Anyone familiar with this issue? (Using `python 2.7.6`)
Edit: The Python command prompt receives and understands the command "import
paramiko..." but my cygwin terminal does not. I believe cygwin has an invalid
Python install path.
Answer: If you are getting `ImportError: No module named paramiko`, please install it.
If you have already installed it, check the path of the module and make sure the
same path exists in the `PYTHONPATH` environment variable.
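A quick diagnostic sketch to see which interpreter and search path each environment (IDLE, cygwin, the plain prompt) is actually using:
    import sys
    print sys.executable   # the interpreter actually running this script
    print sys.path         # the directories searched for modules like paramiko
If the cygwin run shows a different executable, or a path that lacks the site-packages directory containing paramiko, that confirms the two environments use different Python installs.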
|
Python 2: Why is this bytestring order switched in struct.pack() and struct.unpack() methods?
Question: In Python 2.7.5, I have the hex value 0xbba1, and I want to convert it to bytestring
format.
>>> bytetoint = lambda bytestr: struct.unpack('H', bytestr)[0]
>>> hextobyte = lambda hexnum: struct.pack('H', hexnum)
>>> hextobyte(0xbba1)
'\xa1\xbb'
>>> hex(bytetoint('\xa1\xbb'))
'0xbba1'
Why are the first byte '\xa1' and the second byte '\xbb' switched in place?
How can I get the right bytestring from hex, or vice versa?
e.g. 0xbba1 -> '\xbb\xa1', and '\xbb\xa1' -> 0xbba1
Answer: It's a [little-endian/big-endian](http://en.wikipedia.org/wiki/Endianness)
thing. You can't really say the bytes are switched, because nothing in the
`int` definition says what order the bytes representing it are laid out in.
The result you have is a perfectly usable little-endian representation. If you
want to force big-endian, which may look better to a human reader, you can
specify the byte order with `>`:
>>> import struct
>>> struct.pack('>H', 0xbba1)
'\xbb\xa1'
>>> hex(struct.unpack('>H', '\xbb\xa1')[0])
'0xbba1'
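For completeness, `<` forces little-endian explicitly (and `=` uses native byte order with standard sizes); on a little-endian machine this gives the result you originally saw:
    >>> struct.pack('<H', 0xbba1)   # explicit little-endian
    '\xa1\xbb'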
|
Python numpy easier syntax?
Question: I am new to numpy, and I'm already a little sick of its syntax.
Something which could be written like this in Octave/matlab
1/(2*m) * (X * theta - y)' * (X*theta -y)
Becomes this in numpy
np.true_divide(((X.dot(theta)-y).transpose()).dot((X.dot(theta)-y)),2*m)
This is much harder for me to write and debug. Is there any better way to
write matrix operations like above so as to make life easier?
Answer: You can make some simplifications. By using `from __future__ import division`
at the beginning of your program, all division will automatically be "true"
division, so you won't need to use `true_divide`. (In Python 3 you don't even
need to do this, since true division is automatically the default.) Also, you
can use `.T` instead of `.transpose()`. Your code then becomes
1/(2*m) * ((X.dot(theta) - y).T).dot((X.dot(theta) - y))
which is a bit better.
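You can also factor out the repeated subexpression, which both reads better and avoids computing `X.dot(theta) - y` twice (a sketch, keeping the names from your snippet and assuming the `__future__` division import mentioned above):
    d = X.dot(theta) - y
    cost = 1/(2*m) * d.T.dot(d)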
In Python 3.5, a new matrix multiplication operator `@` is being added for
basically this exact reason. This is not out yet, but when it is (and when
numpy is updated to make use of it), your code will become very similar to the
Octave version:
1/(2*m) * (X@theta - y).T @ (X@theta - y)
|
Determining the type of number stored as a string
Question: How do I tell whether a number, stored as a string, is an int or a float?
For example:
def isint(x):
if f(x):
print 'this is an int'
else:
print 'this is a float'
>>> x = '3'
>>> isint(x)
>>> this is an int
>>> x = '3.14159'
>>> isint(x)
>>> this is a float
**What is the required`f(x)` function?**
One solution is to convert x to a float, find `r = x % 1`, and then determine
whether `r == 0`. But is there anything built into Python that does this for
me more neatly?
Answer: You can use
[`ast.literal_eval`](https://docs.python.org/2/library/ast.html#ast.literal_eval):
>>> from ast import literal_eval
>>> type(literal_eval('1.01'))
<type 'float'>
>>> type(literal_eval('1'))
<type 'int'>
>>> type(literal_eval('1+0j'))
<type 'complex'>
If you want to do some sanity check as well, in case user might also pass a
non-numeric string:
import numbers
from ast import literal_eval
def number_type(x):
try:
n = literal_eval(x)
if isinstance(n, numbers.Number):
print type(n).__name__
else:
print 'not a number'
except (ValueError, SyntaxError):
print 'not a number'
|
Is there an easy way of calculating number of IPs from 2 given IP addresses?
Question: I want to calculate the number of IP addresses spanned by 2 given IP
addresses.
Example: 127.0.1.10 and 127.0.0.200 span 67 IP addresses.
What's an easy way of doing this?
I've seen other examples, but I'm looking for a Python example. Thanks.
Answer:
>>> import socket
>>> def iptoint(ip):
... return int(socket.inet_aton(ip).encode('hex'), 16)
...
>>> iptoint('127.0.0.200')
2130706632
>>> iptoint('127.0.1.10') - iptoint('127.0.0.200')
66
>>> def ipdistance(ip1, ip2):
... return abs(iptoint(ip1) - iptoint(ip2)) + 1
...
>>> ipdistance('127.0.1.10', '127.0.0.200')
67
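A hedged alternative for Python 3.3+ (or Python 2 with the `ipaddress` backport from PyPI), letting the stdlib do the parsing:
    import ipaddress
    def ipdistance(ip1, ip2):
        a = int(ipaddress.ip_address(ip1))
        b = int(ipaddress.ip_address(ip2))
        return abs(a - b) + 1
    print(ipdistance('127.0.1.10', '127.0.0.200'))  # 67
Note that the Python 2 backport expects unicode strings, so pass u'...' literals there.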
|
For Loop vs While Loop differences PYTHON
Question: I am a beginner to Python, and my professor does a poor job of explaining the
differences between loops. I wanted to ask this community about the differences
between for loops and while loops. I looked at various resources, but what I am
confused about is how for loops work without a counter to bring them back to the
beginning the way while loops have. I have the following code written with for() loops,
but my goal is to change it to a while loop. The code itself is not what is important;
the question is how to change this code to a while() loop.
for r_ow in range(Height_box):
for c_col in range(Width_box):
gridpoint = box * row + column
if gridpoint in gridList:
box[r_ow][c_col] = "Inside Box"
else:
outsideBox = (CurrentBox(boxWidth,Boxedge))
ctr = 0
for Box_edges in Box:
if eval(Box(boxWidth,boxHeight,box_edges,box_point)):
if box_edge in gridList:
ctr += 1
Bow[r_ow][c_col] = str(int(box[r_ow][c_col]) + ctr)
So far, I have gotten to a point where I think this is what it is supposed to look
like, but now I get stuck in an infinite loop in CMD.
row = 0
while r_ow < boxHeight:
column = 0
while c_col < boxWidth:
gridpoint = box * row + column
if gridpoint in gridList:
box[r_ow][c_ol] = "Inside Box"
else:
outsideBox = (CurrentBox(boxWidth,Boxedge))
ctr = 0
while ctr < Boxes:
if eval(Box(boxWidth,boxHeight,box_edges,box_point)):
if box_edge in gridList:
ctr += 1
Bow[r_ow][c_col] = str(int(box[r_ow][c_col]) + ctr)
column += 1
row += 1
return box
Can anyone give some advice as to how to convert the first code into while()
loops?
Thank you very much!
Answer: Your column increment statement (column += 1) is outside the corresponding
while loop.
Same for the row increment statement.
Right now, column will increment after the loop is done. But, you need it to
increment after every iteration of the loop.
Add a level of indentation to both these statements and you are good to go.
            Bow[r_ow][c_col] = str(int(box[r_ow][c_col]) + ctr)
            column += 1
        row += 1
    return box
Also, there is a problem with your innermost while loop: `Boxes` appears to be a list
(or something similar) while `ctr` is an int, so you can't compare the two. Also,
`ctr` is not the variable iterating through the boxes; it's a counter. Do
something like this:
i = 0
ctr = 0
while i < len(box):
if eval(Box(boxWidth,boxHeight,box_edges,box_point)):
if box_edge in gridList:
ctr += 1
i+=1
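More generally, the mechanical translation between the two loop forms looks like this (a minimal sketch with hypothetical names; note the increment must sit inside the while body, which is exactly what went wrong in your version):
    n = 5
    # for-loop form: the counter is managed for you
    for i in range(n):
        print i
    # equivalent while-loop form: you manage the counter yourself
    i = 0
    while i < n:
        print i
        i += 1   # misplacing this outside the loop body gives an infinite loop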
|
python scrapy not working - "ImportError: No module named settings"
Question: the scrapy lib is in `/usr/lib/python2.7/site-packages/scrapy`
my project catalog:
.../projects/scrapy
.../projects/parser_module
....../projects/parser_module/parser
....../projects/parser_module/parser
........../projects/parser_module/parser/spiders/.....
........../projects/parser_module/parser/<files etc>
....../projects/parser_module/scrapy.cfg
In the directory `.../projects/parser_module/` I run the command `scrapy crawl
parser` and get this result:
Traceback (most recent call last):
File "/usr/bin/scrapy", line 4, in <module>
execute()
File "/usr/lib/python2.7/site-packages/scrapy/cmdline.py", line 109, in execute
settings = get_project_settings()
File "/usr/lib/python2.7/site-packages/scrapy/utils/project.py", line 60, in get_project_settings
settings.setmodule(settings_module_path, priority='project')
File "/usr/lib/python2.7/site-packages/scrapy/settings/__init__.py", line 108, in setmodule
module = import_module(module)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
ImportError: No module named settings
Can you please tell me how to solve the problem?
Answer: To avoid this kind of issue, create your project folder with `scrapy startproject
parser_module`.
Now, to fix your current issue, either start fresh or create a dummy project
with `scrapy startproject` and copy settings.py from it. The next error may then
point you to other files missing from this folder.
This is the typical structure of a scrapy project.
.
├── scrapy.cfg
└── project_name
├── __init__.py
├── items.py
├── settings.py
└── spiders
└── __init__.py
spider.py
|
How to create an empty row in GTKTreeView?
Question: How can I create an empty row (2 floats) in GTKTreeView? I set up this:
self.liststore = Gtk.ListStore(float, float)
self.treeview = Gtk.TreeView(model=self.liststore)
and then add 3 rows:
self.liststore.append([2.35, 2.40])
self.liststore.append([3.45, 4.70])
self.liststore.append()
but the 3rd row is filled with 0.00000 in each column when I execute the program. How
can I create a truly empty row? Or how can I display an empty row in
GtkTreeView?

I am working on a program that plots points and adds a new row each time the
empty row is filled:
#!/usr/bin/python3
from gi.repository import Gtk
from matplotlib.figure import Figure
from matplotlib.backends.backend_gtk3agg import FigureCanvasGTK3Agg as FigureCanvas
class DataBase():
def __init__(self):
self.window = Gtk.Window()
self.window.set_default_size(800, 500)
self.box = Gtk.Box()
self.window.add(self.box)
self.fig = Figure(figsize=(10,10), dpi=80)
self.ax = self.fig.add_subplot(111)
self.canvas = FigureCanvas(self.fig)
self.box.pack_start(self.canvas, True, True, 0)
self.liststore = Gtk.ListStore(float, float)
self.treeview = Gtk.TreeView(model=self.liststore)
self.box.pack_start(self.treeview, False, True, 0)
self.xrenderer = Gtk.CellRendererText()
self.xrenderer.set_property("editable", True)
self.column_text = Gtk.TreeViewColumn("x-Value", self.xrenderer, text=0)
self.treeview.append_column(self.column_text)
self.yrenderer = Gtk.CellRendererText()
self.yrenderer.set_property("editable", True)
self.column_editabletext = Gtk.TreeViewColumn("y-Value", self.yrenderer, text=1)
self.treeview.append_column(self.column_editabletext)
self.xrenderer.connect("edited", self.xedited)
self.yrenderer.connect("edited", self.yedited)
self.liststore.append([2.35, 2.40])
self.liststore.append([3.45, 4.70])
self.liststore.append()
def resetplot(self):
self.ax.cla()
self.ax.set_xlim(0,10)
self.ax.set_ylim(0,10)
self.ax.grid(True)
def plotpoints(self):
self.resetplot()
for row in self.liststore:
self.ax.scatter(row[:1], row[1:], marker='o', s=50)
self.fig.canvas.draw()
def xedited(self, widget, path, number):
self.liststore[path][0] = float(number.replace(',', '.'))
self.plotpoints()
def yedited(self, widget, path, number):
self.liststore[path][1] = float(number.replace(',', '.'))
self.plotpoints()
db = DataBase()
db.resetplot()
db.plotpoints()
db.window.connect("delete-event", Gtk.main_quit)
db.window.show_all()
Gtk.main()
Answer: I don't think you can. Just don't add a row unless you've got data for it,
unless you store strings in the model; then you can use empty strings for the
empty rows, as in the sketch below.
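A minimal sketch of that string-based workaround, adapted from the code above:
    self.liststore = Gtk.ListStore(str, str)
    self.liststore.append(["2.35", "2.40"])
    self.liststore.append(["3.45", "4.70"])
    self.liststore.append(["", ""])   # a genuinely empty-looking row
You would then need to adjust xedited/yedited to store the entered text as strings, and convert with float() when plotting.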
|
Convert from Dict to JSON in Python
Question: I want to construct a dict in Python that json.dumps(arg) will convert
to the following JSON structure:
"{\"type\":\"id\",
\"entries:\":
[[\"a\",91],
[\"b\",65],
[\"c\",26],
[\"d\",25]]}"
This is what I have so far:
json_dict = {'type': str("id"),
'entries': [['a': "91"], #Error line
['b': "65"],
['c': "26"],
['d': "25"]]}
I am getting "invalid syntax" error on the line which is marked with #Error
line. How can I represent this hierarchical structure in a dict and still be
able to convert it to the desired JSON structure?
Answer: Python lists use commas, not colons:
json_dict = {'type': str("id"),
'entries': [['a', "91"], # note the comma after 'a', not a colon
['b', "65"],
['c', "26"],
['d', "25"]]}
With commas, this is now valid Python syntax, producing a data structure that
can be serialised to JSON:
>>> json_dict = {'type': str("id"),
... 'entries': [['a', "91"],
... ['b', "65"],
... ['c', "26"],
... ['d', "25"]]}
>>> import json
>>> json.dumps(json_dict)
'{"type": "id", "entries": [["a", "91"], ["b", "65"], ["c", "26"], ["d", "25"]]}'
|
TextBlob installation in windows
Question: I have followed the instruction in [Trouble installing TextBlob for
Python](http://stackoverflow.com/questions/20562768/trouble-installing-
textblob-for-python) for TextBlob installation in the Windows 7. It got
installed, but when I go to Python IDLE and type `import TextBlob` it says
> No module named TextBlob
How to solve this problem?
Or can I directly place the libraries associated with the package in the
Python Lib folder and try to import it in the program? If it is advisable
please tell the procedure to do that. Will it work?
Any help will be highly appreciated.
Answer: The module is named `textblob` (all lowercase); `TextBlob` is a class inside it. Try this:
from textblob import TextBlob
Source: [TextBlob package description](https://pypi.python.org/pypi/textblob)
|
Radio button display and selection issue in wxPython
Question: I am creating multi-column lists which are all equal in length, and I am also
generating a number of radio buttons equal to the length of the list. I have a couple
of issues:
1] Display issue: In the following fig., I get radio buttons.
But when I scroll down, this is what happens.
Following is the snippet of my code that generates them. Please help me fix this
issue so that the full list of radio buttons is displayed properly.
w = 0
for i in range(1,len(lut_code)):
w += 30
rb_G = wx.RadioButton(scroll1, -1, "G", (500,w), style=wx.RB_GROUP)
rb_F = wx.RadioButton(scroll1, -1, "F", (540,w))
rb_P = wx.RadioButton(scroll1, -1, "P", (580,w))
2] Selection of radio button: When I want to select a single radio button from
a row, the complete row is selected instead and it turns blue, as in the
following fig.
Is it because of my use of wx.ListCtrl to display these columns? What would be
the fix, or an alternative method, to select only the radio button of choice
instead of selecting the whole row?
Answer: Took me some time to figure this out. I think you are actually drawing the
radio buttons on the frame by specifying the position! This will obviously not
work in this case because your listCtrl comes to the foreground when you try
to select a radio button.
If you notice the radio buttons are not correctly aligned according to the
list items. The gap between the buttons is greater than the gap between rows
of the list.
The problem with using list_ctrl is you cannot put widgets like radio buttons
in them.
I would suggest you use something like wx.ScrolledWindow for this. Put each
row into a sizer and stack these sizers vertically.
UPDATE:
import wx
class Frame ( wx.Frame ):
def __init__( self, parent ):
wx.Frame.__init__ ( self, parent, id = wx.ID_ANY, title = u"Test", pos = wx.DefaultPosition, size = wx.Size( -1,-1 ), style = wx.DEFAULT_FRAME_STYLE|wx.TAB_TRAVERSAL )
sizer0 = wx.BoxSizer( wx.VERTICAL )
self.scrolledWindow = wx.ScrolledWindow( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.HSCROLL|wx.VSCROLL )
self.scrolledWindow.SetScrollRate( 5, 5 )
grid_sizer = wx.GridSizer( 0, 8, 0, 0 )
self.head1 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"Code", wx.DefaultPosition, wx.DefaultSize, 0 )
self.head1.Wrap( -1 )
grid_sizer.Add( self.head1, 0, wx.ALL, 5 )
self.head2 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"Classification", wx.DefaultPosition, wx.DefaultSize, 0 )
self.head2.Wrap( -1 )
grid_sizer.Add( self.head2, 0, wx.ALL, 5 )
self.head3 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"A", wx.DefaultPosition, wx.DefaultSize, 0 )
self.head3.Wrap( -1 )
grid_sizer.Add( self.head3, 0, wx.ALL, 5 )
self.head4 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"B", wx.DefaultPosition, wx.DefaultSize, 0 )
self.head4.Wrap( -1 )
grid_sizer.Add( self.head4, 0, wx.ALL, 5 )
self.head5 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"C", wx.DefaultPosition, wx.DefaultSize, 0 )
self.head5.Wrap( -1 )
grid_sizer.Add( self.head5, 0, wx.ALL, 5 )
self.head6 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"D", wx.DefaultPosition, wx.DefaultSize, 0 )
self.head6.Wrap( -1 )
grid_sizer.Add( self.head6, 0, wx.ALL, 5 )
self.head7 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"Cond.", wx.DefaultPosition, wx.DefaultSize, 0 )
self.head7.Wrap( -1 )
grid_sizer.Add( self.head7, 0, wx.ALL, 5 )
self.head8 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"Update", wx.DefaultPosition, wx.DefaultSize, 0 )
self.head8.Wrap( -1 )
grid_sizer.Add( self.head8, 0, wx.ALL, 5 )
self.column11 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"86", wx.DefaultPosition, wx.DefaultSize, 0 )
self.column11.Wrap( -1 )
grid_sizer.Add( self.column11, 0, wx.ALL, 5 )
self.column12 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"Urban", wx.DefaultPosition, wx.DefaultSize, 0 )
self.column12.Wrap( -1 )
grid_sizer.Add( self.column12, 0, wx.ALL, 5 )
self.column13 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"68", wx.DefaultPosition, wx.DefaultSize, 0 )
self.column13.Wrap( -1 )
grid_sizer.Add( self.column13, 0, wx.ALL, 5 )
self.column14 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"80", wx.DefaultPosition, wx.DefaultSize, 0 )
self.column14.Wrap( -1 )
grid_sizer.Add( self.column14, 0, wx.ALL, 5 )
self.column15 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"86", wx.DefaultPosition, wx.DefaultSize, 0 )
self.column15.Wrap( -1 )
grid_sizer.Add( self.column15, 0, wx.ALL, 5 )
self.column16 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, u"89", wx.DefaultPosition, wx.DefaultSize, 0 )
self.column16.Wrap( -1 )
grid_sizer.Add( self.column16, 0, wx.ALL, 5 )
self.column17 = wx.StaticText( self.scrolledWindow, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, 0 )
self.column17.Wrap( -1 )
grid_sizer.Add( self.column17, 0, wx.ALL, 5 )
radio1Choices = [ u"G", u"P", u"F" ]
self.radio1 = wx.RadioBox( self.scrolledWindow, wx.ID_ANY, wx.EmptyString, wx.DefaultPosition, wx.DefaultSize, radio1Choices, 3, wx.RA_SPECIFY_COLS )
self.radio1.SetSelection( 0 )
grid_sizer.Add( self.radio1, 0, wx.ALL, 5 )
self.scrolledWindow.SetSizer( grid_sizer )
self.scrolledWindow.Layout()
grid_sizer.Fit( self.scrolledWindow )
sizer0.Add( self.scrolledWindow, 1, wx.EXPAND |wx.ALL, 5 )
# Bind the radio box select event to a function
self.radio1.Bind( wx.EVT_RADIOBOX, self.on_selected )
self.SetSizer( sizer0 )
self.Layout()
self.Maximize()
self.Centre( wx.BOTH )
self.Show()
def on_selected(self, event):
# Depending upon the option selected the values of A,B,C,D are changed
if self.radio1.GetStringSelection() == 'P':
self.column13.SetLabel('25')
self.column14.SetLabel('27')
self.column15.SetLabel('34')
self.column16.SetLabel('12')
elif self.radio1.GetStringSelection() == 'F':
self.column13.SetLabel('56')
self.column14.SetLabel('70')
self.column15.SetLabel('49')
self.column16.SetLabel('54')
else:
self.column13.SetLabel('78')
self.column14.SetLabel('83')
self.column15.SetLabel('69')
self.column16.SetLabel('100')
if __name__ == "__main__":
app = wx.App()
Frame(None)
app.MainLoop()
I wrote this sample code to show you how you can do it. It's a bit crude but I
hope you get the picture.
You might also want to check the answer in this post:[Adding a button to every
row in ListCtrl WxPython](http://stackoverflow.com/questions/15466244/adding-
a-button-to-every-row-in-listctrl-wxpython). It might be a better choice.
|
How do I open the JSON response to my Twitter search query?
Question: I have this rauth-powered command-line Python script so far:
import json
from rauth import OAuth1Service
twitter = OAuth1Service(
name='twitter',
consumer_key='[REDACTED]',
consumer_secret='[REDACTED]',
request_token_url='https://api.twitter.com/oauth/request_token',
access_token_url='https://api.twitter.com/oauth/access_token',
authorize_url='https://api.twitter.com/oauth/authorize',
base_url='https://api.twitter.com/1.1/')
request_token, request_token_secret = twitter.get_request_token()
authorize_url = twitter.get_authorize_url(request_token)
print 'Copy-paste this URL into your browser: ' + authorize_url
twitter_pin = raw_input('Enter PIN: ')
session = twitter.get_auth_session(request_token,
request_token_secret,
method='POST',
data={'oauth_verifier': twitter_pin})
search_results = session.get('search/tweets.json', params={'q':'example'})
print json.dumps(search_results)
rauth does all the OAuth handling and I get an object in return. I try to
print it and it says Response [200] (in angle brackets), indicating
success. I try to use Python's json.dumps to print out the contents and I get:
TypeError: <Response [200]> is not JSON serializable
I'm probably overlooking something very small. What's wrong with this?
[Relevant Twitter
documentation](https://dev.twitter.com/docs/api/1.1/get/search/tweets)
[Relevant rauth
documentation](https://rauth.readthedocs.org/en/latest/api/#oauth-1-0-services)
Answer: The `rauth` library builds on top of the [`requests` library](http://python-
requests.org/), returning a specialised [`requests.Session`
object](http://docs.python-requests.org/en/latest/user/advanced/#session-
objects).
The object returned by [`session.get()`](http://docs.python-
requests.org/en/latest/api/#requests.Session.get) is a [`requests.Response`
object](http://docs.python-requests.org/en/latest/api/#requests.Response);
call the [`response.json()` method](http://docs.python-
requests.org/en/latest/api/#requests.Response.json) to get the JSON response
data as a Python structure:
print search_results.json()
No need to dump that back into a JSON string, work directly with the Python
data structure.
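For example, a sketch assuming the standard shape of Twitter's v1.1 search response, where the tweets live under the `statuses` key:
    data = search_results.json()
    for tweet in data['statuses']:
        print tweet['text']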
|
Maya Python: Apply Transformation Matrix
Question: I have been looking for this answer but I can't seem to find it anywhere,
so I hope I can get my answer here...
I'm using the Maya Python API and I want to apply a transformation matrix to a mesh.
This is how I made the mesh:
mesh = om.MFnMesh()
ShapeMesh = cmds.group(em=True)
parentOwner = get_mobject( ShapeMesh )
meshMObj = mesh.create(NumVerts, len(FaceCount), VertArray, FaceCount, FaceArray ,parentOwner)
cmds.sets( ShapeMesh, e=True,forceElement='initialShadingGroup')
defaultUVSetName = ''
defaultUVSetName = mesh.currentUVSetName(-1)
mesh.setUVs ( UArray, VArray, defaultUVSetName )
mesh.assignUVs ( FaceCount, FaceArray, defaultUVSetName )
This is how i create the TFM:
m = struct.unpack("<16f",f.read(64))
mm = om.MMatrix()
om.MScriptUtil.createMatrixFromList(m,mm)
mt = om.MTransformationMatrix(mm)
Basically I read 16 floats and convert them into a transformation matrix;
however, I don't know how to apply the mt matrix to my mesh...
I did manage to get the position, rotation and scale from it, in case that
helps, this way:
translate = mt.translation(om.MSpace.kWorld)
rotate = mt.rotation().asEulerRotation()
scaleUtil = om.MScriptUtil()
scaleUtil.createFromList([0,0,0],3)
scaleVec = scaleUtil.asDoublePtr()
mt.getScale(scaleVec,om.MSpace.kWorld)
scale = [om.MScriptUtil.getDoubleArrayItem(scaleVec,i) for i in range(0,3)]
Now my last step is applying this matrix to the mesh, but I can't find a
good way to do it. Does someone know how to do this in Maya?
Thanks in advance: Seyren.
Answer: Not sure what you mean by applying the matrix to your mesh, but if you want to
update the position of each point by transforming them with that matrix, then
here you go for a given MFnMesh `mesh` and a given MMatrix `matrix`:
import banana.maya
banana.maya.patch()
from maya import OpenMaya
mesh = OpenMaya.MFnMesh.bnn_get('pCubeShape1')
matrix = OpenMaya.MMatrix()
points = OpenMaya.MPointArray()
mesh.getPoints(points)
for i in range(points.length()):
points.set(points[i] * matrix, i)
mesh.setPoints(points)
If you don't want to directly update the points of the mesh, then you need to
apply the matrix to the transformation node by retrieving its parent transform
and using the `MFnTransform::set()` method.
Note that I've used in my code snippet a set of extensions that I've wrote and
that might be helpful if you're using the Maya Python API. The code is
available on [GitHub](https://github.com/christophercrouzet/banana.maya) and
it also comes with a [documentation](http://bananamaya.readthedocs.org/) to
give you an idea.
|
Automatically logging advertising data from Ghostery plugin with Selenium?
Question: I'm interested in keeping an eye on which advertising networks are running on
a variety of websites. The [Ghostery](https://www.ghostery.com) browser plugin
does a great job of showing me which ad networks are used on any website. For
example, on StackOverflow, Ghostery says we're being monitored by DoubleClick,
Google Analytics, Quantcast, and ScoreCard.
On a weekly basis, I'd like to use Selenium to automatically browse a few
hundred websites and save the Ghostery data associated with these websites.
Using the Python bindings for Selenium, I wrote out some rough pseudocode:
import selenium.webdriver as webdriver
urls = ['www.stackoverflow.com', 'www.amazon.com', ...]
driver = webdriver.Firefox()
for url in urls:
driver.get(url)
# now, how do I access Ghostery's analysis of this URL?
I suppose the broader question is "**from Selenium, how do I connect to other
browser plugins?** "
* * *
For fun, I posted an example of what Ghostery's UI looks like (which I'd like
to access programmatically):

Answer: Selenium is used to access and interact with a browser's
[DOM](http://en.wikipedia.org/wiki/Document_Object_Model). Selenium is not
able to access a browser's controls; it is a completely inappropriate tool for
what you want to accomplish.
|
SOLVED: Embedded Python - [_socket gets module methods BUT socket.py: missing methods]
Question: # SOLVED
* * *
# Python 2.7 embedded with Marmalade C++ middleware
I've embedded Python 2.7 into my mobile program using the Marmalade C++
middleware (arm gcc). I can run most of the standard modules and 3rd-party
libraries.
* ( some source here: <https://github.com/guyburton/python-loves-marmalade> )
* (I'll upload my changes when this is fixed)
* ( module attribute dump script here: <http://code.activestate.com/recipes/137951-dump-all-the-attributes-of-an-object/> )
When I try to import socket.py (making sure not to run it in the home directory),
it says _socket is missing methods:
# socket:
>>> import socket
File "/pythonHome/Lib/socket.py", line 229, in <module>
[0xfa0] FILE: s3eFileOpen('/pythonHome/Lib/socket.py', 'rb') succeeded
p.__doc__ = getattr(_realsocket,_m).__doc__
AttributeError: type object '_socket.socket' has no attribute 'getpeername'
>>> import os
>>> os.chdir("/tmp")
[0xfa0] IWCRT: chdir: '/' -> '/tmp'
>>> import socket
AttributeError: type object '_socket.socket' has no attribute 'getpeername'
>>> from socket import *
AttributeError: type object '_socket.socket' has no attribute 'getpeername'
After many problems with the socket module, it finally compiles and I can
import the _socket c module (displays all methods available):
# _socket:
>>> import _socket
>>> print (os.path.dirname(_socket.__file__))
AttributeError: 'module' object has no attribute '__file__'
>>> print (dir("_socket"))
['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_formatter_field_name_split', '_formatter_parser', 'capitalize', 'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'index', 'isalnum', 'isalpha', 'isdigit', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']
>>> dumpMod.dumpObj(_socket)
Documentation string: """Implementation module for socket operations. See
the socket module for documentation."""
Built-in Methods: fromfd, getaddrinfo, getdefaulttimeout, gethostbyaddr
gethostbyname, gethostbyname_ex, gethostname
getnameinfo, getprotobyname, getservbyname
getservbyport, htonl, htons, inet_aton, inet_ntoa
inet_ntop, inet_pton, ntohl, ntohs, setdefaulttimeout
__name__ _socket
__package__ None
* * *
# EDIT :: Here's a link that compares normal _socket attributes to my compiled
_socket version: <http://www.diffchecker.com/di6k3fsc>
* * *
* * *
# EDIT I've isolated the function throwing the error from socket.py (I
removed getpeername to check all the other functions; they're fine):
import _socket
_socketmethods = (
'bind', 'connect', 'connect_ex', 'fileno', 'listen', 'getsockname','getsockopt','setsockopt','sendall', 'setblocking','settimeout', 'gettimeout', 'shutdown')
for _m in _socketmethods:
print(getattr(_socket.socket,_m).__doc__)
= output (minus the 'getpeername' method)
bind(address)
Bind the socket to a local address. For IP sockets, the address is a
pair (host, port); the host must refer to the local host. For raw packet
sockets the address is a tuple (ifname, proto [,pkttype [,hatype]])
connect(address)
Connect the socket to a remote address. For IP sockets, the address
is a pair (host, port).
connect_ex(address) -> errno
This is like connect(address), but returns an error code (the errno value)
instead of raising an exception when an error occurs.
fileno() -> integer
Return the integer file descriptor of the socket.
listen(backlog)
Enable a server to accept connections. The backlog argument must be at
least 1; it specifies the number of unaccepted connection that the system
will allow before refusing new connections.
getsockname() -> address info
Return the address of the local endpoint. For IP sockets, the address
info is a pair (hostaddr, port).
getsockopt(level, option[, buffersize]) -> value
Get a socket option. See the Unix manual for level and option.
If a nonzero buffersize argument is given, the return value is a
string of that length; otherwise it is an integer.
setsockopt(level, option, value)
Set a socket option. See the Unix manual for level and option.
The value argument can either be an integer or a string.
sendall(data[, flags])
Send a data string to the socket. For the optional flags
argument, see the Unix manual. This calls send() repeatedly
until all data is sent. If an error occurs, it's impossible
to tell how much data has been sent.
setblocking(flag)
Set the socket to blocking (flag is true) or non-blocking (false).
setblocking(True) is equivalent to settimeout(None);
setblocking(False) is equivalent to settimeout(0.0).
settimeout(timeout)
Set a timeout on socket operations. 'timeout' can be a float,
giving in seconds, or None. Setting a timeout of None disables
the timeout feature and is equivalent to setblocking(1).
Setting a timeout of zero is the same as setblocking(0).
gettimeout() -> timeout
Returns the timeout in floating seconds associated with socket
operations. A timeout of None indicates that timeouts on socket
operations are disabled.
shutdown(flag)
Shut down the reading side of the socket (flag == SHUT_RD), the writing side
of the socket (flag == SHUT_WR), or both ends (flag == SHUT_RDWR).
* * *
Added getpeername, the only one with a problem
>>> import _socket
>>> print(getattr(_socket.socket,'getpeername').__doc__)
AttributeError: type object '_socket.socket' has no attribute 'getpeername'
* * *
# Thoughts?
* Could it still be a python path issue?
* setenv in c++
* Another missing dependent module? ( selectmodule.c is also included)
* ~~Since The compiled module is embedded in a libary containing the enterpreter aswell, and there is no file. do I need to modify socket.py...?~~
* problems with getaddrinfo.c? getnameinfo.c ? >> problems with socketmodule.c?
Answer: # SOLVED
In **pyconfig.h** set **HAVE_GETPEERNAME** 1 ( while you're at it do
gethostbyname too)
/* Define to 1 if you have the `getpeername' function. */
//#undef HAVE_GETPEERNAME
/* Define to 1 if you have the `gethostbyname' function. */
//#undef HAVE_GETHOSTBYNAME
/* SOCKETS FUNCTION */
#define HAVE_GETPEERNAME 1
#define HAVE_GETHOSTBYNAME 1
* * *
I was loading the wrong bare-minimum pyconfig, which didn't include the settings
for the other modules... yet most of them worked, just not this one. Anyway,
solved.
|
Python's urllib.request.urlopen with disrupted internet connection
Question: I have had some problems with python's urllib and a disrupted internet
connection: I can never get information from urllib.request.urlopen when it is first
called without an active internet connection. The following works fine:
> python
>>> import urllib.request
>>> urllib.request.urlopen("http://www.google.com")
<http.client.HTTPResponse object at 0x7f6f54681438>
#Now disable internet connection:
> sudo ip link set enp4s0 down
>>> urllib.request.urlopen("http://www.google.com")
Traceback (most recent call last):
File "/usr/lib/python3.4/urllib/request.py", line 1189, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "/usr/lib/python3.4/http/client.py", line 1090, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python3.4/http/client.py", line 1128, in _send_request
self.endheaders(body)
File "/usr/lib/python3.4/http/client.py", line 1086, in endheaders
self._send_output(message_body)
File "/usr/lib/python3.4/http/client.py", line 924, in _send_output
self.send(msg)
File "/usr/lib/python3.4/http/client.py", line 859, in send
self.connect()
File "/usr/lib/python3.4/http/client.py", line 836, in connect
self.timeout, self.source_address)
File "/usr/lib/python3.4/socket.py", line 491, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/usr/lib/python3.4/socket.py", line 530, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/urllib/request.py", line 153, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.4/urllib/request.py", line 455, in open
response = self._open(req, data)
File "/usr/lib/python3.4/urllib/request.py", line 473, in _open
'_open', req)
File "/usr/lib/python3.4/urllib/request.py", line 433, in _call_chain
result = func(*args)
File "/usr/lib/python3.4/urllib/request.py", line 1215, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/lib/python3.4/urllib/request.py", line 1192, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -2] Name or service not known>
#Reenable internet connection:
> sudo ip link set enp4s0 up #and wait a bit
>>> urllib.request.urlopen("http://www.google.com")
<http.client.HTTPResponse object at 0x7f6f5468c898>
So far so good. Now the exact same thing, but without calling urlopen the
first time:
> python
>>> import urllib.request
# do not call urlopen before internet is down...
#Now disable internet connection:
> sudo ip link set enp4s0 down
>>> urllib.request.urlopen("http://www.google.com")
[exactly the same error message as above]
#Reenable internet connection:
> sudo ip link set enp4s0 up #and wait a bit
#Ensure internet connection is up
> ip link show enp4s0 up
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP [...]
>>> urllib.request.urlopen("http://www.google.com")
[exactly the same error message as above]
#What's the problem? The internet connection IS up
#However:
> host www.google.com
www.google.com has address 173.194.69.104
[...]
>>> urllib.request.urlopen("http://173.194.69.104")
<http.client.HTTPResponse object at 0x7f3116a72e48>
So I suppose it has something to do with DNS (caching)?
Finally some information about my system:
> python --version
Python 3.4.1
> uname -a
Linux charon 3.15.3-1-ARCH #1 SMP PREEMPT Tue Jul 1 07:32:45 CEST 2014 x86_64 GNU/Linux
Sorry about the weird formatting. I mixed up 'normal' shell commands (prefixed with '>')
and python commands (prefixed with '>>>') to make the exact command sequence
clear (they obviously happened in different terminals).
Answer: You are running into a [well-known glibc
problem](https://sourceware.org/bugzilla/show_bug.cgi?id=3675). One can argue
whether this is a mis-use of glibc or whether glibc is doing something wrong
here. `res_init` is not part of POSIX, but an interface originating from BSD,
so it is hard to do right in a platform-independent manner.
There seems to be no bug report against python for this problem, so you might
want to [file one](http://bugs.python.org/).
As a workaround, you could use `ctypes` to make a call to `res_init` yourself,
but I don't know how to do this exactly off the top of my head; a rough sketch follows.
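A hedged, untested sketch of that ctypes workaround; it assumes Linux/glibc, where libc exports the resolver re-initialisation routine as `__res_init` (on some systems the symbol lives in libresolv instead):
    import ctypes
    libc = ctypes.CDLL("libc.so.6")
    res_init = getattr(libc, "__res_init")
    res_init()   # re-reads /etc/resolv.conf; call after the network comes back up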
|
Swift: How to get Console User, UID, and GID via SCDynamicStoreCopyConsoleUser?
Question: I am able to get the Username, UID and GID from SCDynamicStoreCopyConsoleUser
using python:
#!/usr/bin/python
from SystemConfiguration import SCDynamicStoreCopyConsoleUser
cfuser = SCDynamicStoreCopyConsoleUser( None, None, None )
print cfuser[0] # Returns console user, e.g.: myUsername
print cfuser[1] # Returns console user’s UID, e.g.: 501
print cfuser[2] # Returns console user’s GID, e.g.: 20
How can I get this same return using Swift?
Swift Declaration of SCDynamicStoreCopyConsoleUser
func SCDynamicStoreCopyConsoleUser(_ store: SCDynamicStore!,
_ uid: CMutablePointer<uid_t>,
_ gid: CMutablePointer<gid_t>) -> Unmanaged<CFString>!
My Swift Call
var uid: CMutablePointer<uid_t>!
var gid: CMutablePointer<gid_t>!
var cfuser: NSArray = [SCDynamicStoreCopyConsoleUser(nil,uid,gid)]
// the return has only one element containing the username
Answer: The Swift function `SCDynamicStoreCopyConsoleUser` takes a
`CMutablePointer<uid_t>` argument which means that you have to pass the
_address_ of a `uid_t` variable. Also you should check if the call succeeded
(otherwise `nil` is returned), and you have to convert the returned unmanaged
object as described in [Working with Cocoa Data
Types](https://developer.apple.com/library/prerelease/mac/documentation/Swift/Conceptual/BuildingCocoaApps/WorkingWithCocoaDataTypes.html):
import SystemConfiguration
var uid: uid_t = 0
var gid: gid_t = 0
if let theResult = SCDynamicStoreCopyConsoleUser(nil, &uid, &gid) {
let name = theResult.takeUnretainedValue()
println("name = \(name), uid = \(uid), gid = \(gid)")
} else {
println("failed")
}
|
Python Shell not running Scrapy
Question: I am running Python.org version 2.7 64 bit on Windows Vista 64 bit to use
Scrapy. I have some code that is working when I run it via Command Shell
(apart from some issues with Command Shell not recognising non Unicode
characters), however when I try running the script via the Python IDLE i get
the following error message:
Warning (from warnings module):
File "C:\Python27\mrscrap\mrscrap\spiders\test.py", line 24
class MySpider(BaseSpider):
ScrapyDeprecationWarning: __main__.MySpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
The code used to generate this error is:
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from scrapy.utils.markup import remove_tags
import re
class MySpider(BaseSpider):
name = "wiki"
allowed_domains = ["wikipedia.org"]
start_urls = ["http://en.wikipedia.org/wiki/Asia"]
def parse(self, response):
titles = response.selector.xpath("normalize-space(//title)")
for titles in titles:
body = response.xpath("//p").extract()
body2 = "".join(body)
print remove_tags(body2)
Firstly, what is the cause of this error when it works fine in the Command Shell?
Secondly, when I follow the instructions in the error and replace both
instances of BaseSpider within the code with just 'Spider', the code runs in
the Python shell but does nothing: no error, nothing printed to the log, no
warnings, nothing.
Can anyone tell me why this revised version of the code is not printing its
output in Python IDLE?
Thanks
Thanks
Answer: Running a spider from IDLE does nothing because nothing ever starts the crawl;
the `scrapy crawl` command-line tool normally does that for you.
Add `from scrapy.cmdline import execute` to your imports,
then put `execute(['scrapy','crawl','wiki'])` at the end of the file and run your script.
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.utils.markup import remove_tags
import re
from scrapy.cmdline import execute
class MySpider(Spider):
name = "wiki"
allowed_domains = ["wikipedia.org"]
start_urls = ["http://en.wikipedia.org/wiki/Asia"]
def parse(self, response):
titles = response.selector.xpath("normalize-space(//title)")
for title in titles:
body = response.xpath("//p").extract()
body2 = "".join(body)
print remove_tags(body2)
execute(['scrapy','crawl','wiki'])
|
python and scrapy THE encoding issue
Question: I simply can't figure this out! :( I am scraping data from a utf-8 encoded site,
or at least that is what it says:
Content-Type: text/html;charset=utf-8
I am getting a list of regular unicode strings with XPath selector extract()
call:
item['city']= element.select('//div[@id="bubble_2"]/div/text()').extract()
This is the list:
[u'Westbahnhofstr.\xa010', u'72070\xa0T\xfcbingen']
Now I join the list into one unicode string:
item['city']= "".join(element.select('//div[@id="bubble_2"]/div/text()').extract())
So far so good:
u'Beim Nonnenhaus\xa0672070\xa0T\xfcbingen'
The problem appears when I try to output this unicode string either to the screen
(print) or to a file (write). Whatever I try, it returns an error
(<http://pastebin.com/51DkX2R2>):
exceptions.UnicodeEncodeError: 'ascii' codec can't encode character u'\xa0' in position 11: ordinal not in range(128)
I have encoded unicode to byte string before output of course:
item['city'].encode('utf-8')
This is my pipeline.py and how I use to open and write to my cvs:
import csv
import items
import urlparse
import codecs
class DepostPipeline(object):
def __init__(self):
self.modelsCsv = csv.writer(codecs.open('Dees.csv', mode='w',encoding='utf-8'))
self.modelsCsv.writerow(['city'])
def process_item(self, item, spider):
if isinstance(item, items.DetailsItem):
item['city'] = item['city'].encode('utf-8')
self.modelsCsv.writerow([item['city']])
return item
The weirdest thing is that my system (Python on Windows) handles unicode
strings perfectly:
C:\Console2>python
Python 2.7.6 (default, Nov 10 2013, 19:24:18) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> s=u'Beim Nonnenhaus\xa0672070\xa0T\xfcbingen'
>>> print s
Beim Nonnenhaus 672070 Tübingen
I have been reading a lot about utf-8, unicode, encoding and decoding over the
last 10 days, but it seems that I am still missing something here. I appreciate
any help or advice.
Answer: You are ignoring the result of the `.encode()` call:
item['city'].encode('utf-8')
Strings are immutable, and are not encoded in-place. Even better, the _type of
the returned object_ is different. You'll need to assign the return value
back:
item['city'] = item['city'].encode('utf-8')
However, you should **not** use `codecs.open()` for the CSV file. The `csv`
module will always write bytestrings, not Unicode.
By using a `codecs.open()` file object, an implicit **decode** takes place to
get back to Unicode, and it is _that_ that fails for you; it is why you get a
`UnicodeDecodeError` exception, not an encode error:
File "C:\Python27\lib\codecs.py", line 351, in write
data, consumed = self.encode(object, self.errors)
exceptions.UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 11: ordinal not in range(128)
Use a regular `open()` call instead:
self.modelsCsv = csv.writer(open('Dees.csv', mode='wb'))
Note the `'wb'`; the `csv` module handles line endings itself.
|
Automated tool to modify python source to support 2.7 and 3.4
Question: We want to support Python 2.7 and 3.4+ from one code base in the future.
I searched for automated tools, but lib2to3 seems to produce output that only
supports Python 3.
I know that the library six could help us, but we have a lot of Python source
files.
My dream: A tool that adds this (or even more future imports) to each file:
from __future__ import absolute_import, division, unicode_literals, print_function
and the tool should update:
* all `u'...'` \--> `'...'`
* `print foo` \--> `print(foo)`
* ...
Answer: This should work, though it's untested so far:
1. use [2to3](https://docs.python.org/2/library/2to3.html) (e.g. `2to3 -w mymodule.py` rewrites a file in place)
2. use [3to2](http://www.startcodon.com/wordpress/category/3to2/) (it mirrors 2to3's interface, so `3to2 -w mymodule.py`)
|
Python performing multiple tasks
Question: I have an endpoint in my API which gets data from several different
datasources. What I am trying to do is send requests to all the datasources at
once and, as soon as I get a result from one datasource, return that result to
the user (terminating all the remaining requests if possible).
What are good libraries in Python that can be used? Any example would be a
great help.
Thanks
Answer: You can use the
[multiprocessing](https://docs.python.org/2/library/multiprocessing.html)
library for this:
from multiprocessing import Process, Queue
import time
q = Queue()
def some_func1(arg1, arg2, q):
#this one will take longer, so we'll kill it after the other finishes
time.sleep(20)
q.put('some_func1 finished!')
def some_func2(arg1, arg2, q):
q.put('some_func2 finished!')
proc1 = Process(target=some_func1,
args = ('arga', 'argb', q))
proc2 = Process(target=some_func2,
args = ('arg1', 'arg2', q))
proc1.start()
proc2.start()
#this will be the result from the first thread that finishes.
#At this point you can wait for the other threads or kill them, or whatever you want.
result = q.get()
print result
#if you want to kill all the procs now:
proc1.terminate()
proc2.terminate()
**EDIT: Use the Queue in Multiprocessing for this as it's process safe.**
|
Crypto.PublicKey RSA Keysize off by one?
Question: I am trying to write a simple python method using Crypto.PublicKey.RSA that
returns the size of an RSA public key, but the number returned is always the
number I expect minus 1.
For example I give it a 1024-bit key and the number I get back from the size()
function is 1023. I give it a 768-bit key and the number I get back is 767.
What am I missing here?
Results of the code below: **I would expect this to return 1024: 1023**
POC Python2 code:
#!/usr/bin/python2
from Crypto.PublicKey import RSA
from base64 import b64decode
def computeRSAStrength(pubkey,verbose=None):
"""Compute and return RSA key strength given a public key"""
#TODO: add base64 validation on pubkey
#Format key and use python crypto libs to determine key strength
keyDER = b64decode(pubkey)
try:
#IMPORTANT PART
keyPub = RSA.importKey(keyDER)
keySize = int(keyPub.size()) #+ 1 ??? WHY DOES THIS RETURN (expected - 1) ???
#END IMPORTANT PART
except ValueError, e:
if verbose:
print 'ValueError Exception: {}'.format(e)
keySize = 0
# ALTERNATE METHOD?: use import os and make a syscall to openssl
if verbose:
print ' - Key in PEM format:'
print keyPub.exportKey('PEM')
return keySize
if __name__ == '__main__':
print 'I would expect this to read 1024: '+str(computeRSAStrength('MIGfMA0GCSqGSIb3DQEBA'+
'QUAA4GNADCBiQKBgQDLM0fpK/rhklYDRJSBQ6bSyZKjQxTeEnZywzodwGAjAste2aOQzXJyZmZrjHZ'+
'0JL6Gy/e351n1P0Yo0cVE4nEQ7WD9jo0cqVEmkf1SInnhN1FGX/pOTjrjh7QU398YFxks/rqnX6C1q'+
'doeu7B4wwRhNEHUjlaH79afYqOtk0ta0wIDAQAB'))
Answer: I'm not sure that `_RSAobj.size()` refers to the actual size of the key. It's
a bit vague but the documentation for
[size()](https://www.dlitz.net/software/pycrypto/api/current/Crypto.PublicKey.RSA._RSAobj-
class.html#size) states this:
size(self)
Tell the maximum number of bits that can be handled by this key.
Returns:
int
Overrides: pubkey.pubkey.size
(inherited documentation)
I'm unsure, but it might actually mean the maximum number of plaintext bits
that can be handled by crypto operations (although you can encrypt up to 128
bytes/ 1024 bit of plaintext). The code explicitly subtracts one from the
number of bits in the modulus, so it is safe to add one to `size()` to get the
modulus size.
There is some seemingly related information at
<http://security.stackexchange.com/q/44702>.
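A hedged cross-check, assuming the PyCrypto key object exposes the modulus as the attribute `n` (a Python long, which has `bit_length()` on 2.7):
    key_bits = keyPub.n.bit_length()   # 1024 for a 1024-bit key
    key_size = keyPub.size() + 1       # agrees with the modulus bit count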
|
How to read a file with a semi colon separator in pandas
Question: I am importing a `.csv` file in Python with pandas.
Here is the format of the `.csv` file:
a1;b1;c1;d1;e1;...
a2;b2;c2;d2;e2;...
.....
Here is how I read it:
from pandas import *
csv_path = "C:...."
data = read_csv(csv_path)
Now when I print the file I get that :
0 a1;b1;c1;d1;e1;...
1 a2;b2;c2;d2;e2;...
And so on... So I need help reading the file and splitting the values into
columns on the semicolon character `;`.
Answer: [`read_csv`](http://pandas.pydata.org/pandas-
docs/stable/generated/pandas.io.parsers.read_csv.html) takes a `sep` param, in
your case just pass `sep=';'` like so:
data = read_csv(csv_path, sep=';')
The reason it failed in your case is that the default value is `','` so it
scrunched up all the columns as a single column entry.
|
reading CSV file and inserting it into 2d list in python
Question: I want to insert the data of a CSV file (network data such as time, IP
address, and port number) into a 2D list in Python.
Here is the code:
import csv
datafile = open('a.csv', 'r')
datareader = csv.reader(datafile,delimiter=';')
data = []
for row in datareader:
data.append(row)
print (data[1:4])
the result is:
[['1', '6', '192.168.4.118', '1605', '', '115.85.145.5', '80', '', '60', '0.000000000', '0x0010', 'Jun 15, 2010 18:27:57.490835000', '0.000000000'],
['2', '6','115.85.145.5', '80', '', '192.168.4.118', '1605', '', '1514', '0.002365000', '0x0010', 'Jun 15, 2010 18:27:57.493200000', '0.002365000'],
['3', '6', '115.85.145.5', '80', '', '192.168.4.118', '1605', '', '1514', '0.003513000', '0x0018', 'Jun 15, 2010 18:27:57.496713000', '0.005878000']]
But it looks one-dimensional to me, and I don't know how to create a 2D array
and insert each element into it.
Please suggest what code I should use for this purpose. (I looked at previous
hints on the website but none of them worked for me.)
Answer: You already have a list of lists, which is effectively a 2D array; you can
address it like one: `data[1][1]`, etc.
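For instance, indexing into the rows printed above (a quick sketch):
    >>> data[1][2]   # row 1, column 2
    '192.168.4.118'
    >>> data[1][6]   # another column in the same row
    '80'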
|
Boost::Python Not Finding C++ Class in OSX
Question: I'm porting an Application from Linux to OS X and the Boost::Python
integration is failing at run time.
I'm exposing my C++ classes like so:
using namespace scarlet;
BOOST_PYTHON_MODULE(libscarlet) {
using namespace boost::python;
class_<VideoEngine, boost::noncopyable>("VideoEngine", no_init)
.def("play", &VideoEngine::play)
.def("pause", &VideoEngine::pause)
.def("isPaused", &VideoEngine::isPaused)
[...]
;
}
I'm importing the library like so:
try {
boost::python::import("libscarlet");
} catch (boost::python::error_already_set & e) {
PyErr_Print();
}
Then I inject an instance into the global Python namespace like so:
void VideoEngine::init() {
[...]
try {
auto main_module = boost::python::import("__main__");
auto main_namespace = main_module.attr("__dict__");
main_namespace["engine"] = boost::python::object(boost::python::ptr(this));
} catch (boost::python::error_already_set & e) {
PyErr_Print();
}
[...]
}
It works great in Linux but in OS X an exception is thrown and `PyErr_Print()`
returns `TypeError: No Python class registered for C++ class
scarlet::VideoEngine`.
As far as I can tell, the module works without issue when imported via the
Python interpreter. It is difficult to test since it is designed to be injected
as a pre-constructed instance, but the class and functions are present as shown
below:
$ python
Python 2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import libscarlet
>>> libscarlet.VideoEngine
<class 'libscarlet.VideoEngine'>
>>> libscarlet.VideoEngine.play
<unbound method VideoEngine.play>
Any ideas as to where the incompatibility lies?
**Edit:** ~~I'm starting to think it might be related to multithreading since
my OS X implementation uses a different threading structure, although all of
the calls detailed here happen in the same thread. Could that be the cause of
such an issue?~~ Probably not the issue since it doesn't work in MS Windows in
single-threaded mode.
Answer: I have solved this now.
It was caused entirely by Boost::Python being statically compiled; once I
recompiled it as a shared library the problem went away entirely, on all
platforms.
The lesson: don't compile boost statically. I'm pretty sure there are warnings
against it and they should be heeded.
|
My opencv python program displays a seemingly expectant phrase, but neither terminates nor allows progress
Question: I am having trouble with a program in which I want to perform a Sobel
derivation with OpenCV to find edges in a picture. I found a python adaptation
of this code: sobel_derivatives.html#sobel-derivatives from the opencv
tutorials at this address: Official_Tutorial_Python_Codes/3_imgproc/sobel.py,
which I wanted to copy and adapt to run on an image I found on the web. After
having done a bit of debugging, and running the program as "sudo", I simply get
the message:
init done
opengl support available
and the program never terminates. I cannot use ctrl+c or any other such
commands (I am accessing my Ubuntu OS from a PuTTY client); my only option to
retry is to stop and restart my session.
My problem seems rather similar to this one: init-done-opengl-support-
available, but I cannot quite work it out, nor am I able to post a
further question on that thread. As far as I can tell, there seems to be an
error in my OpenCV2. I tried to reinstall it with `sudo apt-get install`,
but no change. My code looks like this:
"""
Sobel_derivation.py
attempt at finding the derivatives in a picture.
this is extremely beginner-level coding, so please
bear with me. Might also take a look at the supposedly
more accurate 'Scharr' derivatives.
mostly imported and attempted to interpret from
https://github.com/abidrahmank/OpenCV2-Python/blob/master/Official_Tutorial_Python_Codes/3_imgproc/sobel.py
"""
import cv2 # this is the currently installed version of openCV
import numpy as np # useful tool for most array-based computing
scale = 1 # for scaling the derivatives in the x and y direction
delta = 0 # optional value, here apparently set as trivial (also, mostly
#important for the 'Scharr' function from what I can see.
#ddepth = cv2.CV_16u # output image depth for 'scharr' function
ddepth = -1 #this should give the return image equal depth
img = cv2.imread('splash.jpg') # importing the image to be read from
# this directory, for simplicity
img = cv2.GaussianBlur(img, (3,3),0) # calling the function from cv. img
# is the previous image and (3,3) seems to be the size. the point of
# the function is to smooth the image to reduce noise.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)# the function returns a
# colour gray, greyscale of the image.
# time for the gradient in the x-direction
grad_x = cv2.Sobel(gray, ddepth, 1,0, ksize=3, scale=scale, delta=delta, borderType=cv2.BORDER_DEFAULT)
# grad_x = cv2.Scharr(gray,ddepth, 1,0)
#now for y
grad_y = cv2.Sobel(gray, ddepth, 0,1, ksize=3, scale=scale, delta=delta, borderType=cv2.BORDER_DEFAULT)
# grad_y = cv2.Scharr(gray, ddepth, 0, 1)
abs_grad_x = cv2.convertScaleAbs(grad_x) #converting back to uint8
abs_grad_y = cv2.convertScaleAbs(grad_y)
dst = cv2.addWeighted(abs_grad_x, 0.5, abs_grad_y, (0.5), 0)
#dst = cv2.add(abs_grad_x. abs_grad_y)
cv2.imshow('dst', dst)
cv2.waitKey(0)
    cv2.destroyAllWindows()
Answer: You have to enable X11 forwarding in order to open windows from a remote host
to your local machine. If you use putty go to Putty Configuration > Connection
> X11 and check enable X11 forwarding.

If you are connecting through ssh, simply add the -X flag to your command,
e.g. `ssh -X [email protected]`
The cryptic output
init done
opengl support available
was left there (minor bug I guess) by the developers and is printed anytime
cv::imshow(..) is executed in a program.
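If you cannot (or do not want to) forward X11, a headless alternative is to
skip the GUI calls entirely and write the result image to disk instead; a
minimal sketch (the output filename is arbitrary):
    # Instead of cv2.imshow/cv2.waitKey, save the result to a file.
    cv2.imwrite('dst.jpg', dst)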
|
In Python, get the argument passed to a function as a string within the function
Question: I'm currently trying to access the arguments of a Python function as strings,
because I would like to use these in a naming convention for the output of the
function. To start off, I would first like to create a function which simply
'echoes' its arguments (cf. [Getting list of parameter names inside python
function](http://stackoverflow.com/questions/582056/getting-list-of-parameter-
names-inside-python-function)). I tried this:
import inspect
def echo_arguments(arg1,arg2):
frame = inspect.currentframe()
args, _, _, _ = inspect.getargvalues(frame)
arg1=args[0]
arg2=args[1]
return arg1, arg2
However, if I try to call it, I get this:
>>> (h1,h2)=echo_arguments('hello','world')
>>> h1
'arg1'
>>> h2
'arg2'
In other words, it is returning the 'dummy' arguments from when the function
was defined, instead of the 'current' arguments at the time the function is
called. Does anybody know how I could get the latter?
Answer: Use the locals returned by `getargvalues`:
import inspect
def echo_arguments(arg1,arg2):
frame = inspect.currentframe()
args, _, _, locals_ = inspect.getargvalues(frame)
return (locals_[arg] for arg in args)
Results in:
>>> (h1,h2)=echo_arguments('hello','world')
>>> h1
'hello'
>>> h2
'world'
|
Sqlite3 with python inserting same row
Question: I am having a problem with a Python script inserting on different rows in
sqlite3, but I want it to insert on the same row. I am fairly new to using
scripts to insert information into a db, so my knowledge is limited here.
The id column in my database autoincrements. Is there a way to update the
previously inserted row with information? I want everything to be on a single
row in the database. Any help appreciated, thanks.
Sqlite table:
create table ansible_packagelist (id integer PRIMARY KEY, date datetime default current_timestamp, host text, package_name text, installed_version text NULL, upgradeable_version text NULL, required_version text NULL);
Below is my python code for this.
import apt
import sys, getopt
import subprocess
import sqlite3
con = sqlite3.connect("ansible.db")
def aptpkg(package_name):
cache = apt.Cache()
pkg = cache[package_name]
host = subprocess.Popen('hostname', stdout=subprocess.PIPE, universal_newlines=True).stdout.read().strip()
if pkg.is_installed:
print host
print 'Current ' + package_name + ' installed:', pkg.installed.version
con.execute("insert into ansible_packagelist (date, host, package_name, installed_version) values (current_timestamp,?,?,?)", (host, package_name, pkg.installed.version,))
else:
print host, package_name + ' is not installed on this system.\n'
if pkg.is_upgradable:
print 'Upgradeable version of ' + package_name + ' :', pkg.candidate.version
con.execute("insert into ansible_packagelist (upgradeable_version) values (?)", (pkg.candidate.version,))
con.commit()
def aptver(package_version):
cache1 = apt.Cache()
pkg1 = cache1[packname]
con.execute("insert into ansible_packagelist (required_package) values (?)", (package_version,))
if package_version >= pkg1.installed.version:
print 'The Package needs to be upgraded, ' + package_version + ' is the version needed'
elif package_version == pkg1.installed.version:
print 'The package is at the correct version'
elif package_version <= pkg1.installed.version:
print 'The package is at a greater version than requested'
else:
print 'The package is at the correct version'
con.commit()
Answer: In SQL, an INSERT will always create a new row.
To modify an existing row, you need to use an UPDATE statement. To do this,
though, you need a unique key to match on. In this case you can probably get
away with using the package name, but typically you want to fetch the ID (the
one you set to auto-increment) and reference that in your code.
Your upgradeable update statement should look like
con.execute("Update ansible_packagelist Set upgradeable_version = ? Where package_name = ?", (pkg.candidate.version, package_name))
|
Python print flexible amount of lists side by side vertically
Question: I'm new to python and trying to print a few lists side by side vertically e.g.
`list_1 = [1,2,3]` and `list_2 = [4,5]`:
output:
1 4
2 5
3 None
i found `map(None, list_1, list_2)` can achieve this.
However, I may have up to >10 lists to print vertically. I wonder if there
some neat way to do this.
Thanks in advance!
Answer: You could use `map(None, list_1, list_2, list_3, ...)`, since `map` can take
an arbitrary number of arguments.
However, it is usually not a good idea to have numbered variable names. It is
often a sign that you should be using a `list` or `tuple` instead. So instead
of having variables `list_1`, ..., `list_10`, you should have one variable
`columns` which is a list of lists.
If `columns = [list_1, list_2]`, then
import itertools as IT
rows = IT.izip_longest(*columns)
is equivalent to `map(None, list_1, list_2)`, and the same idea carries over
to Python 3 (which removes the `map(None, ...)` option) as
`itertools.zip_longest`. Notice that this notation generalizes well to an
arbitrary number of lists without requiring you to type all those variable
names by hand.
The `*` in `IT.izip_longest(*columns)` is [Python's unpacking
operator](http://www.saltycrane.com/blog/2008/01/how-to-use-args-and-kwargs-
in-python/). It unpacks the items in `columns` and sends those items to
`IT.izip_longest` as individual arguments. This notation allows you to send an
arbitrary number of arguments to a function.
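For example, a minimal sketch of printing the columns side by side (the
sample data is illustrative):
    import itertools as IT
    columns = [[1, 2, 3], [4, 5]]
    for row in IT.izip_longest(*columns):
        print ' '.join(str(item) for item in row)
which prints:
    1 4
    2 5
    3 None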
|
Python-Requests full URL from error message
Question: I'm trying to unshorten URLs with the requests library. I'm currently doing
something like this:
import requests
from contextlib import closing
def unshorten(url):
session = requests.session()
with closing(session.head(url)) as req:
r = req
if not r.headers.get('location'): # not a redirect
return url
tmp_url = url
try:
for redir in session.resolve_redirects(r, r.request):
if redir.status_code == 200 and not url_no_good(redir.url): # ok!
return redir.url
else:
tmp_url = redir.url
else: # no acceptable responses :(
return tmp_url
except requests.exceptions.TooManyRedirects:
return url
(url_no_good is shorthand for some tests to make sure the url isn't, e.g., a
DNS-mediated 404 page)
I'm running into a problem where a given url redirects to a no-longer-valid
site. I don't want the shortened link, I want the 'bad' url. I 'solved' this
with
ERR_PAT = re.compile(r'host=\'([\w\d\.]+)\'')
...
try:
for redir in session.resolve_redirects(r, r.request):
...
except requests.exceptions.TooManyRedirects:
return url
except requests.exceptions.ConnectionError as e:
return 'http://' + re.search(ERR_PAT, e.message.message).group(1) + e.message.url
since, for a requests ConnectionError, `err.message.message` is the string
representation of the error and `err.message.url` is the non-domain part of
the url, e.g., `/foo/bar?baz=bloo`. Cobbling together the different parts of
the error message like this feels _incredibly_ hacky, and I'm really curious
if there's a less-involved way of handling this kind of thing.
Answer: When that exception is raised, `redir` is still bound to the last request that
you tried to follow the redirect on:
try:
redir = r # in case the first redirect fails
for redir in session.resolve_redirects(r, r.request):
if redir.status_code == 200 and not url_no_good(redir.url): # ok!
return redir.url
else:
tmp_url = redir.url
else: # no acceptable responses :(
return tmp_url
except requests.exceptions.TooManyRedirects:
return url
except requests.exceptions.ConnectionError:
return redir.headers.get('location', url)
|
How do I clear the buffer upon start/exit in ZMQ socket? (to prevent server from connecting with dead clients)
Question: I am using a REQ/REP type socket for ZMQ communication in python. There are
multiple clients that attempt to connect to one server. Timeouts have been
added in the client script to prevent indefinite wait.
The problem is that when the server is not running, and a client attempts to
establish a connection, its message gets added to the queue buffer, which
ideally should not even exist at this moment.
and a new client connects, the previous client's data is taken in first by the
server. This should not happen.
When the server starts, it assumes a client is connected to it since it had
tried to connect previously, and could not exit cleanly (since the server was
down).
In the code below, when the client tries the first time, it gets `ERR 03:
Server down` which is correct, followed by `Error disconnecting`. When server
is up, I get `ERR 02: Server Busy` for the first client which connects. This
should not occur. The client should be able to seamlessly connect with the
server now that it's up and running.
Server Code:
import zmq
def server_fn():
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://192.168.1.14:5555")
one=1
while one == 1:
message = socket.recv()
#start process if valid new connection
if message == 'hello':
socket.send(message) #ACK
#keep session alive until application ends it.
while one == 1:
message = socket.recv()
print("Received request: ", message)
#exit connection
if message == 'bye':
socket.send(message)
break
#don't allow any client to connect if already busy
if message == 'hello':
socket.send ('ERR 00')
continue
#do all data communication here
else:
socket.send('ERR 01: Connection Error')
return
server_fn()
Client Code:
import zmq
class client:
def clientInit(self):
hello='hello'
#zmq connection
self.context = zmq.Context()
print("Connecting to hello world server...")
self.socket = self.context.socket(zmq.REQ)
self.socket.connect("tcp://192.168.1.14:5555")
#RCVTIMEO to prevent forever block
self.socket.setsockopt(zmq.RCVTIMEO, 5000)
            #SNDTIMEO is needed since the server may not be up yet
self.socket.setsockopt(zmq.SNDTIMEO, 5000)
try:
self.socket.send(hello)
except:
print "Sending hello failed."
try:
echo = self.socket.recv()
if hello == echo:
#connection established.
commStatus = 'SUCCESS'
elif echo == 'ERR 00':
#connection busy
commStatus = "ERR 00. Server busy."
else:
#connection failed
commStatus="ERR 02"
except:
commStatus = "ERR 03. Server down."
return commStatus
def clientQuit(self):
try:
self.socket.send('bye')
self.socket.recv()
except:
print "Error disconnecting."
cObj = client()
commStatus=cObj.clientInit()
print commStatus
cObj.clientQuit()
PS - I have a feeling the solution may lie in the correct usage of socket.bind
and socket.connect.
Answer: Answering my own question:
The problem is that the first client sends a message which the server accepts
when it starts running, regardless of the status of the client.
To prevent this, two things have to be done. The most important is to use
socket.close() to close the client connection. Secondly, the LINGER option can
be set to a low value or zero, so that pending messages are discarded that
many milliseconds after the socket is closed.
class client:
def clientInit(self):
...
self.socket.setsockopt(zmq.LINGER, 100)
...
def clientQuit(self):
try:
self.socket.send('bye')
self.socket.recv()
except:
print "Error disconnecting."
self.socket.close()
|
How to compare two releases in a MusicBrainz Picard plugin?
Question: I have been trying to write a Picard plugin. My idea is for it to
automatically insert transliterated track listings as comments for releases
with track titles written in non-Latin scripts. MusicBrainz contains these
transliterations as pseudo-releases existing in parallel with the actual
releases. For [a given
release](https://musicbrainz.org/release/1492ce2b-a9ee-4aa3-b9e2-b18ad093bc51),
for example, I would have to first read the relation for the ID of the related
release. I would then have to transfer the titles of those tracks into the
comment field of my files. In pseudocode:
pseudo_release = actual_release.getTransliteration() # As indicated in a relationship
i = 1
while i <= actual_release.numTracks():
actual_release.getTrack(i).setComment(pseudo_release.getTrack(i).getTitle())
i++
The first line is what I'm not sure how to do. Since the relation is not
available as a Picard variable, it can't be accessed by any way indicated in
the API.
I've been looking through the source of Picard and also [the standalone web
service](http://python-musicbrainzngs.readthedocs.org/en/latest/#) but haven't
been able to find anything. Is this at all possible, and if so how do I do it?
Answer: The information about relationships, including transliterated tracklistings,
is available in Picard if you write a metadata processor. A simple plugin
iterating over all relationships of a release would look like this:
PLUGIN_NAME = "Find transliterated tracklisting relationships"
PLUGIN_AUTHOR = "Wieland Hoffmann"
PLUGIN_DESCRIPTION = "I'm too lazy"
PLUGIN_VERSION = "0.1"
PLUGIN_API_VERSIONS = ["1.0"]
from picard.metadata import register_album_metadata_processor
from picard import log
# The relationship type id for transliterations from
# https://musicbrainz.org/relationship/fc399d47-23a7-4c28-bfcf-0607a562b644
TRANS_REL_UUID = "fc399d47-23a7-4c28-bfcf-0607a562b644"
@register_album_metadata_processor
def find_transliteration_relationship(album, metadata, release):
if "relation_list" in release.children:
for rel in release.relation_list:
if rel.relation[0].type_id == TRANS_REL_UUID:
log.info("Found a transliterated tracklisting relationship")
for release in rel.relation[0].release:
log.info("Its target is https://musicbrainz.org/release/%s",
release.id)
The `release` argument that gets passed to the processor is an instance of
Picards
[XmlNode](https://github.com/musicbrainz/picard/blob/master/picard/webservice.py#L72)
class and its structure (including its child objects) resembles the XML you
get by asking the MusicBrainz server about this release via the web service
([this](https://beta.musicbrainz.org/ws/2/release/1492ce2b-a9ee-4aa3-b9e2-b18ad093bc51?inc=release-
rels) is what it returns for your example release if you only ask it about
relationships). Now that you have the MBID of the relationships target, you
can use the
[get](https://github.com/musicbrainz/picard/blob/master/picard/webservice.py#L292)
method of Picards webservice module (the `album`s `tagger.xmlws` attribute is
an instance of the XmlWebService class) to send another request to the
MusicBrainz website asking for data about that release (don't forget to in-
and decrement the `album`s `_requests` attribute so it doesn't complete its
loading steps until after you've changed its data).
Some other plugins that use this to request and process further data are [the
album artist
website](https://github.com/musicbrainz/picard/blob/master/contrib/plugins/albumartist_website.py)
and
[Last.FM.Plus](https://github.com/musicbrainz/picard/blob/master/contrib/plugins/lastfmplus/__init__.py)
plugins.
/edit: I've just been informed that there's already [a
ticket](http://tickets.musicbrainz.org/browse/PICARD-145) for improving how
Picard handles pseudoreleases which has a link to [a
plugin](https://github.com/96187/picard-plugins/blob/master/use-pseudo-
releases.py) doing what you want to be doing.
|
How to make python choose randomly between multiple strings?
Question: How to make python choose randomly between multiple strings?
Answer: Add them all to a list, import random, then call the choice method like so:
In [1]: import random
In [2]: hello = ['hi', 'hello', 'yo', 'bonjour', 'hola', 'salaam']
In [3]: random.choice(hello)
Out[3]: 'bonjour'
|
How to make a text field always be the size of the window. Python
Question: How would I make a text field always be the size of the window? Here is the
rest of the code. I tried what you posted by itself and it worked, but it
won't work here.
Here is what I have, but it does not work.
from Tkinter import *
import tkFileDialog
from tkMessageBox import *
from tkColorChooser import askcolor
import re
class Application(Frame):
def __init__(self, master):
Frame.__init__(self,master)
self.pack()
self.Saved = None
self.FontColor = "Black"
self.BackgroundColor = "White"
self.FontSize = IntVar()
self.Check = None
self.Create_Widgets()
def Create_Widgets(self):
menubar = Menu(root)
FileMenu = Menu(menubar, tearoff=0)
FileMenu.add_command(label="New", command=self.New)
FileMenu.add_command(label="Save", command=self.Save)
FileMenu.add_command(label="Save As", command=self.SaveAs)
FileMenu.add_command(label="Open", command=self.Open)
menubar.add_cascade(label="File", menu=FileMenu)
FormatMenu = Menu(menubar, tearoff=0)
FormatMenu.add_command(label="Font Color", command=self.TextCC)
FormatMenu.add_command(label="Background Color", command=self.BackgroundCC)
FormatMenu.add_command(label="Font Size", command=self.ChangeFontSize)
menubar.add_cascade(label="Format", menu=FormatMenu)
root.config(menu=menubar)
self.Cont = Text(self,wrap=WORD)
self.Cont.pack(side=LEFT,fill=BOTH,expand=YES)
Answer: I tried this code and the `Text` is resized to the window size, even if I
change the window size.
    from Tkinter import *
    root = Tk()
    Cont = Text(root, width=110, height=30, wrap=WORD)
    Cont.pack(side=LEFT, fill=BOTH, expand=YES)
    root.mainloop()
So maybe there is another problem.
* * *
**EDIT:**
`Application` is a `Frame` inside the window created automatically by Tkinter.
You have to make that frame resize too:
def __init__(self, master):
Frame.__init__(self,master)
self.pack(side=LEFT,fill=BOTH,expand=YES)
|
How do I specify a serial port in the following python script using sys.argv and serial?
Question: I am relatively new to python, and am trying to specify a bluetooth serial
port to be used with a script I obtained from GitHub
(<https://github.com/ShimmerResearch/tinyos-
shimmer/blob/e04d83d9df615fc5f49f43765642cd59e979503e/apps/SimpleGSR/simpleGsr.py>).
I am on a mac and want to specify a bluetooth device whose port looks like
this: "/dev/tty.XXXX-XXX-XXX". So far all of my attempts result in the "no
device specified" error provided by the program. How to I embed my serial port
path into this script?
#!/usr/bin/python
import sys, struct, array, time, serial
def wait_for_ack():
ddata = ""
ack = struct.pack('B', 0xff)
while ddata != ack:
ddata = ser.read(1)
return
if len(sys.argv) < 2:
print "no device specified"
print "You need to specifiy the serial port of the shimmer you wish to connect to"
print "example:"
print " getBtStreamVersion.py Com5"
print " or"
print " getBtStreamVersion.py /dev/rfcomm0"
print
else:
ser = serial.Serial(sys.argv[1], 115200)
ser.flushInput()
Thanks for your help and sharing your expertise!
Answer: Pass it as an argument into the script when you run it, like this: `python
getBtStreamVersion.py /dev/tty.XXXX-XXX-XXX`
Or, if you want to hard-code your serial port into the program (not
recommended), you can replace the following:
if len(sys.argv) < 2:
print "no device specified"
print "You need to specifiy the serial port of the shimmer you wish to connect to"
print "example:"
print " getBtStreamVersion.py Com5"
print " or"
print " getBtStreamVersion.py /dev/rfcomm0"
print
else:
ser = serial.Serial(sys.argv[1], 115200)
ser.flushInput()
with this:
ser = serial.Serial("/dev/tty.XXXX-XXX-XXX", 115200)
ser.flushInput()
`sys.argv[1]` is just getting the first command-line argument that you're
passing to the script.
|
django on heroku: ImportError: cannot import name get_path_info
Question: I don't run into any problems running my django app locally, but for some
reason on heroku I get the error `ImportError: cannot import name
get_path_info` and have no idea how to fix this.
Here are my heroku logs:
2014-07-07 18:45:45 [18423] [INFO] Starting gunicorn 0.13.4
2014-07-07 18:45:45 [18424] [ERROR] Exception in worker process:
Traceback (most recent call last):
File "/lib/python2.7/site-packages/gunicorn/arbiter.py", line 456, in spawn_worker
worker.init_process()
File "/lib/python2.7/site-packages/gunicorn/workers/base.py", line 100, in init_process
self.wsgi = self.app.wsgi()
File "/lib/python2.7/site-packages/gunicorn/app/base.py", line 101, in wsgi
self.callable = self.load()
File "/lib/python2.7/site-packages/gunicorn/app/wsgiapp.py", line 24, in load
return util.import_app(self.app_uri)
File "/lib/python2.7/site-packages/gunicorn/util.py", line 241, in import_app
__import__(module)
File "/app/wsgi.py", line 2, in <module>
from dj_static import Cling
File "/lib/python2.7/site-packages/dj_static.py", line 7, in <module>
from django.core.handlers.base import get_path_info
ImportError: cannot import name get_path_info
2014-07-07 18:45:45 [18424] [INFO] Worker exiting (pid: 18424)
18:45:46 web.1 | 2014-07-07 18:45:46 [18423] [INFO] Shutting down: Master
18:45:46 web.1 | 2014-07-07 18:45:46 [18423] [INFO] Reason: Worker failed to boot.
My procfile:
web: gunicorn app.wsgi
and my app/wsgi.py file:
from django.core.wsgi import get_wsgi_application
from dj_static import Cling
application = Cling(get_wsgi_application())
Answer: Try setting `dj-static==0.0.6` in your requirements.txt file. The
Heroku Django tutorial currently shows 0.0.5. You might be running 0.0.6
locally but not on Heroku.
I've reported this issue to Heroku and their tutorial is now updated to
reflect newer versions in the requirements.txt.
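For reference, the pin is just one line in `requirements.txt` (the rest of
your file stays as it is):
    dj-static==0.0.6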
|
Optimizing mysql update on table over million records
Question: I have a mysql table which contains about 1.7 million records. The goal is to
fill missing information in the table. The following is the pseudocode of what
I am trying to do:
SELECT DISTINCT A,B FROM table1
for each value A1,B1 from above query
SELECT C FROM table2 WHERE A LIKE '%A1' AND B LIKE '%B1'
UPDATE table1 SET C=C WHERE A=A1 AND B=B1
Unfortunately, the nature of the problem is that parts of values A1 and B1 are
in table2 columns, so I cannot use JOIN statements.
There are about 0.15 million unique updates which have to be made, and this
will affect the 1.7 million records.
I have built indexes on columns A1, B1 in table 1 and table 2 respectively.
I wrote a simple Python script to do the above, but it is far too slow: it
has run for about 15 hours now and only a quarter of the work has been done.
How do I optimize the queries in MySQL? The tables use InnoDB.
Answer: After a bit of research and experimentation, I found that indexes are not used
for leading-wildcard queries (queries of the form `LIKE '%value'`), so any
effort to optimize through improved querying was futile.
Fortunately for me, I knew the most important variants that would fall under
the leading-wildcard queries (the ones that covered the largest number of
records), and I searched for them directly, avoiding the LIKE clause. After
most of the records were covered, I copied the rest of the records into
another table and used the LIKE clause to achieve my goals.
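A minimal sketch of that direct-match idea in Python, assuming the known
(A, B, C) triples have already been collected (the connection details are
hypothetical):
    import MySQLdb
    con = MySQLdb.connect(host='localhost', user='me', passwd='secret', db='mydb')
    cur = con.cursor()
    # known_variants: full (A, B) values mapped to the C value found in table2
    known_variants = [('valueA1', 'valueB1', 'valueC1'),
                      ('valueA2', 'valueB2', 'valueC2')]
    # Exact matches can use the indexes on A and B, unlike LIKE '%...' queries.
    cur.executemany("UPDATE table1 SET C = %s WHERE A = %s AND B = %s",
                    [(c, a, b) for a, b, c in known_variants])
    con.commit()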
Thanks to everyone who helped.
|
Iterative procedure for a Binary tree post order traversal
Question: I did recursive procedure for binary tree post order traversal in python. This
is the code.
from collections import namedtuple
from sys import stdout
Node = namedtuple('Node', 'data, left, right')
tree = Node(1,
Node(2,
Node(4,
Node(7, None, None),
None),
Node(5, None, None)),
Node(3,
Node(6,
Node(8, None, None),
Node(9, None, None)),
None))
def printwithspace(i):
stdout.write("%i " % i)
def postorder(node, visitor = printwithspace):
if node:
print "%d-->L"%node.data
postorder(node.left, visitor)
print "%d-->R"%node.data
postorder(node.right, visitor)
print "Root--%d"%node.data
else:
print "Null"
stdout.write('\n postorder: ')
postorder(tree)
stdout.write('\n')
Now, I want to do an iterative procedure for binary tree post order traversal
in PYTHON. Can anyone help with this?
Thanks in advance.
Answer: The following code should work. Basically, you do a depth first search with a
stack for the nodes. Additionally, you have a second stack which parallelly
stores whether a node has been expanded already.
def postorder_iteratively(node):
stack = [node]
expanded = [False]
while stack:
while stack and not stack[-1]: #remove "non-existent" nodes from the top
stack = stack[:-1]
expanded = expanded[:-1]
if stack and not expanded[-1]: #expand node
stack += [stack[-1].right, stack[-1].left]
expanded[-1] = True
expanded += [False, False]
elif stack and expanded[-1]: #if node has been expanded already, print it
print stack[-1].data
stack = stack[:-1]
expanded = expanded[:-1]
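Running it on the `tree` defined in the question prints the nodes in
postorder, matching the recursive version:
    >>> postorder_iteratively(tree)
    7
    4
    5
    2
    8
    9
    6
    3
    1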
|
Python ImportError when attempting to import sqlite3 module
Question: I am trying to cross compile Python 2.7.3 for an arm based embedded device. I
have managed to compile it successfully (based on these instructions:
<http://randomsplat.com/id5-cross-compiling-python-for-embedded-linux.html>)
and all of the tests pass on the target device so I'm confident that the build
process works. I've cross compiled sqlite3 (version 3.8.5) and included it in
the python cross compile process which it seems to pick up fine (it is no
longer listed in the modules which were not found at the end of the build
process).
I'm having difficulty actually trying to import the sqlite3 library on the
target device, I get the error listed below (python is running with the -v
flag).
Python 2.7.3 (default, Jul 7 2014, 19:06:12)
[GCC 3.4.6] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
import sqlite3 # directory /mnt/card/arm-python/lib/python2.7/sqlite3
# /mnt/card/arm-python/lib/python2.7/sqlite3/__init__.pyc matches /mnt/card/arm-python/lib/python2.7/sqlite3/__init__.py
import sqlite3 # precompiled from /mnt/card/arm-python/lib/python2.7/sqlite3/__init__.pyc
# /mnt/card/arm-python/lib/python2.7/sqlite3/dbapi2.pyc matches /mnt/card/arm-python/lib/python2.7/sqlite3/dbapi2.py
import sqlite3.dbapi2 # precompiled from /mnt/card/arm-python/lib/python2.7/sqlite3/dbapi2.pyc
dlopen("/mnt/card/arm-python/lib/python2.7/lib-dynload/datetime.so", 2);
import datetime # dynamically loaded from /mnt/card/arm-python/lib/python2.7/lib-dynload/datetime.so
dlopen("/mnt/card/arm-python/lib/python2.7/lib-dynload/time.so", 2);
import time # dynamically loaded from /mnt/card/arm-python/lib/python2.7/lib-dynload/time.so
dlopen("/mnt/card/arm-python/lib/python2.7/lib-dynload/_sqlite3.so", 2);
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mnt/card/arm-python/lib/python2.7/sqlite3/__init__.py", line 24, in <module>
from dbapi2 import *
File "/mnt/card/arm-python/lib/python2.7/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: File not found
It seems to be complaining about a "file not found" but I've been through all
of the paths listed in the output an all of the files seem to exist. Is there
anything I can do to diagnose this problem further?
Answer: I've managed to get it working, although I don't think I fully understand
what's going on. I'm not that familiar with compiling C/C++ code and how
static/shared libraries and linking work, so maybe someone can shed some light
on what's actually happening here. Anyway, here's how I resolved it.
First I ran strace (Linux debugging utility) on the python process:
strace /mnt/card/arm-python/bin/python
This spits out a load of output as the python process starts, once it's
settled down I tried to import the sqlite library:
import sqlite3
This will then spit out a load more output, most of it relating to opening the
files involved in the module you are importing. I then noticed it managed to
open `_sqlite3.so`, but shortly afterwards it tried to load `libsqlite3.so`,
which wasn't found on the library paths ($LD_LIBRARY_PATH).
open("/mnt/card/arm-python/lib/python2.7/lib-dynload/_sqlite3.so", O_RDONLY|O_LARGEFILE) = 5
...
open("/lib/libsqlite3.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/lib/libsqlite3.so.0", O_RDONLY) = -1 ENOENT (No such file or directory)
I copied `libsqlite3.so.0.8.6` from the /lib directory within the cross
compiled sqlite library on my build machine to `/mnt/card` on the embedded arm
device and renamed it to `libsqlite3.so.0`. I then added `/mnt/card` to the
$LD_LIBRARY_PATH as the existing locations in that path reside on the read-
only filesystem.
I then tried to import sqlite3 again and it all seems to be working fine. So
what is the role of the stuff in `lib-dynload`, and what is the role of
`libsqlite3.so.0`?
|
back-and-forth unix domain sockets lock
Question: I am writing two programs, one in C++ and the other in Python, that communicate
with each other using unix domain sockets. What I am trying to do is have the
C++ code send a number to the Python code, which in turn sends another number
back to C++. This goes on until the C++ code runs out of numbers to send and
the execution stops. Below are my two programs. I can't seem to run them past
the first iteration of the loop.
I run Python first:
_python code.py /tmp/1 /tmp/2_
Then I run the c++ code:
_./code /tmp/1 /tmp/2_
Here is the output:
## C++ output:
sent 0
Listening
Connection successful
received 5
sent 1
Listening
## Python output:
listening ...
received (0,)
>5
sent 5
listening ...
## C++ Code:
static int connFd;
int main(int argc, char* argv[])
{
int recv_sock,
send_sock;
struct sockaddr_un server, client;
///////////////////////////////////////////
//
// setup send
//
///////////////////////////////////////////
/* Create socket on which to send. */
send_sock = socket(AF_UNIX, SOCK_STREAM, 0);
if (send_sock < 0)
{
perror("opening unix socket");
exit(1);
}
/* Construct name of socket to send to. */
client.sun_family = AF_UNIX;
strcpy(client.sun_path, argv[1]);
if (connect(send_sock, (struct sockaddr *) &client, sizeof(struct sockaddr_un)) < 0)
{
close(send_sock);
perror("connecting stream socket");
exit(1);
}
///////////////////////////////////////////
//
// setup recv
//
///////////////////////////////////////////
recv_sock = socket(AF_UNIX, SOCK_STREAM, 0);
if(recv_sock< 0)
{
cerr << "Cannot open socket" << endl;
return 0;
}
bzero((char*) &server, sizeof(server));
server.sun_family = AF_UNIX;
strcpy(server.sun_path, argv[2]);
//bind socket
if(bind(recv_sock, (struct sockaddr *)&server, sizeof(server)) < 0)
{
cerr << "Cannot bind" << endl;
return 0;
}
listen(recv_sock, 10);
int X;
for (int i = 0; i < 10; i++)
{
write(send_sock, &i, sizeof(i));
cout << "sent " << i << endl;
cout << "Listening" << endl;
connFd = accept(recv_sock, 0, 0);
if (connFd < 0)
{
cerr << "Cannot accept connection" << endl;
return 0;
}
else
{
cout << "Connection successful" << endl;
read(connFd, &X, sizeof(X));
cout << "received " << X << endl;
}
usleep(2000000);
}
close(send_sock);
close(recv_sock);
unlink(argv[2]);
unlink(argv[1]);
return 0;
}
## Python Code:
import socket,os,struct, glob, sys
import random
send_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
recv_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
os.remove(sys.argv[1])
except OSError:
pass
recv_socket.bind(sys.argv[1])
recv_socket.listen(10)
while 1:
print "listening ..."
conn, addr = recv_socket.accept()
data = conn.recv(4)
p = struct.unpack('i',data)
print 'received ', p
if p is '9':
break
l = int(raw_input(">"))
a = struct.pack('i', l)
send_socket.connect(sys.argv[2])
send_socket.sendall(a)
print 'sent ', l
send_socket.close()
conn.close()
recv_socket.close()
What am I doing wrong in this approach? Do I need to use threads?
Thanks
Answer: You handle the send and receive sockets differently in your C++ code: the send
socket is connected once at the start of the program, whereas the receive
socket accepts a new connection at each iteration.
Your current Python implementation accepts a new connection on `recv_socket`
and connects `send_socket` at each iteration which explains the issue that you
are facing.
The most efficient fix would be to connect each socket once prior to the loop
unless you have a good reason to open a new connection at each iteration. Here
are the corresponding code listings:
# C++
static int connFd;
int main(int argc, char *argv[]) {
int recv_sock, send_sock;
struct sockaddr_un server, client;
///////////////////////////////////////////
//
// setup send
//
///////////////////////////////////////////
/* Create socket on which to send. */
send_sock = socket(AF_UNIX, SOCK_STREAM, 0);
if (send_sock < 0) {
perror("opening unix socket");
exit(1);
}
/* Construct name of socket to send to. */
client.sun_family = AF_UNIX;
strcpy(client.sun_path, argv[1]);
if (connect(send_sock, (struct sockaddr *)&client,
sizeof(struct sockaddr_un)) < 0) {
close(send_sock);
perror("connecting stream socket");
exit(1);
}
///////////////////////////////////////////
//
// setup recv
//
///////////////////////////////////////////
recv_sock = socket(AF_UNIX, SOCK_STREAM, 0);
if (recv_sock < 0) {
cerr << "Cannot open socket" << endl;
return 0;
}
bzero((char *)&server, sizeof(server));
server.sun_family = AF_UNIX;
strcpy(server.sun_path, argv[2]);
// bind socket
if (::bind(recv_sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
cerr << "Cannot bind" << endl;
return 0;
}
listen(recv_sock, 10);
connFd = accept(recv_sock, 0, 0);
if (connFd < 0) {
cerr << "Cannot accept connection" << endl;
return 0;
} else {
cout << "Connection successful" << endl;
}
int X;
for (int i = 0; i < 10; i++) {
write(send_sock, &i, sizeof(i));
cout << "sent " << i << endl;
cout << "Listening" << endl;
read(connFd, &X, sizeof(X));
cout << "received " << X << endl;
usleep(2000000);
}
close(send_sock);
close(recv_sock);
unlink(argv[2]);
unlink(argv[1]);
return 0;
}
# Python
recv_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
send_socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
os.remove(sys.argv[1])
except OSError:
pass
recv_socket.bind(sys.argv[1])
recv_socket.listen(10)
conn, addr = recv_socket.accept()
send_socket.connect(sys.argv[2])
while 1:
print "listening ..."
data = conn.recv(4)
p = struct.unpack('i',data)
print 'received ', p
        if p[0] == 9:  # struct.unpack returns a tuple of ints; compare its first element
            break
l = int(raw_input(">"))
a = struct.pack('i', l)
send_socket.sendall(a)
print 'sent ', l
send_socket.close()
conn.close()
recv_socket.close()
|
Unicode and urllib.open
Question: I am creating an application in Python that can parse weather data from
[yr.no](http://yr.no "yr.no"). It works fine with regular ASCII strings, but
fails when I use unicode.
def GetYRNOWeatherData(country, province, place):
#Parse the XML file
wtree = ET.parse(urllib.urlopen("http://www.yr.no/place/" + string.replace(country, ' ', '_').encode('utf-8') + "/" + string.replace(province, ' ', '_').encode('utf-8') + "/" + string.replace(place, ' ', '_').encode('utf-8') + "/forecast.xml"))
For example, when I try
GetYRNOWeatherData("France", "Île-de-France", "Paris")
I get this error
'charmap' codec can't encode character u'\xce' in position 0: character maps to <undefined>
Is it true that urllib doesn't handle unicode very well? Since I am using
Tkinter as a frontend to this function, would that be the source of the
problem (does the Tkinter Entry widget handle unicode well?)
Answer: You can handle this by keeping every string as a `unicode` right up until you
actually make the `urllib.urlopen` request, at which point you `encode` to
`utf-8`:
#!/usr/bin/python
# -*- coding: utf-8 -*-
# This import makes all literal strings in the file default to
# type 'unicode' rather than type 'str'. You don't need to use this,
# but you'd need to do u"France" instead of just "France" below, and
# everywhere else you have a string literal.
from __future__ import unicode_literals
import urllib
import xml.etree.ElementTree as ET
def do_format(*args):
ret = []
for arg in args:
ret.append(arg.replace(" ", "_"))
return ret
def GetYRNOWeatherData(country, province, place):
country, province, place = do_format(country, province, place)
url = "http://www.yr.no/place/{}/{}/{}/forecast.xml".format(country, province, place)
wtree = ET.parse(urllib.urlopen(url.encode('utf-8')))
return wtree
if __name__ == "__main__":
GetYRNOWeatherData("France", "Île-de-France", "Paris")
|
Python: questions about format in SVM coding
Question: I want to use SVM to do supervised machine learning. My project: given several
of Obama's speeches and several of Romney's speeches, the classifier should
decide which speaker gave a speech when we input an unknown one.
The code on the site is written like this: SVC, NuSVC and LinearSVC take as
input two arrays: an array **X** of size **[n_samples, n_features]** holding
the training samples, and an array **Y** of **integer values**, **size
[n_samples]**, holding the class labels for the training samples:
>>> from sklearn import svm
>>> X = [[0, 0], [1, 1]]
>>> y = [0, 1]
>>> clf = svm.SVC()
>>> clf.fit(X, y)
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3,
gamma=0.0, kernel='rbf', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False)
After being fitted, the model can then be used to predict new values:
>>> clf.predict([[2., 2.]])
array([1])
**My questions:** **1. In my project, in**
    X = [[0, 0], [1, 1]]
should I replace the first number in each square bracket with the label 'Obama'
or 'Romney'?
**2. And how about**
    y = [0, 1]
Should each component be replaced by 'Obama' or 'Romney'? But it should be an
integer based on the code annotation above.
**3. And about the content of clf.predict():**
    clf.predict([[2., 2.]])
Shouldn't this just be the extracted features of your input, from which the
classifier decides which category (Obama or Romney) it belongs to? Why are
there two components?
Answer: Question #1: No. Your X should contain the information you want to use to
predict who gave the speech. Presumably this will be dervied somehow from the
text of the speech. Take a look at [the text feature
extraction](http://scikit-
learn.org/stable/modules/feature_extraction.html#text-feature-extraction) for
some ideas.
Question #2: Yes, because y is what you want to predict, and you want to
predict whether it was Obama or Romney.
Question #3: There are two components because in that example the input has
two features. Your data may have any number of features. If you're using text
data to predict something, you will usually have a lot of features (in the
simplest case, one feature per distinct word in the texts, although you may
trim this down by eliminating some words, like frequent function words such as
"the").
|
id3demux "streaming task paused, reason not-linked (-1)" on certain MP3s
Question: I'm creating a player in Python-GStreamer, on a pretty dated GStreamer
0.10.32, like this:
import pygst
pygst.require("0.10")
import gst
import gobject
self.__player = gst.parse_launch(
'filesrc name="source" location="/file/here.mp3" '
'! audio/mpeg, mpegversion=1, layer=3 '
'! ffdec_mp3 '
'! audioconvert ! audioresample ! volume name="vol" '
'! alsasink name="sink" sync=false')
It works fine, but I never get a tags message from the player's bus. I do need
id3 tags. So I'm replacing the caps filter (`audio/mpeg, mpegversion=1,
layer=3`) with `id3demux`, and an error appears on certain MP3s: "`streaming
task paused, reason not-linked (-1)`".
Putting `identity` or `queue` in front and linking to them doesn't help with
`id3demux`.
For some reason, `mad` element is not available on my platform.
Why won't my second replacement work, or is there another way to get id3 tags
from the stream?
**EDIT:** Apparently, this is caused by specific files. No idea yet what is so
specific about those MP3s. This also happens when I simply test the pipeline
with `gst-launch`.
With GST_DEBUG=2, I'm getting:
0:00:00.046048767 32720 0x22388a0 WARN tagdemux gsttagdemux.c:680:gst_tag_demux_chain:<id3demux0> Downstream did not handle newsegment event as it should
0:00:00.046096615 32720 0x22388a0 WARN basesrc gstbasesrc.c:2625:gst_base_src_loop:<source> error: Internal data flow error.
0:00:00.046106087 32720 0x22388a0 WARN basesrc gstbasesrc.c:2625:gst_base_src_loop:<source> error: streaming task paused, reason not-linked (-1)
Replacing `id3demux` with a caps filter back helps, but I never get the tags
then.
Answer: I ended up resorting to `playbin2`. It manages to build a working pipeline
that does send the tags message, somehow.
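A minimal sketch of what that looks like in pygst 0.10, assuming a local file
URI and an `on_tag` handler (both illustrative):
    player = gst.element_factory_make('playbin2', 'player')
    player.set_property('uri', 'file:///file/here.mp3')
    bus = player.get_bus()
    bus.add_signal_watch()
    bus.connect('message::tag', on_tag)  # on_tag(bus, message) can call message.parse_tag()
    player.set_state(gst.STATE_PLAYING)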
|
how to run a set of Python unit tests
Question: I am running a set of unit tests using a Bash script. What is a more Pythonic
way of doing this generally?
Assuming I cannot change the unit tests, what would be the most Pythonic way
of doing this?
The Bash script to run all of the tests is as follows:
#!/bin/bash
function PJTUnitTests () {
exitCode=0
for test in $(ls PACKAGE_DIRECTORY/test/test_transform.py PACKAGE_DIRECTORY/test/test_trf*.py); do
name=$(basename $test)
echo "running ${name}"
${test} &> ${name}.test
if [ $? != "0" ]; then
echo "$(date) "${test}" failed" | tee -a test.fail
exitCode=1
else
echo "$(date) ${test} ok" | tee -a test.ok
fi
done
if [ $exitCode != "0" ]; then
echo "At least one test failed -- see summary file test.fail for details."
else
echo "All tests passed."
fi
}
PJTUnitTests
A representative unit test is the following:
import json
import subprocess
import os
import os.path
import sys
import unittest
from PACKAGE.MODULE1 import msg
class Echotest(unittest.TestCase):
def test_runEcho(self):
cmd = ['Echo_1.py']
cmd.extend(['--testInt', '1234'])
cmd.extend(['--testFloat', '-1.212'])
cmd.extend(['--testIntList', '1,2,3,4,5,6'])
cmd.extend(['--testSubstepList', 'all:juice', 'jane:apple', 'bob:orange', 'alice:pear'])
cmd.extend(['--testSubstepInt', 'all:34', 'jane:1', 'bob:2', 'alice:-3'])
cmd.extend(['--testSubstepBool', 'all:True', 'jane:false', 'bob:tRuE', 'alice:FaLse'])
msg.info('Will run this transform: {0}'.format(cmd))
p = subprocess.Popen(cmd, shell = False, stdout = subprocess.PIPE, stderr = subprocess.STDOUT, bufsize = 1)
while p.poll() is None:
line = p.stdout.readline()
sys.stdout.write(line)
# Hoover up remaining buffered output lines.
for line in p.stdout:
sys.stdout.write(line)
self.assertEqual(p.returncode, 0)
# Now load metadata and test a few important values.
with open('jobReport.json') as jr:
md = json.load(jr)
self.assertEqual(isinstance(md, dict), True)
if __name__ == '__main__':
unittest.main()
Answer: I think the most Pythonic approach would be to use a testing framework; nose
and py.test are the most popular (IMHO). So instead of the bash script, pass
control to the framework you choose, and you will be more Pythonic, I guess.
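For a pure-stdlib route, a minimal sketch using unittest's test discovery (the
directory and pattern mirror part of the bash glob; adjust them to match all
of your test files):
    import sys
    import unittest
    loader = unittest.TestLoader()
    suite = loader.discover('PACKAGE_DIRECTORY/test', pattern='test_trf*.py')
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(0 if result.wasSuccessful() else 1)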
|
jinja2.exceptions.UndefinedError: 'function' is undefined
Question: I am running a Flask server on nginx + uwsgi. When I run just the Flask server
via `python server.py`, I am able to use the `id_encode` function in my jinja2
templates with no errors thrown.
However, when I launch server.py via
`uwsgi --socket 0.0.0.0:8002 --module server --callable app`
it crashes, saying that it was unable to find the function `id_encode`:
`jinja2.exceptions.UndefinedError: 'id_encode' is undefined`
I am declaring it via:
if __name__ == '__main__':
app.jinja_env.globals.update(id_encode=id_encode)
app.run(host=host,port=5000, debug=True)
What's causing this problem and how can I make the function available?
Answer: The issue is that the `__main__` block will only get executed if the script is
_run_ as a top level script. uwsgi _imports_ your module and so the `__main__`
block is never run. Move your
`app.jinja_env.globals.update(id_encode=id_encode)` outside of the `__main__`
block and everything should work correctly.
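In other words, a sketch of the fix (everything else in server.py stays the
same):
    app.jinja_env.globals.update(id_encode=id_encode)
    if __name__ == '__main__':
        app.run(host=host, port=5000, debug=True)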
|
cxfreeze icon error python34
Question: I have got a Python script, `gorsel.py`. I wanted to convert it to an exe
with `setup.py`, but I get an icon error.
My setup code:
* * *
    import sys
    from cx_Freeze import setup, Executable
    build_exe_options = {"packages": ["os"], "excludes": ["Tkinter"]}
    base = None
    if sys.platform == "win32":
        base = "Win32GUI"
    setup( name = "sifreleme",
        version = "0.1",
        description = "My GUI application!",
        options = {"build_exe": build_exe_options},
        executables = [Executable("gorsel.py", base=base)])
    try:
        import tkinter
    except ImportError:
        import Tkinter as tkinter
* * *
How can I fix my `setup.py` script? Thank you
Answer: Add this to build_exe_options: `"icon": "path/to/icon.ico"`
|
returning array from function in javascript
Question: so I come from a heavy python background, and I'm trying to wrap my head
around javascript. Here I have a function that returns an array of track IDs
for soundcloud songs by the artist 'v-2-followers'. How would I go about
assigning the output of SC.get(stuff) to a variable to reuse the track list in
another function. I'm sure I'm missing something fundamental. I'm less looking
for an answer that explains how to do this, but more **why** it's done like
that.
That said I would also very much appreciate the how. :)
(function() {
SC.initialize({
        client_id: '__CLIENTID__'
});
// Would like to set a variable equal to the output of
SC.get('/tracks', { q: 'v-2-followers' }, function(tracks) {
trackIdList = [];
tracks.forEach(function(track){
trackIdList.push(track.id);
});
return trackIdList;
});
// And use the variable here.
SC.stream('/tracks/'+trackIdList[Math.floor(Math.random() * myArray.length)], function(sound) {
sound.play();
sound.pause();
$('#fabrizio').hover(function(e){
sound.resume();
}, function(e){
sound.pause();
});
});
})();
I can see that I'm missing something fundamental about variable assignment and
scope, or function callbacks here. I've exhausted myself skimming docs on the
subject. If anyone can tell me how to do this, and more importantly, why it's
done that way, for future reference.
Answer: You have `trackIdList` as a global variable because it is not created using
`var`. So as it is, you can already access it from any other function. If you
wanted to limit its scope to just the outer function, add `var trackIdList;`
as the first line of your function. You should be declaring variables with
`var` everywhere in order to limit their scope.
(function() {
var trackIdList;
...
})();
Further reading: [JavaScript Variable
Scope](http://stackoverflow.com/questions/500431/javascript-variable-scope)
The other concept you need to understand is regarding asynchronous execution
and callbacks in JavaScript. Your code that populates `trackIdList` is
contained within a callback function which is (most likely) called _after_
your call to `SC.stream()`. If `SC.stream()` depends on the value of
`trackIdList`, it should be called from the callback function.
It may help to illustrate what's going on by separating out your callback
functions.
(function () {
var trackIdList = [];
SC.initialize({
client_id: '__CLIENTID__'
});
SC.get('/tracks', { q: 'v-2-followers' }, processTracks);
var randomIndex = Math.floor(Math.random() * myArray.length);
SC.stream('/tracks/' + trackIdList[randomIndex], processSound);
function processTracks(tracks) {
tracks.forEach(function (track) {
trackIdList.push(track.id);
});
}
function processSound(sound) {
sound.play();
sound.pause();
$('#fabrizio').hover(function (e) {
sound.resume();
}, function (e) {
sound.pause();
});
}
})();
`SC.get()` makes an asynchronous request and returns immediately. Then
`SC.stream()` is called _without waiting for the request to return_.
`processTracks()` isn't called until the request comes back. The trouble is
that `SC.stream()` depends on `processTracks()`, but is called immediately. To
fix this, call `SC.stream()` from the callback function of `SC.get()`:
(function () {
SC.initialize({
client_id: '__CLIENTID__'
});
SC.get('/tracks', { q: 'v-2-followers' }, processTracks);
function processTracks(tracks) {
var trackIdList = [];
tracks.forEach(function (track) {
trackIdList.push(track.id);
});
        var randomIndex = Math.floor(Math.random() * trackIdList.length);
SC.stream('/tracks/' + trackIdList[randomIndex], processSound);
}
function processSound(sound) {
sound.play();
sound.pause();
$('#fabrizio').hover(function (e) {
sound.resume();
}, function (e) {
sound.pause();
});
}
})();
|
Processing Experimental Measurements in Python
Question: I need to process data from a series of experiments. Each experiment has
several sensor measurements in a 'csv' file, for example:
_experiment1.csv:_
time, sensor1, sensor2, sensor3
    0, 1.3, 4.7, 2.9
    1, 2.8, 7.1, 4.2
.
.
_experiment2.csv_
time, sensor1, sensor3, sensor6
    0, 3.8, 7.1, 2.2
    1, 1.6, 4.1, 14.1
.
.
I need to organize the data so that I can easily compare measurements between
trials. For example, I might want to subtract the values measured by sensor 1
in experiment 1 from those measured by sensor 6 in experiment 2 and plot that
difference as a function of time.
Right now I'm using pandas data frames to work with the data
exp1 = pd.read_csv('experiment1.csv')
exp2 = pd.read_csv('experiment2.csv')
diff = exp1['sensor1'] - exp2['sensor3']
plot(exp1['time'],diff)
This is cumbersome since I need to go in and manually create a new data frame
when I want to add another experiment. Typically I might be working with data
from 50 trials simultaneously and the method outlined does not scale well.
Ideally I'd like to have a script I can run in a directory with an arbitrary
number of data files which would create a data frame for each file that I
could then refer to by experiment designator and sensor number.
Is there some way I can modify the above code to create a data frame for each
file in the directory without manually adding 'pd.read_csv' commands for every
case?
Thanks.
Answer: What you really are interested in is the `os` library -- specifically the
`listdir` function.
But while you're at it, why not just combine all your measurements into one
dataframe? You can add them as a column or as a level of a MultiIndex,
whichever you prefer.
Here's the column approach:
import os
import pandas as pd
directory_contents = os.listdir(your_directory)
experiment_files = [fn for fn in directory_contents if '.csv' in fn]
df = pd.DataFrame()
for filename in experiment_files:
experiment_df = pd.read_csv(filename)
        experiment_df['experiment_name'] = filename
df = pd.concat([df,experiment_df])
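And a sketch of the MultiIndex approach mentioned above, reusing the same
`experiment_files` list:
    frames = {fn: pd.read_csv(fn) for fn in experiment_files}
    df = pd.concat(frames, names=['experiment_name'])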
|
What is the best stemming method in Python?
Question: I tried all the nltk methods for stemming but it gives me weird results with
some words.
Examples
It often cut end of words when it shouldn't do it :
* poodle => poodl
* article articl
or doesn't stem very good :
* easily and easy are not stemmed in the same word
* leaves, grows, fairly are not stemmed
Do you know other stemming libs in python, or a good dictionary?
Thank you
Answer: The results you are getting are (generally) expected for a stemmer in English.
You say you tried "all the nltk methods" but when I try your examples, that
doesn't seem to be the case.
Here are some examples using the PorterStemmer
import nltk
    ps = nltk.stem.PorterStemmer()
ps.stem('grows')
'grow'
ps.stem('leaves')
'leav'
ps.stem('fairly')
'fairli'
The results are 'grow', 'leav' and 'fairli', which, even if they are not what
you wanted, are stemmed versions of the original words.
If we switch to the Snowball stemmer, we have to provide the language as a
parameter.
import nltk
sno = nltk.stem.SnowballStemmer('english')
sno.stem('grows')
'grow'
sno.stem('leaves')
'leav'
sno.stem('fairly')
'fair'
The results are as before for 'grows' and 'leaves' but 'fairly' is stemmed to
'fair'
So in both cases (and there are more than two stemmers available in nltk),
words that you say are not stemmed, in fact, are. The LancasterStemmer will
return 'easy' when provided with 'easily' or 'easy' as input.
Maybe you really wanted a lemmatizer? That would return 'article' and 'poodle'
unchanged.
import nltk
    lemma = nltk.stem.WordNetLemmatizer()
    lemma.lemmatize('article')
    'article'
    lemma.lemmatize('leaves')
'leaf'
|
Package Your Python Django Application into a Reusable Component
Question: I am trying to **package my Django application**, and for that I am following
the **official Django docs**. I have successfully packaged my app, but I have
one problem with my app's requirements.
My app uses other packages too, like **requests** etc. Now if anyone installs
my package in their project, the package will be installed but its
requirements will not be, so it will surely give an ImportError. I don't know
how to tell my package to install its dependencies too; I am sure I have to
define these requirements somewhere, but I don't know where. Or should I
follow the other path: place my requirements file in my package and tell users
(in a readme file) to install all dependencies from that file?
Also, I have one more question: I am installing packages using the command
**python setup.py install**. Is there any other command I can use to install
this package, like **easy_install or pip**? (My package is not on PyPI; can I
still use pip, and if yes, how?)
Answer: In the setup.py file you can list your requirements under `install_requires`,
see an example:
install_requires=[
'Pillow>=2.0',
'django-appconf>=0.6',
],
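For context, a minimal sketch of where this sits in `setup.py` (the package
name and version are placeholders):
    from setuptools import setup, find_packages
    setup(
        name='my-django-app',
        version='0.1',
        packages=find_packages(),
        install_requires=[
            'requests>=2.0',
        ],
    )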
About your second question, yes, you can install a package not published in
PyPI using pip, [check this guide](http://guide.python-
distribute.org/pip.html#installing-from-other-sources)
|