Q_CreationDate: string (length 23)
Title: string (length 11–149)
Question: string (length 25–6.53k)
Answer: string (length 15–5.1k)
Score: float64 (-1 to 1.2)
Is_accepted: bool (2 classes)
N_answers: int64 (1–17)
Q_Id: int64 (0–6.76k)
2011-11-19 10:58:31.493
Predicting Values with k-Means Clustering Algorithm
I'm messing around with machine learning, and I've written a k-means algorithm implementation in Python. It takes two-dimensional data and organises it into clusters. Each data point also has a class value of either a 0 or a 1. What confuses me about the algorithm is how I can then use it to predict some values for another set of two-dimensional data that doesn't have a 0 or a 1, but instead is unknown. For each cluster, should I average the points within it to either a 0 or a 1, so that if an unknown point is closest to that cluster, the unknown point takes on the averaged value? Or is there a smarter method? Cheers!
If you are considering assigning a value based on the average value within the nearest cluster, you are talking about some form of "soft decoder", which estimates not only the correct value of the coordinate but your level of confidence in the estimate. The alternative would be a "hard decoder" where only values of 0 and 1 are legal (occur in the training data set), and the new coordinate would get the median of the values within the nearest cluster. My guess is that you should always assign only a known-valid class value (0 or 1) to each coordinate, and averaging class values is not a valid approach.
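A minimal sketch of the hard-decoder idea in pure Python (the helper names are my own, not from the question's implementation): label each cluster by majority vote over the classes of its training points, then give an unknown point the label of its nearest centroid.

```python
from collections import Counter
import math

def nearest(point, centroids):
    """Index of the centroid closest to point (Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))

def label_clusters(points, classes, centroids):
    """Majority-vote class (0 or 1) for each cluster."""
    votes = [Counter() for _ in centroids]
    for p, c in zip(points, classes):
        votes[nearest(p, centroids)][c] += 1
    return [v.most_common(1)[0][0] for v in votes]

def predict(point, centroids, cluster_labels):
    """Hard decision: an unknown point gets the label of its nearest cluster."""
    return cluster_labels[nearest(point, centroids)]
```

The soft-decoder variant would replace the majority vote with the fraction of 1s in the cluster, giving a confidence value instead of a hard 0/1.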
0.101688
false
1
1,530
2011-11-19 19:24:18.203
How to duplicate a window's clientarea to another window?
I want to create a simple project (the language doesn't really matter, but I prefer C++) which simply takes a window title as its input and duplicates that window's visual part, bit by bit, to a new window, rather like a mirror. As far as I remember there was a Win32 API for this, but I can't remember it, so would you please tell me how I can achieve this? And please tell me whether your answers will work with DirectX/OpenGL applications as well.
You can get the DC of the first window and the DC of the second window, and then do a BitBlt or StretchBlt between them. That should work... I'm not sure about DirectX/OpenGL, but I think it should work there too. In any case, it won't take much time to check.
0
false
1
1,531
2011-11-20 16:35:38.280
How do I inspect a Python's class hierarchy?
Assuming I have a class X, how do I check which is the base class/classes, and their base class/classes etc? I'm using Eclipse with PyDev, and for Java for example you could type CTRL + T on a class' name and see the hierarchy, like: java.lang.Object java.lang.Number java.lang.Integer Is it possible for Python? If not possible in Eclipse PyDev, where can I find this information?
Hit f4 with class name highlighted to open hierarchy view.
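Outside the IDE, Python can report the hierarchy itself: __bases__ lists a class's direct base classes, and inspect.getmro gives the full method resolution order. The classes here are just illustrative.

```python
import inspect

class A: pass
class B(A): pass
class C(B): pass

print(C.__bases__)             # the tuple of direct base classes of C

for cls in inspect.getmro(C):  # full hierarchy, from C up to object
    print(cls.__name__)
```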
1.2
true
1
1,532
2011-11-21 05:42:01.633
pywinauto with Remote Desktop
I have just started to use the pywinauto module to automate GUI tasks. One of them is to launch the Remote Desktop exe (mstsc.exe) -> log in -> invoke another tool there. I managed to connect to the remote server, but the control gets lost after that: I did not manage to log in. So, the question is how to use pywinauto with Remote Desktop? Has anyone tried this before?
After trying out various things, I figured out that the window which opens after pressing the "Connect" button on mstsc can be found by searching for the window with the title 'XYZ - Remote Desktop', where XYZ is the name of the remote server. I have tested this on WinXP.
0.386912
false
1
1,533
2011-11-21 15:41:18.600
CSound and Python communication
I am currently working on a specialization project on simulating guitar effects with evolutionary algorithms, and want to use Python and CSound to do this. The idea is to generate effect parameters in my algorithm in Python, send them to CSound to apply the filter to the audio file, then send the new audio file back to Python to perform frequency analysis for comparison with the target audio file (this will be done in a loop until the audio file is similar enough to the target, so sending/receiving between CSound and Python will be done a lot). Put briefly: how do I get Python to send data to a CSound (.csd) file, how do I read the data in the .csd file, and how do I send a .wav file from CSound back to Python? It is also preferred that this can work dynamically on its own until the criteria for the audio file are met. Thanks in advance
You can use Csound's python API, so you can run Csound within python and pass values using the software bus. See csound.h. You might also want to use the csPerfThread wrapper class which can schedule messages to and from Csound when it is running. All functionality is available from python.
0.201295
false
2
1,534
2011-11-21 15:41:18.600
CSound and Python communication
I am currently working on a specialization project on simulating guitar effects with evolutionary algorithms, and want to use Python and CSound to do this. The idea is to generate effect parameters in my algorithm in Python, send them to CSound to apply the filter to the audio file, then send the new audio file back to Python to perform frequency analysis for comparison with the target audio file (this will be done in a loop until the audio file is similar enough to the target, so sending/receiving between CSound and Python will be done a lot). Put briefly: how do I get Python to send data to a CSound (.csd) file, how do I read the data in the .csd file, and how do I send a .wav file from CSound back to Python? It is also preferred that this can work dynamically on its own until the criteria for the audio file are met. Thanks in advance
Sending parameter values from Python to Csound could be done using the OSC protocol. Sending audio from Csound to Python could be done by routing JACK channels between the two applications.
0.201295
false
2
1,534
2011-11-23 20:06:01.947
How to know/change current directory in Python shell?
I am using Python 3.2 on Windows 7. When I open the Python shell, how can I know what the current directory is and how can I change it to another directory where my modules are?
If you import os, you can use os.getcwd() to get the current working directory, and os.chdir() to change to another directory.
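For example (the home directory is just an illustrative target):

```python
import os

print(os.getcwd())                   # show the current working directory

os.chdir(os.path.expanduser("~"))    # change to another directory (home here)
print(os.getcwd())                   # now reports the new directory
```

If the goal is importing modules that live in another directory, appending that directory to sys.path also works, without changing the working directory at all.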
0.173164
false
1
1,535
2011-11-24 00:27:42.660
Accessing Django Runserver on Shared Hosting
I'm developing a Django application on a shared hosting service (Hostmonster) and am, of course, unable to access the runserver on the default localhost IP of 127.0.0.1:8000 over Firefox. The Django Project site's documentation details how to set up remote access to the runserver, but I'm not having any success with that. Setting the runserver to 0.0.0.0:8000 leaves it inaccessible. Though I figured it wouldn't work, I tried to configure the runserver to my home IP address. That gave me a "That IP address can't be assigned to" error, as I'd expected. So I tried configuring it to my hosted IP, the one through which I SSH in the first place. That set up properly, but I was still unable to access the address via Firefox. When I plug in the IP address on its own, I just get a Hostmonster error page. When I affix the port number, the connection times out. When I plug in the IP, port number and the /admin to access the Django admin page I've created, I also time out.
I'm betting that port 8000 is blocked. That explains the timeouts you mention in the last couple sentences: the firewall is set to simply drop the packets and not return any connection refusal responses. You're going to have to ask your hosting company if there's a way around this, but there might not be one. At the least, they'll have to open a non-root port (8000 or something else over 1023), but the OS can't tell when it's you opening the port or something else, so it'll be a potential security hole (e.g. an intruder can set up something to listen for commands on that port, just like you are). runserver wasn't really designed to be run on the production box. It's designed to be run on your development machine, with a small test database or something. This lets you get most of the bugs out. Then you push your code to a beta server, configured with the real server apps (e.g. apache on port 80) and databases and such, to do the heavy testing (make sure there's a filter for what IPs can connect, at least). Then you push to production from there. Or not; there are lots of ways to do this.
0
false
2
1,536
2011-11-24 00:27:42.660
Accessing Django Runserver on Shared Hosting
I'm developing a Django application on a shared hosting service (Hostmonster) and am, of course, unable to access the runserver on the default localhost IP of 127.0.0.1:8000 over Firefox. The Django Project site's documentation details how to set up remote access to the runserver, but I'm not having any success with that. Setting the runserver to 0.0.0.0:8000 leaves it inaccessible. Though I figured it wouldn't work, I tried to configure the runserver to my home IP address. That gave me a "That IP address can't be assigned to" error, as I'd expected. So I tried configuring it to my hosted IP, the one through which I SSH in the first place. That set up properly, but I was still unable to access the address via Firefox. When I plug in the IP address on its own, I just get a Hostmonster error page. When I affix the port number, the connection times out. When I plug in the IP, port number and the /admin to access the Django admin page I've created, I also time out.
First, a webserver typically has at least two "interfaces", each with one or more IPs. The "loopback" interface will have the IP 127.0.0.1, and is ONLY accessible from the machine running the server. So, running on 127.0.0.1:8000 means that you are telling runserver to be accessible ONLY from that server itself, on port 8000. That's secure, but a little rough for testing. In order to see the result in a web browser you would need to use an SSH tunnel with port forwarding. (I'd explain how to do that, but honestly, it won't solve your real issue. But I'll come back to that.) Running on 0.0.0.0:8000 means that you are telling runserver to be accessible from the internet - which is probably what you want. If that's not working, then it probably means they're firewalling the port. You could contact support and ask them to open a hole, OR use an SSH tunnel, but at this point I have to ask: what are you trying to achieve? You shouldn't use runserver for production. Use runserver on your local machine for testing, then deploy to Hostmonster. (Apparently they support Django via FastCGI, according to their website.) Don't use runserver on Hostmonster; it won't do what you want.
0
false
2
1,536
2011-11-24 13:16:38.747
Virtual memory address management in c#
I'm working on a Monte Carlo pricer and I need to improve the efficiency of the engine. Monte Carlo paths are created by a third-party library (in C++). Pricing is done in IronPython (a script created by the end user). Everything else is driven by a C# application. The pricing process is as follows: the C# application requests the paths and collects them; the C# application pushes the paths to the script, which prices them and returns the values; the C# application displays the result to the end user. The number and size of the paths collected are known in advance. I have 2 solutions, each with some advantages and drawbacks: (1) request path generation and, for each path, ask the script to return the result, finally aggregating the results once all paths are processed; (2) request path generation, collect all of the paths, request the script to process all of them at once, and return the final price. The first solution works fine in all scenarios, but as the number of paths requested increases, the performance decreases (I think it's due to the multiple calls into IronPython). The second solution is faster but can hit an "out of memory" exception (I think there is not enough virtual memory addressing space) if the number of paths requested is too large. I chose the middle ground: process a bunch of paths, then aggregate the prices. What I want now is to increase the performance further by knowing in advance how many paths I can process without hitting the "out of memory" exception. I did the math, so I know in advance the size (in memory) of a path for a given request. However, I'm quite sure it's not a memory problem but more a virtual memory addressing issue. So all this text is summarized by the following 2 questions: Is it possible to know in advance how much virtual memory address space my process will need to store n instances of a class (size in memory and structure are known)? Is it possible to know how much virtual memory address space is still available for my process? BTW, I'm working on a 32-bit computer. Thanks in advance for the help
Finding out how much memory an object takes in .NET is a pretty difficult task. I've hit the same problem several times. There are some imperfect methods, but none are very precise. My suggestion is to get some estimate of how much a path will take, and then pass a bunch of them leaving a good margin of safety. Even if you're processing them just 10 at a time, you've reduced the overhead 10 times already. You can even make the margin configurable and then tweak it until you strike a good balance. An even more elegant solution would be to run the whole thing in another process and if it hits an OutOfMemoryException, restart the calculation with less items (and adjust the margin accordingly). However, if you have so much data that it runs out of memory, then it might be a bit slow to pass it across two processes (which will also duplicate the data). Could it be that the memory overflow is because of some imperfections in the path processor? Memory leaks maybe? Those are possible both in C++ and .NET.
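A sketch of that middle-ground batching, with the safety margin exposed as a configurable batch size. price_batch here is a hypothetical stand-in for the call into the IronPython script; the aggregation (a running average) is also just illustrative.

```python
def price_in_batches(paths, price_batch, batch_size=1000):
    """Aggregate prices over fixed-size batches to bound peak memory.

    price_batch: callable taking a list of paths, returning a list of prices.
    batch_size:  tune this down until out-of-memory errors stop, with margin.
    """
    total, count = 0.0, 0
    for start in range(0, len(paths), batch_size):
        batch = paths[start:start + batch_size]      # only one batch in flight
        for price in price_batch(batch):
            total += price
            count += 1
    return total / count if count else 0.0
```

Making batch_size configurable lets you tweak it empirically, which in practice is more reliable than trying to predict .NET object sizes or free address space up front.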
1.2
true
1
1,537
2011-11-24 20:06:28.207
Vector in python
I'm working on this project which deals with vectors in python. But I'm new to python and don't really know how to crack it. Here's the instruction: "Add a constructor to the Vector class. The constructor should take a single argument. If this argument is either an int or a long or an instance of a class derived from one of these, then consider this argument to be the length of the Vector instance. In this case, construct a Vector of the specified length with each element is initialized to 0.0. If the length is negative, raise a ValueError with an appropriate message. If the argument is not considered to be the length, then if the argument is a sequence (such as a list), then initialize with vector with the length and values of the given sequence. If the argument is not used as the length of the vector and if it is not a sequence, then raise a TypeError with an appropriate message. Next implement the __repr__ method to return a string of python code which could be used to initialize the Vector. This string of code should consist of the name of the class followed by an open parenthesis followed by the contents of the vector represented as a list followed by a close parenthesis." I'm not sure how to do the class type checking, as well as how to initialize the vector based on the given object. Could someone please help me with this? Thanks!
Your instructor seems not to "speak Python as a native language". ;) The entire concept for the class is pretty silly; real Python programmers just use the built-in sequence types directly. But then, this sort of thing is normal for academic exercises, sadly... Add a constructor to the Vector class. In Python, the common "this is how you create a new object and say what it's an instance of" stuff is handled internally by default, and then the baby object is passed to the class' initialization method to make it into a "proper" instance, by setting the attributes that new instances of the class should have. We call that method __init__. The constructor should take a single argument. If this argument is either an int or a long or an instance of a class derived from one of these This is tested by using the builtin function isinstance. You can look it up for yourself in the documentation (or try help(isinstance) at the REPL). In this case, construct a Vector of the specified length with each element is initialized to 0.0. In our __init__, we generally just assign the starting values for attributes. The first parameter to __init__ is the new object we're initializing, which we usually call "self" so that people understand what we're doing. The rest of the arguments are whatever was passed when the caller requested an instance. In our case, we're always expecting exactly one argument. It might have different types and different meanings, so we should give it a generic name. When we detect that the generic argument is an integer type with isinstance, we "construct" the vector by setting the appropriate data. We just assign to some attribute of self (call it whatever makes sense), and the value will be... well, what are you going to use to represent the vector's data internally? Hopefully you've already thought about this :) If the length is negative, raise a ValueError with an appropriate message. Oh, good point... 
we should check that before we try to construct our storage. Some of the obvious ways to do it would basically treat a negative number the same as zero. Other ways might raise an exception that we don't get to control. If the argument is not considered to be the length, then if the argument is a sequence (such as a list), then initialize with vector with the length and values of the given sequence. "Sequence" is a much fuzzier concept; lists and tuples and what-not don't have a "sequence" base class, so we can't easily check this with isinstance. (After all, someone could easily invent a new kind of sequence that we didn't think of). The easiest way to check if something is a sequence is to try to create an iterator for it, with the built-in iter function. This will already raise a fairly meaningful TypeError if the thing isn't iterable (try it!), so that makes the error handling easy - we just let it do its thing. Assuming we got an iterator, we can easily create our storage: most sequence types (and I assume you have one of them in mind already, and that one is certainly included) will accept an iterator for their __init__ method and do the obvious thing of copying the sequence data. Next implement the __repr__ method to return a string of python code which could be used to initialize the Vector. This string of code should consist of the name of the class followed by an open parenthesis followed by the contents of the vector represented as a list followed by a close parenthesis." Hopefully this is self-explanatory. Hint: you should be able to simplify this by making use of the storage attribute's own __repr__. Also consider using string formatting to put the string together.
0.16183
false
2
1,538
2011-11-24 20:06:28.207
Vector in python
I'm working on this project which deals with vectors in python. But I'm new to python and don't really know how to crack it. Here's the instruction: "Add a constructor to the Vector class. The constructor should take a single argument. If this argument is either an int or a long or an instance of a class derived from one of these, then consider this argument to be the length of the Vector instance. In this case, construct a Vector of the specified length with each element is initialized to 0.0. If the length is negative, raise a ValueError with an appropriate message. If the argument is not considered to be the length, then if the argument is a sequence (such as a list), then initialize with vector with the length and values of the given sequence. If the argument is not used as the length of the vector and if it is not a sequence, then raise a TypeError with an appropriate message. Next implement the __repr__ method to return a string of python code which could be used to initialize the Vector. This string of code should consist of the name of the class followed by an open parenthesis followed by the contents of the vector represented as a list followed by a close parenthesis." I'm not sure how to do the class type checking, as well as how to initialize the vector based on the given object. Could someone please help me with this? Thanks!
This may or may not be appropriate depending on the homework, but in Python programming it's not very usual to explicitly check the type of an argument and change the behaviour based on that. It's more normal to just try to use the features you expect it to have (possibly catching exceptions if necessary to fall back to other options). In this particular example, a normal Python programmer implementing a Vector that needed to work this way would try using the argument as if it were an integer/long (hint: what happens if you multiply a list by an integer?) to initialize the Vector and if that throws an exception try using it as if it were a sequence, and if that failed as well then you can throw a TypeError. The reason for doing this is that it leaves your class open to working with other objects types people come up with later that aren't integers or sequences but work like them. In particular it's very difficult to comprehensively check whether something is a "sequence", because user-defined classes that can be used as sequences don't have to be instances of any common type you can check. The Vector class itself is quite a good candidate for using to initialize a Vector, for example! But I'm not sure if this is the answer your teacher is expecting. If you haven't learned about exception handling yet, then you're almost certainly not meant to use this approach so please ignore my post. Good luck with your learning!
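A sketch of this EAFP approach in Python 3 (where int covers the old int/long distinction). Using operator.index to test "integer-like" is my choice of mechanism, not something the answer prescribes; the attribute name is also illustrative.

```python
import operator

class Vector:
    def __init__(self, arg):
        try:
            n = operator.index(arg)          # succeeds only for integer-like args
        except TypeError:
            try:
                self._data = list(arg)       # fall back: treat arg as a sequence
            except TypeError:
                raise TypeError("Vector() needs an integer length "
                                "or a sequence") from None
        else:
            if n < 0:
                raise ValueError("Vector length must be non-negative")
            self._data = [0.0] * n

    def __repr__(self):
        # A string of code that would recreate this Vector
        return "Vector(%r)" % (self._data,)
```

Note that anything integer-like (including user-defined types implementing __index__) is accepted as a length, and anything iterable is accepted as a sequence, which is exactly the openness this answer argues for.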
0
false
2
1,538
2011-11-25 13:17:33.497
how do i go over lines in an open text file on Python (2.72)
I have text files with a few thousands words in them (one word in a line). I've written a function which take two words (strings), and checks if one word is an Anagram of the other (that means if the two words contains the same letters, even if in different order). Now I want to go over my huge text file and search for anagrams. My output should be a list which contains tuples of couple of words which are anagrams. The problem is that I have no idea how to go over the words with a for/while loop. Everything I've tried has failed. (I'm clear with the way of doing it, but I just don't know python well enough). edit#1: Assuming I want to go over lines 1 to 100 in the text instead of the whole text, how do I do that?
Load all the words (lines) into a list; since the words are on separate lines, this can be done via readlines() (you will have to use strip() to remove the line ends): words = [s.strip() for s in f.readlines()]. Then, for each word, create its anagrams, use the word list's in operator to check whether an anagram exists, and if it exists, print the pair.
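The steps above can be sketched like this. Instead of generating anagrams and using the in operator, this uses each word's sorted letters as a grouping key, which avoids the quadratic membership checks; that refinement goes beyond what the answer literally describes. The file-reading part (and the "lines 1 to 100" variant) is shown in the trailing comment with a placeholder filename.

```python
from collections import defaultdict

def anagram_pairs(words):
    """Return a list of (word, word) tuples that are anagrams of each other."""
    groups = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)   # same sorted letters = anagrams
    pairs = []
    for group in groups.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                pairs.append((group[i], group[j]))
    return pairs

# Reading the words, one per line ("words.txt" is a placeholder):
# with open("words.txt") as f:
#     words = [line.strip() for line in f]
# To restrict to lines 1 to 100, slice the list: words = words[:100]
```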
0
false
1
1,539
2011-11-25 22:07:31.913
Django design patterns - models with ForeignKey references to multiple classes
I'm working through the design of a Django inventory tracking application, and have hit a snag in the model layout. I have a list of inventoried objects (Assets), which can either exist in a Warehouse or in a Shipment. I want to store different lists of attributes for the two types of locations, e.g.: For Warehouses, I want to store the address, manager, etc. For Shipments, I want to store the carrier, tracking number, etc. Since each Warehouse and Shipment can contain multiple Assets, but each Asset can only be in one place at a time, adding a ForeignKey relationship to the Asset model seems like the way to go. However, since Warehouse and Shipment objects have different data models, I'm not certain how to best do this. One obvious (and somewhat ugly) solution is to create a Location model which includes all of the Shipment and Warehouse attributes and an is_warehouse Boolean attribute, but this strikes me as a bit of a kludge. Are there any cleaner approaches to solving this sort of problem (Or are there any non-Django Python libraries which might be better suited to the problem?)
What about having a generic foreign key on Assets?
1.2
true
2
1,540
2011-11-25 22:07:31.913
Django design patterns - models with ForeignKey references to multiple classes
I'm working through the design of a Django inventory tracking application, and have hit a snag in the model layout. I have a list of inventoried objects (Assets), which can either exist in a Warehouse or in a Shipment. I want to store different lists of attributes for the two types of locations, e.g.: For Warehouses, I want to store the address, manager, etc. For Shipments, I want to store the carrier, tracking number, etc. Since each Warehouse and Shipment can contain multiple Assets, but each Asset can only be in one place at a time, adding a ForeignKey relationship to the Asset model seems like the way to go. However, since Warehouse and Shipment objects have different data models, I'm not certain how to best do this. One obvious (and somewhat ugly) solution is to create a Location model which includes all of the Shipment and Warehouse attributes and an is_warehouse Boolean attribute, but this strikes me as a bit of a kludge. Are there any cleaner approaches to solving this sort of problem (Or are there any non-Django Python libraries which might be better suited to the problem?)
I think it's perfectly reasonable to create a "through" table such as Location, which associates an asset, a content object (foreign key) and a content_type (warehouse or shipment). You could also set a unique constraint on the asset foreign key so that an asset can only exist in one location at a time.
0
false
2
1,540
2011-11-26 22:30:42.270
In laymans terms, what does the Python string format "g" actually mean?
I feel a bit silly for asking what I'm sure is a rather basic question, but I've been learning Python and I'm having difficulty understanding what exactly the "g" and "G" string formats actually do. The documentation has this to say: Floating point format. Uses lowercase exponential format if exponent is less than -4 or not less than precision, decimal format otherwise. I'm sure this is supposed to make sense, but I'm just not getting it. Can someone provide a clearer explanation for this format, and possibly provide some examples of when and how it should be used, vs. just using "e" or "f". Thanks
g and G are similar to the output you would get on a calculator display: if the number is very large or very small, you get a result in scientific notation. For example, 0.000001 gives "1e-06" with g and "1E-06" with G. However, numbers that are neither too small nor too large are displayed simply as decimals: 1000 gives "1000". e always gives the result in exponential format: 1000 gives "1.000000e+03". f always gives the result in decimal format, but it does not do the rounding to significant digits that g and G do: 1000 gives "1000.000000".
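A few concrete comparisons, as produced by CPython's %-formatting (default precision is 6 significant digits for g):

```python
# g switches between decimal and exponential depending on magnitude:
print("%g" % 1000)        # 1000
print("%g" % 0.0001)      # 0.0001   (exponent is exactly -4: still decimal)
print("%g" % 0.00001)     # 1e-05    (exponent below -4: exponential)
print("%G" % 0.00001)     # 1E-05
print("%g" % 123456789)   # 1.23457e+08  (9 digits exceed the precision of 6)

# e and f never switch:
print("%e" % 1000)        # 1.000000e+03
print("%f" % 1000)        # 1000.000000
```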
0.986614
false
1
1,541
2011-11-26 23:13:00.050
Can a program written in Python be AppleScripted?
I want my Python program to be AppleScript-able, just like an Objective C program would be. Is that possible? (Note, this is not about running AppleScript from Python programs, nor about calling Python programs from AppleScript via Unix program invocation. Those are straightforward. I need genuine AppleScriptability of my program's operations.) There is some documentation about how to do this. Python 2.7.2 documentation describes MiniAEFrame, for example, but even a minimal reference to from MiniAEFrame import AEServer, MiniApplication dies with an ImportError and a complaint that a suitable image can't be found / my architecture (x86) not supported. Rut roh! It seems that MiniAEFrame might pertain to the earlier ("Carbon") API set. In other words, obsolete. There's a very nice article about "Using PyObjC for Developing Cocoa Applications with Python" (http://developer.apple.com/cocoa/pyobjc.html). Except it was written in 2005; my recently-updated version of Xcode (4.1) doesn't have any of the options it describes; the Xcode project files it provides blow up in an impressive build failure; and the last PyObjC update appears to have been made 2 years ago. Apple seems to have removed all of the functions that let you build "real" apps in AppleScript or Python, leaving only Objective C. So, what are my options? Is it still possible to build a real, AppleScriptable Mac app using Python? If so, how? (If it matters, what I need AppleScripted is text insertion. I need Dragon Dicate to be able to add text to my app. I'm currently using Tk as its UI framework, but would be happy to use the native Cocoa/Xcode APIs/tools instead, if that would help.)
You can also use the py-aemreceive module from py-appscript. I use that to implement AppleScript support in my Tkinter app.
0
false
1
1,542
2011-11-27 06:45:25.727
How to calculate and test the number of concurrent connections?
A Twisted server supports more than 10,000 concurrent connections. I want to write a test case to check this using Python. What I know so far is just using multiprocessing + threading + greenlet. My question is how to confirm Twisted's support for more than 10,000 concurrent connections. One thing I know is using a logger, but is a logger accurate? Or is there another way to calculate it?
It's easy to support more than 10,000 concurrent connections. What can be harder is supporting so many active connections. The kind of activity you expect to find on your connections is more likely to determine how many you can support at once. What do you expect these connections to be doing? Construct a test which reflects real world behavior and then see how many clients you can support with it. The more accurately your test reflects real world behavior, the more accurate your results will be.
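A minimal sketch of such a test using only the standard library rather than Twisted: start a throwaway server, open many client sockets at once, and count how many connect. As the answer notes, a realistic test would also generate traffic on each connection, and real 10k-connection runs additionally need OS tuning (file descriptor limits and the like).

```python
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(1024)   # block until client sends or closes
        if data:
            self.request.sendall(data)

def count_concurrent(n, host="127.0.0.1"):
    """Open n client connections at once and return how many succeeded."""
    server = socketserver.ThreadingTCPServer((host, 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    clients = []
    try:
        for _ in range(n):
            s = socket.create_connection((host, port), timeout=5)
            clients.append(s)            # keep every socket open concurrently
        return len(clients)
    finally:
        for s in clients:
            s.close()
        server.shutdown()
        server.server_close()
```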
0.673066
false
1
1,543
2011-11-28 02:03:47.933
Draw complex table layout in Python GUI
I need to create a GUI in Python for a client server interaction. The GUI is on the client side for which I want to create complex tables. I tried to use wxPython's grid class, but its too tough to create a complex table in that. I saw a couple of examples for simple table layouts. I tried to put up a snapshot of the complex table layout but the site doesn't allow me to. So, I am just drawing a format here : [ Blah Blah Heading ] -------------------------- [col1] | [col 2] | [col 3] -------------------------- | | | | | | | | | | | | | | | | | | Can someone please help regarding how to draw the complex layout and which module to use?
Tkinter would be enough for this. You can create any level of complexity in the layout, even with the "grid" layout manager, which allows you to specify "columnspan" and "rowspan" for widgets that will take up more than a single cell.
1.2
true
1
1,544
2011-11-28 11:22:12.170
"Invalidate" RabbitMQ queue or sending "DONE signal"
I'm using RabbitMQ with Python/pika to distribute some batch jobs, so I think I have a very common scenario: one process fills a queue with jobs to be done; multiple workers retrieve jobs, transform data and put the results in a second queue; another single process retrieves the results and merges them. This works fine so far. But how do I stop my scripts in a controlled way? Is there some built-in functionality to "invalidate" a queue, so that the workers will be aware that no more jobs will be filled in?
No, there is no way to find how many publishers are still publishing to a queue in AMQP. You'll have to roll your own system. A way to do this would be to have a fanout exchange that every worker binds a queue to (let's call it the "control" exchange), and have the publisher send a special message to it when it finishes. Workers could then check their "control" queue to see if the publisher is still there; if it isn't and there are no more messages available, they can safely disconnect and shutdown.
1.2
true
1
1,545
2011-11-28 15:51:31.137
Should I use a ramdisk for pictures that are converted and removed?
I have a little program here (python 2.7) that runs on an old machine and it basically keeps getting pictures (for timelapses) by running an external binary and converts them to an efficient format to save up disk space. I want to minimize the disk operations, because it's already pretty old and I want it to last some more time. At the moment the program writes the data from the camera on the disk, then converts it and removes the original data. However it does that for every image, 1- it writes a large file on disk, 2- reads it to convert, 3- and then deletes it... a bunch of disc operations that aren't necessary and could be done in ram, because the original file doesn't have to be stored and is only used as a basis to create another one. I was sure a ramdisk was the solution, then I googled on how to do that, and google returned me a bunch of links that discourage the use of ramdisk, the reasons are many: because they are not useful in modern systems (i'm running a pretty new linux kernel); they should only be used if you want to decrypt data that shouldn't hit the disk; some tests shows that ramdisk could be actually slower than hd; the operating system has a cache... So I'm confused... In this situation, should I use a ramdisk? Thank you. PS: If you want more info: I have a proprietary high-res camera, and a proprietary binary that I run to capture a single image, I can specify where it will write the file, which is a huge TIFF file, and then the python program runs the convert program from imagemagick to convert it to JPEG and then compress it in tar.bz2, so the quality is almost the same but the filesize is 1/50 of the TIFF.
Ah, proprietary binaries that only give certain options. Yay. The simplest solution would be adding a solid state hard drive. You will still be saving to disk, but disk IO will be much higher for reading and writing. A better solution would be outputting the tiff to stdout, perhaps in a different format, and piping it to your python program. It would never hit the hard drive at all, but it would be more work. Of course, if the binary doesn't allow you to do this, then it's moot.
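A rough sketch of the pipe idea, with two tiny Python commands standing in for the proprietary capture binary and the converter (the real argv and flags would of course differ):

```python
import subprocess
import sys

# Stand-ins for the real tools: the "producer" writes raw image bytes to
# stdout, the "consumer" reads them from stdin and reports the byte count.
producer = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.buffer.write(b'T' * 1024)"],
    stdout=subprocess.PIPE,
)
consumer = subprocess.Popen(
    [sys.executable, "-c", "import sys; print(len(sys.stdin.buffer.read()))"],
    stdin=producer.stdout,
    stdout=subprocess.PIPE,
)
producer.stdout.close()  # so the consumer sees EOF when the producer exits
out, _ = consumer.communicate()
producer.wait()
converted = int(out)
print(converted)  # 1024 bytes flowed between the processes without touching disk
```

The same wiring works for any pair of external programs, as long as the producer can be told to write to stdout.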
0
false
2
1,546
2011-11-28 15:51:31.137
Should I use a ramdisk for pictures that are converted and removed?
I have a little program here (python 2.7) that runs on an old machine and it basically keeps getting pictures (for timelapses) by running an external binary and converts them to an efficient format to save up disk space. I want to minimize the disk operations, because it's already pretty old and I want it to last some more time. At the moment the program writes the data from the camera on the disk, then converts it and removes the original data. However it does that for every image, 1- it writes a large file on disk, 2- reads it to convert, 3- and then deletes it... a bunch of disc operations that aren't necessary and could be done in ram, because the original file doesn't have to be stored and is only used as a basis to create another one. I was sure a ramdisk was the solution, then I googled on how to do that, and google returned me a bunch of links that discourage the use of ramdisk, the reasons are many: because they are not useful in modern systems (i'm running a pretty new linux kernel); they should only be used if you want to decrypt data that shouldn't hit the disk; some tests shows that ramdisk could be actually slower than hd; the operating system has a cache... So I'm confused... In this situation, should I use a ramdisk? Thank you. PS: If you want more info: I have a proprietary high-res camera, and a proprietary binary that I run to capture a single image, I can specify where it will write the file, which is a huge TIFF file, and then the python program runs the convert program from imagemagick to convert it to JPEG and then compress it in tar.bz2, so the quality is almost the same but the filesize is 1/50 of the TIFF.
If on Debian (and possibly its derivatives), use the "/run/shm" directory, which is a tmpfs (RAM-backed) mount; on many other distributions "/dev/shm" serves the same purpose.
0
false
2
1,546
2011-11-28 17:14:11.647
Could someone give me their two cents on this optimization strategy
Background: I am writing a matching script in python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow) would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith" and then have all of those records loaded into memory as you go though each looking for matches for a specific "John Smith" using various data points. Would this be faster, is it feasible in python, and if so does anyone have any recommendations for how to do it?
Regarding "would this be faster": The behind-the-scenes logistics of the SQL engine are really optimized for this sort of thing. You might need to create an SQL PROCEDURE or a fairly complex query, however. Caveat: if you're not particularly good at or fond of maintaining SQL, and this isn't a time-sensitive query, then you might be wasting programmer time over CPU/IO time in getting it right. However, if this is something that runs often or is time-sensitive, you should almost certainly be building some kind of JOIN logic in SQL, passing in the appropriate values (possibly wildcards), and letting the database do the filtering in the relational data set, instead of collecting a larger number of "wrong" records and then filtering them out in procedural code. You say the database is "pretty slow." Is this because it's on a distant host, or because the tables aren't indexed for the types of searches you're doing? … If you're doing a complex query against columns that aren't indexed for it, that can be a pain; you can use various SQL tools including EXPLAIN (or EXPLAIN ANALYZE) to see what might be slowing down a query. Most SQL GUIs will have some shortcuts for such things, as well.
0.265586
false
2
1,547
2011-11-28 17:14:11.647
Could someone give me their two cents on this optimization strategy
Background: I am writing a matching script in python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow) would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith" and then have all of those records loaded into memory as you go though each looking for matches for a specific "John Smith" using various data points. Would this be faster, is it feasible in python, and if so does anyone have any recommendations for how to do it?
Your strategy is reasonable though I would first look at doing as much of the work as possible in the database query using LIKE and other SQL functions. It should be possible to make a query that matches complex criteria.
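A sketch of that idea with SQLite (table and column names are made up): let the database narrow the candidate set with LIKE before Python applies the fuzzier per-record matching.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customer (name) VALUES (?)",
                 [("John Smith",), ("J. Smith",), ("Jane Doe",)])

# Let the database narrow the candidates before Python does fuzzy matching.
rows = conn.execute(
    "SELECT name FROM customer WHERE name LIKE ?", ("%Smith",)
).fetchall()
print(rows)  # → [('John Smith',), ('J. Smith',)]
```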
0
false
2
1,547
2011-11-28 17:40:57.727
Biomorph Implementation in Python
I'm trying to create a Python implementation of Dawkins' biomorphs as described in his book, The Blind Watchmaker. It works like this: A parent organism is displayed, as well as its offspring which are just mutated versions of the parent. Then the user clicks on a descendant it wants to breed, and all the offspring will "evolve" based on cumulative selection. What I'm unsure of is how to get started in Python. I've already created genetic algorithm and l-system programs that are supposed to be used. The l-system program evolves trees given certain parameters (which is my goal in this biomorph implementation), and the genetic algorithm program evolves the genotypes that are created in the l-system program. What library would be good to use (turtle, pygame, etc)? I am familiar with turtle, but the documentation says, "To use multiple turtles an a screen one has to use the object-oriented interface." I'm not sure what that means. The reason I thought of using multiple turtles on a screen is to have the the parent and its descendants displayed on one screen, as they are in online apps. Then the user can click on the organism it wants to breed by using mouse events. Is this a good idea, or is there better way to do it? Thanks in advance.
Depending on graphical requirements, I would say that for a lightweight app you could get away with PyQt or PyGame. For more demanding real-time graphical requirements you could use something like PyOgre or PyOpenGL. You may need to also research graph-layout/data-visualisation algorithms or libraries (e.g. dot) depending on your UI goals.
1.2
true
1
1,548
2011-11-29 11:55:22.447
Corba python integration with web service java
i'm working on a project thata i need develop one web service ( in java ) that get one simple number from a Corba python implementation... how can i proceed with this?? im using omniOrb and already done the server.py that genetares one simple number! thx a lot
You will need a Java CORBA provider - for example IONA or JacORB. Generate the IDL files for your python service and then use whatever IDL -> stub compiler your Java ORB provides to generate the java client-side bindings. From there it should be as simple as binding to the corbaloc:// at which your python server is running and executing the remote calls from your java stubs. Of course, CORBA being CORBA, it is likely to require the ritual sacrifice of small mammals and, possibly, lots of candles.
0.999329
false
1
1,549
2011-12-01 08:55:56.417
Implementation HMAC-SHA1 in python
I am trying to use the OAuth of a website, which requires the signature method to be 'HMAC-SHA1' only. I am wondering how to implement this in Python?
Since Python 3.7 there is an optimized one-shot way to do this: hmac.digest(key, msg, digest) uses an optimized C or inline implementation, which is faster for messages that fit into memory. On older versions, hmac.new(key, msg, hashlib.sha1).digest() does the same job.
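For reference, the classic portable way to compute an OAuth-style HMAC-SHA1 signature with only the standard library (the key and base string below are made-up placeholders):

```python
import hmac
import hashlib
import base64

key = b"consumer_secret&token_secret"  # made-up OAuth signing key
message = b"GET&http%3A%2F%2Fexample.com&param%3Dvalue"  # made-up base string

# HMAC-SHA1 digest of the message under the key, then base64 as OAuth expects.
digest = hmac.new(key, message, hashlib.sha1).digest()
signature = base64.b64encode(digest).decode("ascii")
print(signature)
```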
0
false
1
1,550
2011-12-01 11:23:56.213
Python ast to dot graph
I'm analyzing the AST generated by python code for "fun and profit", and I would like to have something more graphical than "ast.dump" to actually see the AST generated. In theory is already a tree, so it shouldn't be too hard to create a graph, but I don't understand how I could do it. ast.walk seems to walk with a BFS strategy, and the visitX methods I can't really see the parent or I don't seem to find a way to create a graph... It seems like the only way is to write my own DFS walk function, is does it make sense?
If you look at ast.NodeVisitor, it's a fairly trivial class. You can either subclass it or just reimplement its walking strategy to whatever you need. For instance, keeping references to the parent when nodes are visited is very simple to implement this way, just add a visit method that also accepts the parent as an argument, and pass that from your own generic_visit. P.S. By the way, it appears that NodeVisitor.generic_visit implements DFS, so all you have to do is add the parent node passing.
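A minimal sketch of that idea: record each child's parent inside generic_visit before continuing the depth-first walk.

```python
import ast

class ParentTracker(ast.NodeVisitor):
    """Walk an AST depth-first, remembering each node's parent."""
    def __init__(self):
        self.parents = {}  # child node -> parent node

    def generic_visit(self, node):
        for child in ast.iter_child_nodes(node):
            self.parents[child] = node
        super().generic_visit(node)  # continue the depth-first walk

tree = ast.parse("x = 1 + 2")
tracker = ParentTracker()
tracker.visit(tree)

assign = tree.body[0]   # the Assign node
binop = assign.value    # the BinOp node for 1 + 2
print(type(tracker.parents[binop]).__name__)  # → Assign
```

From the parents mapping it is straightforward to emit graph edges (for example in dot syntax) for every (child, parent) pair.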
1.2
true
1
1,551
2011-12-01 13:45:32.507
Xpath vs DOM vs BeautifulSoup vs lxml vs other Which is the fastest approach to parse a webpage?
I know how to parse a page using Python. My question is which is the fastest method of all parsing techniques, how fast is it from others? The parsing techniques I know are Xpath, DOM, BeautifulSoup, and using the find method of Python.
lxml is written in C, so it is usually the fastest choice. In terms of technique there is no big difference between XPath and DOM traversal -- both are very fast methods in lxml. But if you use find or findAll in BeautifulSoup it will be slower than the others: BeautifulSoup is written in pure Python, needs a lot of memory to parse the same data, and its search methods run as ordinary interpreted Python code.
0.201295
false
1
1,552
2011-12-02 12:45:05.637
Permanent 'Temporary failure in name resolution' after running for a number of hours
After running for a number of hours on Linux, my Python 2.6 program that uses urllib2, httplib and threads, starts raising this error for every request: <class 'urllib2.URLError'> URLError(gaierror(-3, 'Temporary failure in name resolution'),) If I restart the program it starts working again. My guess is some kind of resource exhaustion but I don't know how to check for it. How do I diagnose and fix the problem?
This was caused by a library's failure to close connections, leading to a large number of connections stuck in a CLOSE_WAIT state. Eventually this causes the 'Temporary failure in name resolution' error due to resource exhaustion.
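A quick way to confirm this kind of leak while the process is running is to count sockets stuck in CLOSE_WAIT (netstat -tan | grep CLOSE_WAIT shows the same thing; the sketch below reads /proc directly, where state code 08 means CLOSE_WAIT):

```shell
# Column 4 ("st") in /proc/net/tcp holds the TCP state in hex; 08 = CLOSE_WAIT.
count=$(awk 'NR > 1 && $4 == "08"' /proc/net/tcp | wc -l)
echo "CLOSE_WAIT sockets: $count"
```

If this number grows steadily while the program runs, some code path is opening connections without closing them.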
1.2
true
1
1,553
2011-12-03 14:51:41.897
Is it necessary to have the knowledge of using terminal/command-prompt for learning python
I saw a couple of tutorials and they all make use of termial/command-prompt and I just dont know how they work. Is it necessary to know how they work before learning python or you can just earn it like you would learn some other language(lets say C) It'll be great if you could recommend something. NOTE: I am a windows user.
No. If you know how to open the command prompt, navigate to a directory ("cd") and list a directory ("dir" on windows and "ls" on linux), then you can probably jump right into those python tutorials.
0
false
1
1,554
2011-12-03 18:59:02.863
how to pause a program until a button is pressed
I am using py2exe to make executable. I want to know how to pause the program until a button is pressed.. just like we do in C using system("pause"); As the program terminates automatically in windows, I need this tool. Is there anything better than py2exe which does similar work ?
Add import os at the top of your script, then call os.system("pause") where you want the program to wait.
-0.265586
false
1
1,555
2011-12-03 20:57:22.533
What is the best Python library for 3D game development?
Pygame and Pyglet are for 2D game development. Pysoy needs many requirements to be installed. I can't figure out how to install Pyogre. Panda3d is giving me this error and don't know how to fix it. importerror no module named direct.showbase.showbase Is there any other good 3D game development library that could be installed on Windows XP with Python 2.7? I prefer to install it using pypm or pypi to avoid possible errors that I'm currently having with Panda3d.
You can use PyOpenGL. I think you might have to install Pygame as well, since PyOpenGL relies on something like Pygame or GLUT to create the window.
1.2
true
1
1,556
2011-12-04 23:39:17.103
Start Python Celery task via Redis Pub/Sub
Is there an efficient way to start tasks via Redis Pub/Sub and return the value of the task back to a Pub/Sub channel to start another task based on the result? Does anybody have an idea on how to put this together? Maybe decorators are a good idea to handle and prepare the return value back to a Pub/Sub channel without changing the code of the task too much. Any help is very much appreciated!
The problem with using pub/sub is that it's not persistent. If you're looking for closer-to-real-time communication, Celery might not be your best choice.
0.386912
false
1
1,557
2011-12-05 12:53:05.907
Why are NumPy arrays so fast?
I just changed a program I am writing to hold my data as numpy arrays as I was having performance issues, and the difference was incredible. It originally took 30 minutes to run and now takes 2.5 seconds! I was wondering how it does it. I assume it is that the because it removes the need for for loops but beyond that I am stumped.
Numpy arrays are extremely similar to 'normal' arrays such as those in C. Notice that every element has to be of the same type. The speedup is great because you can take advantage of prefetching and you can instantly access any element in the array by its index.
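A small (unscientific) comparison that makes the point -- the exact timings vary by machine, but the vectorized call should win by orders of magnitude:

```python
import time
import numpy as np

n = 200_000
a = np.arange(n, dtype=np.float64)
b = np.arange(n, dtype=np.float64)

# Pure-Python loop: every iteration is interpreted and every element boxed.
t0 = time.perf_counter()
loop_result = sum(a[i] * b[i] for i in range(n))
loop_time = time.perf_counter() - t0

# Vectorized: one call drops into a compiled loop over contiguous memory.
t0 = time.perf_counter()
vec_result = float(np.dot(a, b))
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.4f}s  vectorized: {vec_time:.5f}s")
```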
0
false
1
1,558
2011-12-06 07:20:01.970
Conditional formatting using ruby or python
Does anybody know how to apply conditional formatting to a google spreadsheet document using ruby or python?
This is not currently possible: Google does not let Python or Ruby run in the browser. Conditional formatting is currently limited to simple rules based on cell contents. Google has recently rolled out macro support in their spreadsheets, but that is limited to JavaScript and does not support conditional formatting.
1.2
true
1
1,559
2011-12-07 09:00:17.007
backup msSQL stored proc or UDF code via Python
The idea is to make a script that would get stored procedure and udf contents (code) every hour (for example) and add it to SVN repo. As a result we have sql versioning control system. Does anyone know how to backup stored procedure code using Python (sqlAlchemy, pyodbc or smth). I'v done this via C# before using SQL Management Objects. Thanks in advance!
There is no easy way to access SMO from Python (because there is no generic solution for accessing .NET from Python), so I would write a command-line tool in C# and call it from Python using the subprocess module. Perhaps you could do something with ctypes but I have no idea if that's feasible. But, perhaps a more important question is why you want or need to do this. Does the structure of your database really change so often? If so, presumably you have no real control over it so what benefit does source control have in that scenario? How do you deploy database changes in the first place? Usually changes go from source control into production, not the other way around, so the 'master' source of DDL (including tables, indexes etc.) is SVN, not the database. But you haven't given much information about what you really need to achieve, so perhaps there is a good reason for needing to do this in your environment.
0.386912
false
1
1,560
2011-12-07 15:36:34.360
Filter POST data in Django
I have a view in django that is going to save HTML data to a model, and I'm wondering how I might go about filtering it before saving it to the model? Are there built in functions for it? I know there are template filters, but I don't think those help me in this case. What I'll be doing is getting the content of a div via JQuery, sending that to a view via ajax, then saving it to a model.
Django takes care of safely storing strings in the database. HTML is a worry when displaying content to the user, and Django provides some help there as well, escaping HTML in templates unless explicitly told not to.
1.2
true
1
1,561
2011-12-07 17:17:05.077
In Python how to call subprocesses under a different user?
For a Linux system, I am writing a program in Python, who spawns child processes. I am using the "multiprocessing" library and I am wondering if there is a method to call sub-processes with a different user than the current one. I'd like to be able to run each subprocess with a different user (like Postfix, for example.) Any idea or pointers ?
You could look in the direction of os.setuid() / os.setgid() (not os.setpgid(), which changes the process group rather than the user): have the child drop privileges right after it is forked, before it starts doing any work.
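With plain subprocess (rather than multiprocessing) the usual pattern is to drop privileges in the forked child via preexec_fn. A sketch -- switching to a different account requires the parent to run as root, so the demo below switches to the current user, which is always permitted:

```python
import os
import pwd
import subprocess

def run_as(username, argv):
    """Run argv as `username` by dropping privileges in the forked child."""
    pw = pwd.getpwnam(username)

    def demote():
        # Runs in the child between fork() and exec().
        os.setgid(pw.pw_gid)   # drop the group first, while still privileged
        os.setuid(pw.pw_uid)

    return subprocess.check_output(argv, preexec_fn=demote)

# Demo with the current user (switching to *another* user needs root):
me = pwd.getpwuid(os.getuid()).pw_name
result = run_as(me, ["id", "-u"]).strip().decode()
print(result)
```

With multiprocessing, the same os.setgid/os.setuid calls can be made at the top of the child process's target function.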
0.386912
false
1
1,562
2011-12-08 01:35:55.407
Python - Read in binary file over SSH
With Python, I need to read a file into a script similar to open(file,"rb"). However, the file is on a server that I can access through SSH. Any suggestions on how I can easily do this? I am trying to avoid paramiko and am using pexpect to log into the SSH server, so a method using pexpect would be ideal. Thanks, Eric
If it's a short file you can capture the output of an ssh command using subprocess.Popen: ssh root@ip_address_of_the_server 'cat /path/to/your/file' Note: passwordless login using keys should be configured in order for it to work non-interactively.
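A sketch of the subprocess approach; here cat on a local temp file stands in for the real ssh user@host 'cat /remote/path' invocation, which would work the same way:

```python
import os
import subprocess
import tempfile

# Write a small binary file to stand in for the remote one.
fd, path = tempfile.mkstemp()
os.write(fd, b"\x00\x01binary\xff")
os.close(fd)

# In the real case the argv would be:
#   ["ssh", "user@host", "cat /remote/path"]
proc = subprocess.Popen(["cat", path], stdout=subprocess.PIPE)
data, _ = proc.communicate()

print(len(data))  # → 9
os.unlink(path)
```

Because stdout is captured as bytes, this is equivalent to open(file, "rb").read() on the remote content.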
0
false
1
1,563
2011-12-08 04:32:42.317
What is the equivalent to python equivalent to using Class.getResource()
In java if I want to read a file that contains resource data for my algorithms how do I do it so the path is correctly referenced. Clarification I am trying to understand how in the Python world one packages data along with code in a module. For example I might be writing some code that looks at a string and tries to classify the language the text is written in. For this to work I need to have a file that contains data about language models. So when my code is called I would like to load a file (or files) that is packaged along with the module. I am not clear on how I should do that in Python. TIA.
For Pythonistas who don't know, the behaviour of Java's Class.getResource is basically: the supplied file name is (unless it's already an absolute path) transformed into a relative path by using the class' package (since the directory path to the class file is expected to mirror the explicit "package" declaration for the class). The ClassLoader that was used to load the class in the first place then gets to transform this path string, by its own logic, into a URL object that could encode a file name, a location on the WWW, etc. Python is not Java, so we have to approximate a few things and read intent into the question. Python classes don't really explicitly go into packages, although you can create packages by putting them in folders with an additional __init__.py file. Python does not really have anything quite like the URL class in its standard library; although there is plenty of support for connecting to the Internet, you're generally expected to just use strings to represent URLs (and file names) and format them appropriately. This is arguably an unfortunate missed opportunity for polymorphism (it would not be hard to make your own wrapper, though you might miss lots of special cases and useful functionality). Anyway, in normal cases with Java, you're not expecting to get a web URL from this process. Python has a concept of a "working directory" that depends on how the Python process was launched. File paths are not necessarily relative to the directory where the "main class" (well, really, "main module", because Python doesn't make you put everything in a class) is found. So what you really want, probably, is to get the absolute path on disk to the source file corresponding to the class. But that isn't really going to work out either. The problem is: given a class, you can get the name of the module it comes from, and then look up that name to get the actual module object, and then from the module object get the file name that the module was loaded from. 
However, that file name is relative to whatever the working directory was when the module was loaded, and that information isn't recorded. If the working directory has changed since then (with os.chdir), you're out of luck. Please try to be more clear about what you're really trying to do.
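The closest everyday Python idiom to getResource is to resolve data files relative to the module's own __file__ instead of the working directory (the standard library's pkgutil.get_data(package, resource) wraps the same idea). A sketch, with a made-up data-file name:

```python
import os

# Directory containing this module, independent of the working directory.
_HERE = os.path.dirname(os.path.abspath(__file__))

def resource_path(name):
    """Absolute path of a data file shipped alongside this module."""
    return os.path.join(_HERE, name)

print(resource_path("langmodels.dat"))
```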
0
false
1
1,564
2011-12-08 13:07:03.767
How to store data with large number (constant) of properties in SQL
I am parsing the USDA's food database and storing it in SQLite for query purposes. Each food has associated with it the quantities of the same 162 nutrients. It appears that the list of nutrients (name and units) has not changed in quite a while, and since this is a hobby project I don't expect to follow any sudden changes anyway. But each food does have a unique quantity associated with each nutrient. So, how does one go about storing this kind of information sanely. My priorities are multi-programming language friendly (Python and C++ having preference), sanity for me as coder, and ease of retrieving nutrient sets to sum or plot over time. The two things that I had thought of so far were 162 columns (which I'm not particularly fond of, but it does make the queries simpler), or a food table that has a link to a nutrient_list table that then links to a static table with the nutrient name and units. The second seems more flexible i ncase my expectations are wrong, but I wouldn't even know where to begin on writing the queries for sums and time series. Thanks
Use the second (more normalized) approach. You could even get away with fewer tables than you mentioned:

tblNutrients
 -- NutrientID
 -- NutrientName
 -- NutrientUOM (unit of measure)
 -- Otherstuff

tblFood
 -- FoodID
 -- FoodName
 -- Otherstuff

tblFoodNutrients
 -- FoodID (FK)
 -- NutrientID (FK)
 -- UOMCount

It will be a nightmare to maintain a 160+ field database. If there is a time element involved too (can measurements change?) then you could add a date field to the nutrient and/or the foodnutrient table depending on what could change.
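Since the question mentions SQLite and Python, here is a sketch of that normalized layout and the kind of sum query it enables (table, food, and nutrient names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nutrient (id INTEGER PRIMARY KEY, name TEXT, unit TEXT);
    CREATE TABLE food     (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE food_nutrient (
        food_id     INTEGER REFERENCES food(id),
        nutrient_id INTEGER REFERENCES nutrient(id),
        amount      REAL
    );
""")
conn.executemany("INSERT INTO nutrient VALUES (?, ?, ?)",
                 [(1, "Protein", "g"), (2, "Fat", "g")])
conn.execute("INSERT INTO food VALUES (1, 'Cheddar cheese')")
conn.executemany("INSERT INTO food_nutrient VALUES (?, ?, ?)",
                 [(1, 1, 25.0), (1, 2, 33.0)])

# Sum a chosen set of nutrients per food -- no 162-column table needed.
row = conn.execute("""
    SELECT f.name, SUM(fn.amount)
    FROM food f
    JOIN food_nutrient fn ON fn.food_id = f.id
    JOIN nutrient n       ON n.id = fn.nutrient_id
    WHERE n.name IN ('Protein', 'Fat')
    GROUP BY f.id
""").fetchone()
print(row)  # → ('Cheddar cheese', 58.0)
```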
0.673066
false
1
1,565
2011-12-08 14:35:47.680
QTreeWidget and PyQt: Methods to save and read data back in from file
I'm working on a project that contains a QTreeWidget. This widget will have Parent nodes with childs nodes and or other siblings nodes. Attached to each node will be a unique ID in the form of an integer as well as a name. Now..... what methods can or should I use to save all the nodes in the QTreeWidget to disk and how to read them back in????? What I would do is traverse the tree and save each node into a sqlite DB. I already can do that in exactly the same order as it is in the tree. BUT...... QTreeWidget and its related QTreeWidgetItem objects do NOT have a uniq index that you can USE to insert other nodes around. When I rebuild the tree, childs have to be attached to their parents/ancestor (which by then exists in the tree). To do so I keep track of the Parents and their IDs I already added to the tree, like (parent name, parentID) inside an array. Note that the parentID has nothing to do with location as a node inside the QTreeWidget/WidgetItem. It is created by myself and stored insite the sqlite db allong with the name of the node just as some sort of tag since QTreeWidget/QTreeWidgetItem does not supply any. To determine the correct parent I take the ParentID from the sqlite DB, search the array or the QTreeWidget for that same ID, select that parent in the QTreeWidget, add the sqlite db item as a child node to the parent in the treeview. But it feels clumsy this way, because of the search each time a node is added and because of a missing index inside the QTreeWidget/WidgetItem I can use for inserting nodes around. I know this should be done much more elegant and less resource intensive. I'm not looking for exact coding examples but more for ideas and hints of the methods and properties to use from QTreeWidget and QTreeWidgetItem. Any ideas?
I managed this by using the data slot of the QTreeWidgetItem (setData()/data()) to store a unique value in each item. Using that value as a reference to find one, several, or even all QTreeWidgetItems makes it easy to save a complete tree.
0
false
1
1,566
2011-12-09 20:37:17.777
How do I list the libraries that exist in python?
For example, in rails you can do a 'gem list' and it will show all of the gems that you have installed. Any clue how I can do this in python? Also, I am using virtualenv, not sure if that helps?
Install pip and do pip freeze. If you are using virtualenv, it is already installed in it.
0.296905
false
1
1,567
2011-12-10 00:25:45.100
Simple python code to website
I am a beginner in python. For practice reasons I want to learn how to upload a python code to a website. I have a domain and web hosting service, however I'm really confused about how to integrate my code with a web page on my website. (I have a decent bit of knowledge with Tkinter) Can anybody show me how to upload this simple function to my website?: There are two entries on the interface for the user to enter two numbers. On pressing a button a new window(web page) with the answer displayed. Thank you.
I would recommend you look at something like Django and mod_wsgi. Generally helps if you have shell access to the server.
0
false
1
1,568
2011-12-10 04:15:35.990
Append system python installation's sys.path to my personal python installation
I have a system python installation and a personal python installation in my home directory. My personal python comes in ahead in my $PATH and I usually run that. But the system python installation has some modules that I want to use with my personal python installation. So, basically, I want to append the sys.path of the system python installation to the sys.path of the personal python installation. I read up on the docs and source of site module and saw that I could use the sitecustomize or usercustomize modules to do this. But where I am stuck is how do I get the sys.path of the system python to be appended to the personal python's sys.path. The gensitepackages function in the site modules seems to calculate the paths to be added to sys.path but it is using the PREFIXES global variable instead of taking it as an argument, so for all I know, I can't use it. Adding system python's prefixes to PREFIXES is also not an option as by the time the customize module(s) are loaded, the PREFIXES is already used to build the path. Any ideas on how to go about this? Also, I'm not sure if I should ask this on askubuntu/unix&linux. Comments? Edit: Guess I wasn't clear on this part. I want the system python's path to be appended so that when I try to use modules that are not present in my personal python, it will automatically fallback to the system python's modules.
Python checks whether a PYTHONPATH environment variable is set and adds its entries near the front of sys.path. Use it for the path of your other modules: export PYTHONPATH="path:locations"
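For example (the directory below is illustrative; use whatever your system python actually reports in its sys.path):

```shell
# Prepend the system installation's package directory for this shell session.
export PYTHONPATH="/usr/lib/python3/dist-packages${PYTHONPATH:+:$PYTHONPATH}"
python3 -c 'import sys; print("/usr/lib/python3/dist-packages" in sys.path)'
```

Putting the export line in your shell profile makes the fallback permanent; because PYTHONPATH entries come before the interpreter's own site-packages, your personal installation's copies of a module still win when both have it.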
0.201295
false
1
1,569
2011-12-11 19:37:40.357
Making first name, last name a required attribute rather than an optional one in Django's auth User model
I'm trying to make sure that the first name and last name field are not optional for the auth User model but I'm not sure how to change it. I can't use a sub class as I have to use the authentication system. Two solutions I can think of are: to put the name in the user profile but it's a little silly to have a field that I can't use correctly. To validate in the form rather than in the model. I don't think this really fits with Django's philosophy... For some reason I can't seem to find a way to do this online so any help is appreciated. I would have thought that this would be a popular question. Cheers, Durand
I would definitely go with validating on the form. You could even go as far as having more form validation in the admin if you felt like it.
1.2
true
1
1,570
2011-12-11 21:38:08.333
How do I merge results from target page to current page in scrapy?
Need example in scrapy on how to get a link from one page, then follow this link, get more info from the linked page, and merge back with some data from first page.
Partially fill your item on the first page, then put it in your request's meta. When the callback for the next page is called, it can take the partially filled item out of response.meta, put more data into it, and then return it.
1.2
true
1
1,571
2011-12-12 18:51:59.977
How to evenly distribute tasks among nodes with Celery?
I am using Celery with Django to manage a task que and using one (or more) small(single core) EC2 instances to process the task. I have some considerations. My task eats 100% CPU on a single core. - uses whatever CPU available but only in one core If 2 tasks are in progress on the same core, each task will be slowed down by half. I would like to start each task ASAP and not let it be que. Now say I have 4 EC2 instances, i start celery with "-c 5" . i.e. 5 concurrent tasks per instance. In this setup, if I have 4 new tasks, id like to ensure, each of them goes to different instance, rather than 4 going to same instance and each task fighting for CPU. Similarly, if I have 8 tasks, id like each instance to get 2 tasks at a time, rather than 2 instances processing 4 tasks each. Does celery already behave the way I described? If not then how can i make it behave as such?
It's actually easy: you start one Celery worker per EC2 instance and set its concurrency to the number of cores on that instance -- for your single-core machines that means -c 1, not -c 5. Now the tasks don't interfere with each other and distribute nicely among your instances. (The above assumes that your tasks are CPU bound.)
0.673066
false
1
1,572
2011-12-14 06:17:00.793
How to pass object between python instances
I have a genetic expression tree program to control a bot. I have a GP.py and a MyBot.py. I need to be able to have the MyBot.py access an object created in GP.py The GP.py is starting the MyBot.py via the os.system() command I have hundreds of tree objects in the GP.py file and the MyBot.py needs to evaluate them. I can't combine the two into one .py file because a fresh instance of MyBot.py is executed thousands of times, but GP.py needs to evaluate the fitness of MyBot.py with each tree. I know how to import the methods and Class definitions using import GP.py, but I need the specific instance of the Tree class object Any ideas how to send this tree from the first instance to the second?
You could serialize the object with the pickle module (or maybe json?) If you really want to stick with os.system, then you could have MyBot.py write the pickled object to a file, which GP.py could read once MyBot.py returns. If you use the subprocess module instead, then you could have MyBot.py write the pickled object to stdout, and GP.py could read it over the pipe. If you use the multiprocessing module, then you could spawn the MyBot process and pass data back and forth over a Queue instance.
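A sketch of the subprocess-plus-pickle variant: the child (standing in for one of the scripts; the tree here is a toy dict, not your actual Tree class) pickles its object to stdout, and the parent unpickles it from the captured output.

```python
import pickle
import subprocess
import sys

# The child stands in for the other script: it builds a (toy) expression
# tree and pickles it to stdout.
child_code = """
import pickle, sys
tree = {"op": "+", "left": 1, "right": {"op": "*", "left": 2, "right": 3}}
sys.stdout.buffer.write(pickle.dumps(tree))
"""

proc = subprocess.run([sys.executable, "-c", child_code],
                      stdout=subprocess.PIPE, check=True)
tree = pickle.loads(proc.stdout)
print(tree["right"]["op"])  # → *
```

The same works in the other direction (parent pickles, child reads stdin), and pickling a real class instance requires the class definition to be importable on both sides.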
1.2
true
1
1,573
2011-12-14 08:59:17.003
Representing filesystem table
I’m working on simple class something like “in memory linux-like filesystem” for educational purposes. Files will be as StringIO objects. I can’t make decision how to implement files-folders hierarchy type in Python. I’m thinking about using list of objects with fields: type, name, parent what else? Maybe I should look for trees and graphs. Update: There will be these methods: new_dir(path), dir_list(path), is_file(path), is_dir(path), remove(path), read(file_descr), file_descr open(file_path, mode=w|r), close(file_descr), write(file_descr, str)
What's the API of the filestore? Do you want to keep creation, modification and access times? Presumably the primary lookup will be by file name. Are any other retrieval operations anticipated? If only lookup by name is required then one possible representation is to map the filestore root directory on to a Python dict. Each entry's key will be the filename, and the value will either be a StringIO object (hint: in Python 2 use cStringIO for better performance if it becomes an issue) or another dict. The StringIO objects represent your files, the dicts represent subdirectories. So, to access any path you split it up into its constituent components (using .split("/")) and then use each to look up a successive element. Any KeyError exceptions imply "File or directory not found," as would any attempts to index a StringIO object (I'm too lazy to verify the specific exception). If you want to implement greater detail then you would replace the StringIO objects and dicts with instances of some "filestore object" class. You could call it a "link" (since that's what it models: A Linux hard link). The various attributes of this object can easily be manipulated to keep the file attributes up to date, and the .data attribute can be either a StringIO object or a dict as before. Overall I would prefer the second solution, since then it's easy to implement methods that do things like keep access times up to date by updating them as the operations are performed, but as I said much depends on the level of detail you want to provide.
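A minimal sketch of that dict-of-dicts layout with name-based path lookup (StringIO from the io module here; the answer's cStringIO suggestion applies on Python 2):

```python
from io import StringIO

# Root directory: names map to StringIO "files" or nested dicts (subdirs).
root = {
    "etc": {"hostname": StringIO("myhost\n")},
    "home": {"user": {"notes.txt": StringIO("hello\n")}},
}

def lookup(path):
    """Resolve a /-separated path; KeyError means 'No such file or directory'."""
    node = root
    for part in path.strip("/").split("/"):
        node = node[part]
    return node

print(lookup("/home/user/notes.txt").getvalue())  # → hello
```

Replacing the raw StringIO values with small "link" objects, as suggested above, is the natural next step for tracking timestamps and modes.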
0
false
2
1,574
2011-12-14 08:59:17.003
Representing filesystem table
I’m working on a simple class, something like an “in-memory Linux-like filesystem”, for educational purposes. Files will be StringIO objects. I can’t decide how to implement the files/folders hierarchy in Python. I’m thinking about using a list of objects with fields: type, name, parent; what else? Maybe I should look into trees and graphs. Update: There will be these methods: new_dir(path), dir_list(path), is_file(path), is_dir(path), remove(path), read(file_descr), file_descr open(file_path, mode=w|r), close(file_descr), write(file_descr, str)
You should first ask the question: What operations should my "file system" support? Based on the answer, you select the data representation. For example, if you choose to support only create and delete, and the order of the files in the dictionary is not important, then select a Python dictionary. A dictionary will map a file name (sub-path name) to either a dictionary or the file container object.
0
false
2
1,574
2011-12-14 14:22:33.660
Tornadoweb webapp cannot be managed via upstart
A few days ago I found out that my webapp, written on top of the tornadoweb framework, doesn't stop or restart via upstart. Upstart just hangs and doesn't do anything. I investigated the issue and found that upstart receives the wrong PID, so it can only start my webapp daemon once and can't do anything else. strace shows that my daemon makes 4 (!) clone() calls instead of 2. A week ago everything was fine and the webapp was fully and correctly managed by upstart. The OS is Ubuntu 10.04.03 LTS (as it was weeks ago). Do you have any ideas how to fix it? PS: I know about the "expect fork|daemon" directive; it changes nothing ;)
There are two often-used solutions. The first is to let your application honestly report its pid: if you can force your application to write its actual pid into the pidfile, you can read the pid from there. The second is a little more complicated: add a specific environment variable to the script invocation. That environment variable survives across all the forks (as long as the forks don't clear their environment), and you can then find all of your processes by parsing the /proc/*/environ files. There should be an easier way of finding processes by their environment, but I'm not sure.
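A sketch of the first approach, assuming the daemon can write its own pidfile (the file location is hypothetical):

```python
import os
import tempfile

# Have the daemon write its real pid where upstart (or anything else) can read it.
pidfile = os.path.join(tempfile.gettempdir(), "myapp.pid")  # hypothetical location
with open(pidfile, "w") as f:
    f.write(str(os.getpid()))

# Anyone can now recover the true pid from the file.
with open(pidfile) as f:
    print(int(f.read()) == os.getpid())  # True
```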
0
false
2
1,575
2011-12-14 14:22:33.660
Tornadoweb webapp cannot be managed via upstart
A few days ago I found out that my webapp, written on top of the tornadoweb framework, doesn't stop or restart via upstart. Upstart just hangs and doesn't do anything. I investigated the issue and found that upstart receives the wrong PID, so it can only start my webapp daemon once and can't do anything else. strace shows that my daemon makes 4 (!) clone() calls instead of 2. A week ago everything was fine and the webapp was fully and correctly managed by upstart. The OS is Ubuntu 10.04.03 LTS (as it was weeks ago). Do you have any ideas how to fix it? PS: I know about the "expect fork|daemon" directive; it changes nothing ;)
Sorry for my silence. Investigation of the issue ended with the discovery that the uuid Python library adds 2 forks to my daemon. I got rid of this lib and the Tornado daemon now works properly. An alternative answer was supervisord, which can run any console tool as a daemon, even one that can't daemonize by itself.
1.2
true
2
1,575
2011-12-14 16:40:24.343
using data from a txt file in python
I'm learning Python and now I'm having some problems with reading and analyzing txt files. I want to open a .txt file in Python that contains many lines, and in each line I have one fruit and its price. I would like to know how to make Python recognize the prices as numbers (it recognizes them as strings when I use readlines()), so that I can then use the numbers in some simple functions to calculate the minimum price at which I have to sell the fruits to make a profit. Any ideas of how to do it?
I had the same problem when I first was learning Python, coming from Perl. Perl will "do what you mean" (or at least what it thinks you mean), and automatically convert something that looks like a number into a number when you try to use it like a number. (I'm generalizing, but you get the idea). The Python philosophy is to not have so much magic occurring, so you must do the conversion explicitly. Call either float("12.00") or int("123") to convert from strings.
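A sketch of the explicit conversion, assuming each line of the file holds a fruit name and a price separated by whitespace (the data here stands in for `open("fruits.txt").readlines()`):

```python
# Stand-in for the lines read from the file.
lines = ["apple 1.50", "banana 0.75", "cherry 4.20"]

prices = {}
for line in lines:
    name, price = line.split()
    prices[name] = float(price)  # explicit string-to-number conversion

# Now the values are floats and can be used in arithmetic.
print(min(prices.values()))  # 0.75
```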
0
false
1
1,576
2011-12-17 14:48:37.237
Find the position of difference between two strings
I have two strings of equal length. How can I find all the locations where the strings are different? For example, "HELPMEPLZ" and "HELPNEPLX" are different at positions 4 and 8.
The easiest way is to split the data into two char arrays and then loop through them, comparing the letters and returning the index whenever the two chars are not equal. This method works fine as long as both strings are equal in length.
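In Python the same loop can be written without explicit char arrays, since strings are already indexable:

```python
def diff_positions(a, b):
    """Return the indices where two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("strings must be the same length")
    return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

print(diff_positions("HELPMEPLZ", "HELPNEPLX"))  # [4, 8]
```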
0
false
1
1,577
2011-12-18 12:36:37.527
How do I run Python code from Sublime Text 2?
I want to set up a complete Python IDE in Sublime Text 2. I want to know how to run Python code from within the editor. Is it done using a build system? How do I do it?
I had the same problem. You probably haven't saved the file yet. Make sure to save your code with .py extension and it should work.
0
false
4
1,578
2011-12-18 12:36:37.527
How do I run Python code from Sublime Text 2?
I want to set up a complete Python IDE in Sublime Text 2. I want to know how to run the Python code from within the editor. Is it done using build system? How do I do it ?
It seems Ctrl+Break doesn't work for me, and neither does Preferences - User... Use the keys Alt → t → c instead.
0.025505
false
4
1,578
2011-12-18 12:36:37.527
How do I run Python code from Sublime Text 2?
I want to set up a complete Python IDE in Sublime Text 2. I want to know how to run the Python code from within the editor. Is it done using build system? How do I do it ?
[ This applies to ST3 (Win), not sure about ST2 ] To have the output visible in Sublime as another file (+ one for errors), do this: Create a new build system: Tools > Build Systems > New Build System... Use the following configuration: { "cmd": ["python.exe", "$file", "1>", "$file_name.__STDOUT__.txt", "2>", "$file_name.__STDERR__.txt"], "selector": "source.python", "shell": true, "working_dir": "$file_dir" } For your Python file select the above build system configuration file: Tools > Build Systems > {your_new_build_system_filename}, then ctrl + b. Now, next to your file, e.g. "file.py", you'll have "file.py.__STDOUT__.txt" and "file.py.__STDERR__.txt" (for errors, if any). If you split your window into 3 columns, or a grid, you'll see the result immediately, without a need to switch panels / windows.
0.126864
false
4
1,578
2011-12-18 12:36:37.527
How do I run Python code from Sublime Text 2?
I want to set up a complete Python IDE in Sublime Text 2. I want to know how to run the Python code from within the editor. Is it done using build system? How do I do it ?
You can access the Python console via “View/Show console” or Ctrl+`.
0
false
4
1,578
2011-12-18 20:27:54.363
Personalizing Online Assignments for a Statistics Class
I teach undergraduate statistics, and am interested in administering personalized online assignments. I have already solved one portion of the puzzle: the generation of multiple versions of a question using latex/markdown + knitr/sweave, using seeds. I am now interested in developing a web-based system that would use the various versions generated, and administer a different one for each student, online. I have looked into several sites related to forms (google docs, wufoo, formsite etc.), but none of them allows programmatic creation of questionnaires. I am tagging this with R since that is the language I am most familiar with, and is key to solving the first part of the problem. I know that there are several web-based frameworks for R, and was wondering whether any of them are suitable for this job. I am not averse to solutions in other languages like Ruby, Python etc. But the key consideration is the ability to programmatically deliver online assignments. I am aware of tools like WebWork, but they require the use of Perl and the interfaces are usually quite clunky. Feel free to add tags to the post, if you think I have missed a framework that would be more suitable. EDIT. Let me make it clear by giving an example. Currently, if I want to administer an assignment online, I could simply create a Google Form, send the link to my students, collect all responses in a spreadsheet, and automatically grade it. This works if I just have one version of the assignment. My question is: if I want to administer a different version of the assignment for each student, and collect their responses, how can I do that?
I found one possible solution that might work using the RGoogleDocs package. I am posting this as an answer only because it is long. I am still interested in better approaches, and hence will keep the question open. Here is the gist of the idea, which is still untested. Create multiple versions of each assignment using knitr/Sweave. Upload them to GoogleDocs using uploadDoc. Share one document per student using setAccess which modifies access controls. Create a common Google Form to capture final answers for each student. The advantage I see is two-fold. One, since all final answers get captured on a spreadsheet, I can access them with R and grade them automatically. Two, since I have access to all the completed assignments on Google Docs, I can skim through them and provide individual comments as required (or let some of my TAs do it). I will provide an update, if I manage to get this working, and maybe even create an R package if it would be useful for others.
0.201295
false
2
1,579
2011-12-18 20:27:54.363
Personalizing Online Assignments for a Statistics Class
I teach undergraduate statistics, and am interested in administering personalized online assignments. I have already solved one portion of the puzzle: the generation of multiple versions of a question using latex/markdown + knitr/sweave, using seeds. I am now interested in developing a web-based system that would use the various versions generated, and administer a different one for each student, online. I have looked into several sites related to forms (google docs, wufoo, formsite etc.), but none of them allows programmatic creation of questionnaires. I am tagging this with R since that is the language I am most familiar with, and is key to solving the first part of the problem. I know that there are several web-based frameworks for R, and was wondering whether any of them are suitable for this job. I am not averse to solutions in other languages like Ruby, Python etc. But the key consideration is the ability to programmatically deliver online assignments. I am aware of tools like WebWork, but they require the use of Perl and the interfaces are usually quite clunky. Feel free to add tags to the post, if you think I have missed a framework that would be more suitable. EDIT. Let me make it clear by giving an example. Currently, if I want to administer an assignment online, I could simply create a Google Form, send the link to my students, collect all responses in a spreadsheet, and automatically grade it. This works if I just have one version of the assignment. My question is: if I want to administer a different version of the assignment for each student, and collect their responses, how can I do that?
The way you have worded your question it's not really clear why you have to mark the students' work online. Especially since you say that you generate assignments using sweave. If you use R to generate the (randomised) questions, then you really have to use R to mark them (or output the data set). For my courses, I use a couple of strategies. For the end of year exam (~500 students), each student gets a unique data set. The students log on to a simple web-site (we use blackboard since the University already has it set up). All students answer the same questions, but use their own unique data set. For example, "What is the mean". The answers are marked offline using an R script. In my introductory R course, students upload their R functions and I run and mark them off line. I use sweave to generate a unique pdf for each student. Their pdf shows where they lost marks. For example, they didn't use the correct named arguments. Coupling a simple web-form with marking offline gives you a lot of flexibility and is fairly straightforward.
0.725124
false
2
1,579
2011-12-19 09:37:12.580
python string parsing using regular expression
Given the string #abcde#jfdkjfd, how can I get the string between the two #s? I also want the function to return None if there is no # pair (i.e. no # or only one #).
Use (?<=#)(\w+)(?=#) and capture the first group. You can even cycle through a string which contains several embedded strings and it will work. This uses both a positive lookbehind and positive lookahead.
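A sketch of that pattern with Python's `re` module; `re.search` returns None when no match exists, which covers the "no # pair" case:

```python
import re

def between_hashes(s):
    """Return the text between the first pair of '#' marks, or None."""
    m = re.search(r'(?<=#)(\w+)(?=#)', s)
    return m.group(1) if m else None

print(between_hashes("#abcde#jfdkjfd"))  # abcde
print(between_hashes("no hashes here"))  # None
```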
0.265586
false
1
1,580
2011-12-20 05:57:10.960
Running .py files
I have .ui, .py and .pyc files generated. Now, when I edit the .py file, how will the changes be reflected in the .ui file? How do I connect the .ui and .py files together, as Qt Designer allows only .ui files for running purposes?
The .py file generated by pyuic is not supposed to be edited by the user. The intended way to use it is to subclass the class in the .py file with your own class and make the necessary changes there.
0
false
1
1,581
2011-12-21 10:08:50.210
SQLAlchemy or psycopg2?
I am writing a quick and dirty script which requires interaction with a database (PG). The script is a pragmatic, tactical solution to an existing problem. However, I envisage that the script will evolve over time into a more "refined" system. Given the fact that it is currently being put together very quickly (i.e. I don't have the time to pore over huge reams of documentation), I am tempted to go the quick and dirty route, using psycopg. The advantages of psycopg2 (as I currently understand it) are: written in C, so faster than SQLAlchemy (written in Python)? No abstraction layer over the DBAPI since it works with one db and one db only (implication -> fast). (For now) I don't need an ORM, so I can directly execute my SQL statements without having to learn a new ORM syntax (i.e. lightweight). Disadvantages: I KNOW that I will want an ORM further down the line. psycopg2 is ("dated"?) - don't know how long it will remain around for. Are my perceptions of SQLAlchemy (slow/interpreted, bloated, steep learning curve) true - is there any way I can use SQLAlchemy in the "rough and ready" way I want to use psycopg - namely: execute SQL statements directly without having to mess about with the ORM layer, etc. Any examples of doing this available?
To talk to a database, you need a driver for it. If you use a client like SQL*Plus for Oracle or the mysql CLI for MySQL, it runs the query directly, and that client ships with the DB server package. To communicate from outside with any language like Java, C, Python, C#... we need a driver for that database. psycopg2 is the driver for running queries against PostgreSQL from Python. SQLAlchemy is an ORM, which is not the same thing as a database driver. It gives you flexibility, so you can write your code without any database-specific standard. An ORM provides database independence for the programmer. If you write object.save in an ORM, it will check which database is associated with that object and generate the insert query according to the backend database.
0.999967
false
2
1,582
2011-12-21 10:08:50.210
SQLAlchemy or psycopg2?
I am writing a quick and dirty script which requires interaction with a database (PG). The script is a pragmatic, tactical solution to an existing problem. however, I envisage that the script will evolve over time into a more "refined" system. Given the fact that it is currently being put together very quickly (i.e. I don't have the time to pour over huge reams of documentation), I am tempted to go the quick and dirty route, using psycopg. The advantages for psycopg2 (as I currently understand it) is that: written in C, so faster than sqlAlchemy (written in Python)? No abstraction layer over the DBAPI since works with one db and one db only (implication -> fast) (For now), I don't need an ORM, so I can directly execute my SQL statements without having to learn a new ORM syntax (i.e. lightweight) Disadvantages: I KNOW that I will want an ORM further down the line psycopg2 is ("dated"?) - don't know how long it will remain around for Are my perceptions of SqlAlchemy (slow/interpreted, bloated, steep learning curve) true - IS there anyway I can use sqlAlchemy in the "rough and ready" way I want to use psycopg - namely: execute SQL statements directly without having to mess about with the ORM layer, etc. Any examples of doing this available?
SQLAlchemy is an ORM, psycopg2 is a database driver. These are completely different things: SQLAlchemy generates SQL statements and psycopg2 sends SQL statements to the database. SQLAlchemy depends on psycopg2 or other database drivers to communicate with the database! As a rather complex software layer SQLAlchemy does add some overhead, but it is also a huge boost to development speed, at least once you have learned the library. SQLAlchemy is an excellent library and will teach you the whole ORM concept, but if you don't want to generate SQL statements to begin with then you don't want SQLAlchemy.
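On the last part of the question: SQLAlchemy can also execute plain SQL directly, without touching the ORM at all. A sketch, using an in-memory SQLite URL in place of a real "postgresql+psycopg2://user:pass@host/db" DSN (the table and data are made up):

```python
from sqlalchemy import create_engine, text

# For Postgres this URL would be a psycopg2 DSN; SQLite keeps the sketch self-contained.
engine = create_engine("sqlite://")

with engine.connect() as conn:
    conn.execute(text("CREATE TABLE fruit (name TEXT, price REAL)"))
    conn.execute(text("INSERT INTO fruit VALUES ('apple', 1.5)"))
    rows = conn.execute(text("SELECT name, price FROM fruit")).fetchall()

print([tuple(r) for r in rows])
```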
1.2
true
2
1,582
2011-12-21 14:43:37.823
deploying a python web application
Hi everyone, I was wondering how to go about deploying a small Python database web application. Is buying some cheap hardware and installing a server on it a good route to go?
You can get a virtual server instance on amazon or rackspace or many others for a very small fee per month, $20 - $60. This will give you a clean install of the OS of your choice. No need to invest in hardware. From there you can follow any of the many many tutorials on deploying a django app.
1.2
true
1
1,583
2011-12-23 13:10:01.587
How to keep global variables persistent over multiple google appengine instances?
Our situation is as follows: We are working on a school project where the intention is that multiple teams walk around in a city with smartphones and play a city game while walking. As such, we can have 10 active smartphones walking around in the city, all posting their location and requesting data from the Google App Engine. Someone is behind a web browser, watching all these teams walk around, and sending them messages etc. We are using the datastore the Google App Engine provides to store all the data these teams send and request, to store the messages and retrieve them etc. However, we soon found out we were at our max limit of reads and writes, so we searched for a solution to be able to retrieve periodic updates (which cost the most reads and writes) without using any of the limited resources Google provides. And obviously, because it's a school project we don't want to pay for more reads and writes. Storing this information in global variables seemed an easy and quick solution, which it was... but when we started to truly test we noticed some of our data was missing and then reappearing. This turned out to be because there were so many requests being made to the cloud that a new instance was created, and instances don't keep these global variables persistent. So our question is: Can we somehow make sure these global variables are always the same on every running instance of Google App Engine? OR Can we limit the amount of instances ever running to '1', no matter how many requests are made? OR Is there perhaps another way to store this data in a better way, without using the datastore and without using globals?
Interesting question. Some bad news first: I don't think there's a better way of storing the data; no, you won't be able to stop new instances from spawning; and no, you cannot make separate instances always have the same data. What you could do is have the instances periodically sync themselves with a master record in the datastore; by choosing the frequency of this intelligently and downloading/uploading the information in one lump you could limit the number of reads/writes to a level that works for you. This is firmly in kludge territory though. Despite finding the quota for just about everything else, I can't find the limits for free reads/writes, so it is possible that they're ludicrously small, but the fact that you're hitting them with a mere 10 smartphones raises a red flag to me. Are you certain that the smartphones are being polled (or calling in) at a sensible frequency? It sounds like you might be hammering them unnecessarily.
0
false
3
1,584
2011-12-23 13:10:01.587
How to keep global variables persistent over multiple google appengine instances?
Our situation is as follows: We are working on a school project where the intention is that multiple teams walk around in a city with smartphones and play a city game while walking. As such, we can have 10 active smartphones walking around in the city, all posting their location and requesting data from the Google App Engine. Someone is behind a web browser, watching all these teams walk around, and sending them messages etc. We are using the datastore the Google App Engine provides to store all the data these teams send and request, to store the messages and retrieve them etc. However, we soon found out we were at our max limit of reads and writes, so we searched for a solution to be able to retrieve periodic updates (which cost the most reads and writes) without using any of the limited resources Google provides. And obviously, because it's a school project we don't want to pay for more reads and writes. Storing this information in global variables seemed an easy and quick solution, which it was... but when we started to truly test we noticed some of our data was missing and then reappearing. This turned out to be because there were so many requests being made to the cloud that a new instance was created, and instances don't keep these global variables persistent. So our question is: Can we somehow make sure these global variables are always the same on every running instance of Google App Engine? OR Can we limit the amount of instances ever running to '1', no matter how many requests are made? OR Is there perhaps another way to store this data in a better way, without using the datastore and without using globals?
You should be using memcache. If you use the ndb (new database) library, you can automatically cache the results of queries. Obviously this won't improve your writes much, but it should significantly improve the numbers of reads you can do. You need to back it with the datastore as data can be ejected from memcache at any time. If you're willing to take the (small) chance of losing updates you could just use memcache. You could do something like store just a message ID in the datastore and have the controller periodically verify that every message ID has a corresponding entry in memcache. If one is missing the controller would need to reenter it.
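The pattern this answer describes is a read-through cache. A sketch with a plain dict standing in for the memcache client and a stub for the datastore read (both hypothetical stand-ins, not the App Engine API):

```python
cache = {}  # stand-in for memcache; real memcache can evict entries at any time

def datastore_read(key):
    # Placeholder for the expensive, quota-consuming datastore query.
    return "data-for-" + key

def cached_read(key):
    """Serve from the cache; on a miss, repopulate from the datastore."""
    if key not in cache:
        cache[key] = datastore_read(key)
    return cache[key]

print(cached_read("team-7"))  # first call hits the datastore
print(cached_read("team-7"))  # second call is served from the cache
```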
1.2
true
3
1,584
2011-12-23 13:10:01.587
How to keep global variables persistent over multiple google appengine instances?
Our situation is as follows: We are working on a school project where the intention is that multiple teams walk around in a city with smartphones and play a city game while walking. As such, we can have 10 active smartphones walking around in the city, all posting their location and requesting data from the Google App Engine. Someone is behind a web browser, watching all these teams walk around, and sending them messages etc. We are using the datastore the Google App Engine provides to store all the data these teams send and request, to store the messages and retrieve them etc. However, we soon found out we were at our max limit of reads and writes, so we searched for a solution to be able to retrieve periodic updates (which cost the most reads and writes) without using any of the limited resources Google provides. And obviously, because it's a school project we don't want to pay for more reads and writes. Storing this information in global variables seemed an easy and quick solution, which it was... but when we started to truly test we noticed some of our data was missing and then reappearing. This turned out to be because there were so many requests being made to the cloud that a new instance was created, and instances don't keep these global variables persistent. So our question is: Can we somehow make sure these global variables are always the same on every running instance of Google App Engine? OR Can we limit the amount of instances ever running to '1', no matter how many requests are made? OR Is there perhaps another way to store this data in a better way, without using the datastore and without using globals?
Consider the Jabber (XMPP) protocol for communication between peers. The free quota for it is quite high.
0
false
3
1,584
2011-12-24 18:25:48.433
Removing cocos2d-python from Mac
I installed cocos2d today on OS X Lion, but whenever I try to import cocos in the Python interpreter, I get a bunch of import errors. File "", line 1, in File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ python2.7/site-packages/cocos2d-0.5.0-py2.7.egg/cocos/init.py", line 105, in import_all() File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ python2.7/site-packages/cocos2d-0.5.0-py2.7.egg/cocos/init.py", line 89, in import_all import actions File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ python2.7/site-packages/cocos2d-0.5.0-py2.7.egg/cocos/actions/ init.py", line 37, in from basegrid_actions import * File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ python2.7/site-packages/cocos2d-0.5.0-py2.7.egg/cocos/actions/ basegrid_actions.py", line 62, in from pyglet.gl import * File "build/bdist.macosx-10.6-intel/egg/pyglet/gl/init.py", line 510, in File "build/bdist.macosx-10.6-intel/egg/pyglet/window/init.py", line 1669, in File "build/bdist.macosx-10.6-intel/egg/pyglet/window/carbon/ init.py", line 69, in File "build/bdist.macosx-10.6-intel/egg/pyglet/lib.py", line 90, in load_library File "build/bdist.macosx-10.6-intel/egg/pyglet/lib.py", line 226, in load_framework File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ python2.7/ctypes/init.py", line 431, in LoadLibrary return self._dlltype(name) File "/Library/Frameworks/Python.framework/Versions/2.7/lib/ python2.7/ctypes/init.py", line 353, in init self._handle = _dlopen(self._name, mode) OSError: dlopen(/System/Library/Frameworks/QuickTime.framework/ QuickTime, 6): no suitable image found. Did find: /System/Library/Frameworks/QuickTime.framework/QuickTime: mach-o, but wrong architecture /System/Library/Frameworks/QuickTime.framework/QuickTime: mach-o, but wrong architecture Since I can't fix it, I'd like to remove cocos2d entirely. The problem is that I can't seem to find a guide anywhere that details how to remove it from the Python installation. 
Any help regarding either of these problems is greatly appreciated.
You could fix it. The problem comes from the fact that cocos2d is built on top of pyglet, and the stable release of pyglet does not yet support the Mac OS X 64-bit architecture. You have to use the 1.2 release of pyglet or later, which as of now has not been released yet. A workaround is to remove any existing pyglet install: pip uninstall pyglet Then install the latest pyglet from the Mercurial repository: pip install hg+https://pyglet.googlecode.com/hg/
1.2
true
1
1,585
2011-12-24 23:22:13.187
Python's math module & "Think Python"
I'm stuck on chapter 3.3 "Math functions" of "Think Python". It tells me to import math (through the interpreter), then print math, and that I should get something like this: <module 'math' from '/usr/lib/python2.5/lib-dynload/math.so'> Instead I get <module 'math' <built-in>>. Anyway, that's not the problem, though I wasn't able to find a 'math.so' file in my Python folder. The most similar file is named test_math. The problem is that I'm supposed to write this: >>> ratio = signal_power / noise_power >>> decibels = 10 * math.log10(ratio) >>> radians = 0.7 >>> height = math.sin(radians) When I write the first line it tells me this: Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'signal_power' is not defined The book says "The first example uses log10 to compute a signal-to-noise ratio in decibels (assuming that signal_power and noise_power are defined)." So I assume that the problem might be that I didn't define 'signal_power', but I don't know how to do it and what to assign to it... This is the first time that I feel that this book is not holding my hand, and I'm already lost. To be honest I don't understand this whole chapter. By the way, I'm using Python 2.7 and Windows XP. I may copy and paste the whole chapter if anyone feels that I should do it. Python is my first language and I already tried to learn it using "Learn Python the Hard Way" but got stuck on chapter 16. So I decided to use "Think Python" and then go back to "Learn Python the Hard Way".
You've figured it out - you have to set signal_power's value before using it. As to what you have to set it to - it's not really a Python related question, but 1 is always a safe choice :) While you are at it, don't forget to define noise_power.
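Concretely, with made-up example values for the two variables the book assumes are already defined:

```python
import math

signal_power = 100.0  # example value, not from the book
noise_power = 10.0    # example value, not from the book

ratio = signal_power / noise_power
decibels = 10 * math.log10(ratio)
print(decibels)  # 10.0
```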
0.296905
false
1
1,586
2011-12-25 09:05:56.223
How to delay/postpone Gearman job?
I'm using gearman for synchronizing data on different servers. We have 1 main server and, for example, 10 local servers. Let me describe one possible situation. Say gearman started working and 5 jobs are done, so the data on those 5 servers is synced. When the next job starts, say we lose the connection with the server and it's not available right now. By gearman's logic it retries again and again, so the remaining jobs (for servers 7, 8, 9, 10) will not be executed until the 6th is done. The best solution would be to postpone the job, put it at the end of the queue, and continue with jobs 7-10. If someone knows how to do that, please post the way. PS: I'm using Python.
Gearman jobs can run in parallel or in series. Instead of using a single job, you should use concurrent jobs (one for each server).
0
false
1
1,587
2011-12-25 20:50:13.417
Click the javascript popup through webdriver
I am scraping a webpage using the Selenium webdriver in Python. The webpage I am working on has a form. I am able to fill in the form and then I click on the Submit button. It generates a popup window (a JavaScript alert). I am not sure how to click the popup through webdriver. Any idea how to do it? Thanks
That depends on the JavaScript function that handles the form submission. If there's no such function, try to submit the form using POST.
0
false
1
1,588
2011-12-27 03:07:15.233
Completely lost: Python configure script runs with errors
I downloaded the tarball of Python 2.7.2 to install on a SUSE Linux server - it comes with 2.6 and 3.1. Untarred it (I know - wrong lingo, sorry) to a directory. When trying to run ./configure, which should create a valid makefile, I can't get past step one: the script reports that it can't find a compiler on the path. But when I run the shell in the same directory and type "make", make runs. I am really unfamiliar with Linux, but this just seems so basic that I can't even begin to see what's wrong. I also downloaded what appears to be an RPM file for Python 2.7.2 for SUSE Linux, but I can't for the life of me figure out how to "import" this package into YaST2 or "Install Software". These two tools seem impenetrable and hostile to packages saved in the file system rather than accessed from specific distribution web sites. Really, this should be trivial but it is not. SUSE uses GNOME, and GNOME seems to have its own view of what the directory structure should be for desktop end-user-y kinds of files. That is where I put my downloaded tar file. Might I do better if I put it somewhere in /usr? Sorry to be so much more clueless than most Stack Overflow participants, but I am just not a Linux guy.
Suse has a package manager called Yast. It would do your installation with no fuss.
0
false
1
1,589
2011-12-27 21:20:01.587
How do I transfer data between two computers ?
I have two computers on a network. I'll call the computer I'm working on computer A and the remote computer B. My goal is to send a command to B, gather some information, transfer this information to computer A and use this in some meaningful way. My current method follows: Establish a connection to B with paramiko. Use paramiko to execute a remote command e.g. paramiko.exec_command('python file.py'). Write the information to a file with pickle, and use paramiko.ftp to transfer the file to computer A. Open this file and parse the information back into a usable form, like a class or dictionary. This seems very ad-hoc. My question is, is there a better way to transfer data between computers using Python? I know how to transfer files; this is a different question. I want to make an object on B and use it on A.
I have been doing something similar on a larger scale (with 15 clients). I use the pexpect module to do essentially ssh commands on the remote machine (computer B). Another module to look into would be the subprocess module.
0
false
1
1,590
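Since the goal is "make an object on B and use it on A", the stdlib `xmlrpc` modules are a lighter alternative to the pickle-then-ftp round trip: B exposes a function, A calls it and gets a plain dict back. A minimal sketch - the host, port, and `gather_info` payload are placeholders, and for demonstration both ends run on localhost:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def gather_info():
    """Stand-in for whatever 'python file.py' computes on computer B."""
    return {"hostname": "computer-b", "load": 0.42}

# On computer B: expose the function (port 0 = pick any free port for the demo).
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(gather_info)
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()

# On computer A: call it and receive a plain dict - no pickle file, no ftp.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
info = proxy.gather_info()
```

On a real network, B would bind to its own address and `serve_forever()`, and A would point the `ServerProxy` at B's hostname.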
2011-12-28 01:50:26.573
How to cache using Pyramid?
I've looked in the documentation and haven't seen (from first sight) anything about cache in Pyramid. Maybe I missed something... Or maybe there are some third party packages to help with this. For example, how to cache db query (SQLAlchemy), how to cache views? Could anyone give some link to examples or documentation? Appreciate any help! EDITED: How to use memcache or database type cache or filebased cache?
Your options are pyramid_beaker and dogpile.cache. pyramid_beaker was written to offer beaker caching for sessions; it also lets you configure beaker cache regions, which can be used elsewhere. dogpile.cache is a replacement for beaker. It hasn't been integrated to offer session support or environment.ini based setup yet; however, it addresses a lot of miscellaneous issues and shortcomings with beaker. You can't/shouldn't cache a SQLAlchemy query or its results. Weird and bad things will happen, because the SQLAlchemy objects are bound to a database session. It's much better to convert the SQLAlchemy results into another object/dict and cache those.
0.995055
false
1
1,591
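The "convert the results to dicts and cache those" advice can be illustrated with a tiny cache-region decorator. This is only a sketch of the pattern - real Pyramid code would use a `dogpile.cache` (or `pyramid_beaker`) region instead of this in-memory dict, and the "query" here is a hypothetical stand-in:

```python
import functools

def cache_region(store):
    """Very small stand-in for a dogpile.cache region: memoize by key."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = (fn.__name__,) + args
            if key not in store:
                store[key] = fn(*args)
            return store[key]
        return wrapper
    return decorator

STORE = {}
CALLS = []  # records how often the "expensive" body actually runs

@cache_region(STORE)
def user_summary(user_id):
    CALLS.append(user_id)  # pretend this line ran a SQLAlchemy query
    # Convert the (hypothetical) ORM row to a plain dict before caching,
    # so nothing cached is bound to a database session.
    return {"id": user_id, "name": f"user-{user_id}"}
```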
2011-12-28 01:51:48.053
Thermostat Control Algorithms
I am wondering exactly how a thermostat program would work and wanted to see if anyone had a better opinion on it. From what I know, there are a few control algorithms that could be used, some being Bang-Bang (On/Off), Proportional Control Algorithms, and PID Control. Looking on Wikipedia, there is a great deal of explanations for all three in which I understand completely. However, when trying to implement a proportional control algorithm, I feel that I am missing the need or the use of the proportional gain (K) and the output. Since today's thermostats do not include the need to vary power or current, how do I manipulate the output so that I can trigger the controls ON/OFF of the thermostat? Also, what is the value of the proportional gain or K?
The issue is overshooting the setpoint temperature. If you simply run the device until the set point temperature is reached, you will overshoot, wasting energy (and possibly doing damage, depending on what the thermostat controls.) You need to "ease up to" the setpoint so that you arrive at the set point just as the device is shutting down so that no more energy goes in to rise above the set point.
1.2
true
1
1,592
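One common way to use a proportional term with an on/off heater is time-proportioning: the controller output K*error is clamped to a 0-1 duty cycle and used as the fraction of each time window the relay stays on. The gain K is a tuning constant you choose empirically (start small and adjust); the numbers below are illustrative, not prescriptive:

```python
def duty_cycle(setpoint, measured, k=0.5):
    """Proportional control mapped to an on/off duty cycle in [0.0, 1.0]."""
    error = setpoint - measured          # positive when we are too cold
    return max(0.0, min(1.0, k * error))

def relay_on(setpoint, measured, t, window=10.0, k=0.5):
    """Time-proportioning: heater is on for the first duty*window seconds
    of each window, off for the rest - easing up as the setpoint nears."""
    return (t % window) < duty_cycle(setpoint, measured, k) * window
```

Far below the setpoint the duty cycle saturates at 1.0 (full on, like bang-bang); near the setpoint it tapers off, which is what reduces overshoot.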
2011-12-28 03:54:46.327
OpenERP - Report Creation
I am trying to create a new report with the report plugin and OpenOffice, but I don't know how to assign it in the OpenERP system. Can someone give me the exact steps for creating a new report and integrating it with OpenERP? Thanks in advance!
First you save the .odt file, then connect to the server and select "open new report", then send it to the server with a proper report name, and then keep editing your report by selecting the "modify existing report" option.
0.101688
false
1
1,593
2011-12-28 23:36:06.063
Plotting Complex Numbers in Python?
For a math fair project I want to make a program that will generate a Julia set fractal. To do this I need to plot complex numbers on a graph. Does anyone know how to do this? Remember I am using complex numbers, not regular coordinates. Thank you!
You could plot the real portion of the number along the X axis and plot the imaginary portion of the number along the Y axis. Plot the corresponding pixel with whatever color makes sense for the output of the Julia function for that point.
0.386912
false
2
1,594
2011-12-28 23:36:06.063
Plotting Complex Numbers in Python?
For a math fair project I want to make a program that will generate a Julia set fractal. To do this I need to plot complex numbers on a graph. Does anyone know how to do this? Remember I am using complex numbers, not regular coordinates. Thank you!
Julia set renderings are generally 2D color plots, with [x y] representing a complex starting point and the color usually representing an iteration count.
0
false
2
1,594
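Both answers combined give the standard recipe: treat each pixel as a complex starting point (real part = x, imaginary part = y), iterate z ← z² + c, and color by the iteration count before |z| exceeds 2. A minimal sketch (the actual plotting is left out - something like matplotlib's `imshow` over the counts grid would finish the job):

```python
def escape_count(z, c, max_iter=100):
    """Iterations of z -> z*z + c before |z| exceeds 2 (max_iter = 'in the set')."""
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

def julia_grid(c, width=40, height=30, max_iter=100):
    """Escape counts for a grid over [-2, 2] x [-1.5, 1.5], row-major."""
    grid = []
    for j in range(height):
        y = 1.5 - 3.0 * j / (height - 1)
        row = []
        for i in range(width):
            x = -2.0 + 4.0 * i / (width - 1)
            row.append(escape_count(complex(x, y), c, max_iter))
        grid.append(row)
    return grid
```

The constant c picks the particular Julia set; c = -0.8 + 0.156j is one classic choice.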
2011-12-29 02:33:05.313
How do I transform every doc in a large Mongodb collection without map/reduce?
Apologies for the longish description. I want to run a transform on every doc in a large-ish Mongodb collection with 10 million records approx 10G. Specifically I want to apply a geoip transform to the ip field in every doc and either append the result record to that doc or just create a whole other record linked to this one by say id (the linking is not critical, I can just create a whole separate record). Then I want to count and group by say city - (I do know how to do the last part). The major reason I believe I can't use map-reduce is that I can't call out to the geoip library in my map function (or at least that's the constraint I believe exists). So the central question is: how do I run through each record in the collection and apply the transform in the most efficient way? Batching via Limit/skip is out of the question as it does a "table scan" and is going to get progressively slower. Any suggestions? Python or Js preferred just because I have these geoip libs, but code examples in other languages welcome.
Actually I am also attempting another approach in parallel (as plan B) which is to use mongoexport. I use it with --csv to dump a large csv file with just the (id, ip) fields. Then the plan is to use a python script to do a geoip lookup and then post back to mongo as a new doc on which map-reduce can now be run for count etc. Not sure if this is faster or the cursor is. We'll see.
0
false
2
1,595
2011-12-29 02:33:05.313
How do I transform every doc in a large Mongodb collection without map/reduce?
Apologies for the longish description. I want to run a transform on every doc in a large-ish Mongodb collection with 10 million records approx 10G. Specifically I want to apply a geoip transform to the ip field in every doc and either append the result record to that doc or just create a whole other record linked to this one by say id (the linking is not critical, I can just create a whole separate record). Then I want to count and group by say city - (I do know how to do the last part). The major reason I believe I can't use map-reduce is that I can't call out to the geoip library in my map function (or at least that's the constraint I believe exists). So the central question is: how do I run through each record in the collection and apply the transform in the most efficient way? Batching via Limit/skip is out of the question as it does a "table scan" and is going to get progressively slower. Any suggestions? Python or Js preferred just because I have these geoip libs, but code examples in other languages welcome.
Since you have to go over "each record", you'll do one full table scan anyway, then a simple cursor (find()) + maybe only fetching few fields (_id, ip) should do it. python driver will do the batching under the hood, so maybe you can give a hint on what's the optimal batch size (batch_size) if the default is not good enough. If you add a new field and it doesn't fit the previously allocated space, mongo will have to move it to another place, so you might be better off creating a new document.
1.2
true
2
1,595
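The accepted advice - one full scan with a plain cursor, projecting only `_id` and `ip`, and writing new linked documents rather than growing the old ones - has this shape. The pymongo cursor is replaced by a plain iterable here so the loop itself can be shown self-contained, and `lookup_city` is a hypothetical stand-in for the real geoip library:

```python
def lookup_city(ip):
    """Hypothetical stand-in for a real geoip lookup."""
    return {"10.0.0.1": "Oslo", "10.0.0.2": "Lima"}.get(ip, "unknown")

def transform(docs):
    """docs plays the role of collection.find({}, {'ip': 1}) - one full scan."""
    for doc in docs:
        # Emit a new linked document instead of growing the old one in place;
        # this avoids mongo having to move documents that outgrow their slot.
        yield {"src_id": doc["_id"], "city": lookup_city(doc["ip"])}

source = [{"_id": 1, "ip": "10.0.0.1"}, {"_id": 2, "ip": "10.0.0.2"}]
geo_docs = list(transform(source))  # real code: new_collection.insert(...) per batch
```

With a real cursor you would also tune `batch_size` as the answer suggests, and insert the transformed docs in batches rather than materializing them all.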
2011-12-29 22:08:12.297
Website Links to Downloadable Files Don't Seem to Update
I have a problem with links on my website. Please forgive me if this is asked somewhere else, but I have no idea how to search for this. A little background on the current situation: I've created a python program that randomly generates planets for a sci-fi game. Each created planet is placed in a text file to be viewed at a later time. The program asks the user how many planets he/she wants to create and makes that many text files. Then, after all the planets are created, the program zips all the files into a file 'worlds.zip'. A link is then provided to the user to download the zip file. The problem: The first time I run this everything works perfectly fine. When run a second time, however, and I click the link to download the zip file it gives me the exact same zip file as I got the first time. When I ftp in and download the zip file directly I get the correct zip file, despite the link still being bad. Things I've tried: When I refresh the page the link is still bad. When I delete all my browser history the link is still bad. I've tried a different browser and that didn't work. I've attempted to delete the file from the web server and that didn't solve the problem. Changing the html file providing the link worked once, but didn't work a second time. Simplified Question: How can I get a link on my web page to update to the correct file? I've spent all day trying to fix this. I don't mind looking up information or reading articles and links, but I don't know how to search for this, so even if you guys just give me links to other sites I'll be happy (although directly answering my question will always be appreciated :)).
I don't know anything about Python, but in PHP, in some fopen modes, if a file is trying to be made with the same name as an existing file, it will cancel the operation.
0
false
1
1,596
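The symptom described (the same zip served on the second run even though the file on disk changed) is usually browser or proxy caching of a fixed URL. One common fix is to give each generated archive a unique name, so the link can never point at a stale copy. A sketch - the web-server side and the planet-file generation are left out, and the naming scheme is just an illustration:

```python
import os
import uuid
import zipfile

def build_worlds_zip(planet_files, out_dir):
    """Zip the generated planet files under a name that is unique per run."""
    name = f"worlds-{uuid.uuid4().hex}.zip"   # never collides with an old run
    path = os.path.join(out_dir, name)
    with zipfile.ZipFile(path, "w") as zf:
        for f in planet_files:
            zf.write(f, arcname=os.path.basename(f))
    return name  # put this fresh name into the download link on the page
```

A cheaper variant of the same idea keeps the filename fixed but appends a cache-busting query string (e.g. `worlds.zip?v=<timestamp>`) to the link.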
2011-12-30 06:05:55.800
getting 64 bit integer in python
So I am thinking of writing a bitboard in python or lisp. But I don't know how to ensure I would get a 64 bit integer in python. I have been reading documentation and found that the mpz library returns an unsigned 32 bit integer. Is this true? If not, what should I do?
Python 2 has two integer types: int, which is a signed integer whose size equals your machine's word size (but is always at least 32 bits), and long, which is unlimited in size. Python 3 has only one integer type, which is called int but is equivalent to a Python 2 long.
1
false
1
1,597
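Because Python integers grow without bound, a bitboard doesn't need a special 64-bit type - it just masks results back to 64 bits after operations that can overflow, such as left shifts. A sketch (the pawn constant is the usual chess convention, shown only as an example):

```python
MASK64 = 0xFFFFFFFFFFFFFFFF  # keep only the low 64 bits

def shl(board, n):
    """Left-shift a bitboard, discarding bits pushed past bit 63."""
    return (board << n) & MASK64

def popcount(board):
    """Number of occupied squares."""
    return bin(board & MASK64).count("1")

start_white_pawns = 0xFF00  # rank 2 of a chess board, one bit per square
```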
2011-12-30 20:00:00.603
Robocode + Python
The question is, how do you make a robot for Robocode using Python? There seem to be two options: Robocode + Jython Robocode for .NET + Iron Python There's some info for the first, but it doesn't look very robust, and none for the latter. Step by step, anyone?
As long as your Java class extends robocode.Robot, it is recognized as a robot. It doesn't matter where you put the class.
0.386912
false
1
1,598
2011-12-31 14:51:11.027
how to decompile user password in django
I have a SECRET_KEY; how do I decompile a user password using Python? I assume that the encryption method is sha1. Thanks.
You are asking the impossible. The passwords are salted and hashed. The way they're validated is by performing the same process on the re-supplied password. There's no way to 'decrypt' it.
1.2
true
1
1,599
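The answer's point - you never decrypt, you re-hash and compare - can be shown concretely. This sketch assumes the legacy Django format of that era, `algo$salt$hexdigest` with the sha1 digest computed over salt + password; the salt and password below are made-up examples:

```python
import hashlib

def check_password(raw_password, encoded):
    """Verify by re-hashing, the way Django's old sha1 hasher did."""
    algo, salt, digest = encoded.split("$")
    if algo != "sha1":
        raise ValueError("only the sha1 variant is sketched here")
    return hashlib.sha1((salt + raw_password).encode()).hexdigest() == digest

# Build a stored value the same way, so the round trip is self-contained.
stored = "sha1$a1b2c$" + hashlib.sha1(b"a1b2c" + b"hunter2").hexdigest()
```

The SECRET_KEY plays no role in password storage at all; it is used elsewhere (signing), which is another reason "decompiling" is the wrong mental model.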
2012-01-02 14:39:08.840
How would I create "Omegle"-like random chat with gevent?
I have searched tutorials and documentation for gevent, but seems that there isn't lots of it. I have coded Python for several years, also I can code PHP + JavaScript + jQuery. So, how would I create Omeglish chat, where one random person connects and then waits for another one to connect? I have understood that Omegle uses gevent, but my site would have to hold 200 - 1000 people simultaneously. Besides the server side, there should be fully functional client side too and I think it should be created with jQuery/JavaScript. I would need little help with the coding part. I can code Python well, but I have no idea how I would make that kind of chat system nor what would be the best Python library for it. The library doesn't have to be gevent but I have heard that it's very good for stuff like this. Thanks.
If I've understood you right, you just need to link the second person with someone who connected before. I think it's simple. The greenlet working with the person who comes first ('the first greenlet') just registers its inbound and outbound queues somewhere. The greenlet working with the second person gets these queues, unregisters them, and uses them for chat message exchange. The next person's greenlet finds out that there are no registered in/out queues, registers its own, and waits for the fourth. And so on. Is that what you need?
1.2
true
1
1,600
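The register/unregister pairing described in the answer can be sketched with plain stdlib queues; gevent would swap in its own Queue and run each `connect()` inside a greenlet, but the matchmaking logic is the same:

```python
import queue

waiting = []  # at most one entry: the (inbound, outbound) queues of a lone user

def connect():
    """Pair this user with a waiting one, or register and wait for the next."""
    if waiting:
        partner_in, partner_out = waiting.pop()   # unregister the first user
        return partner_out, partner_in            # my inbox is their outbox
    inbound, outbound = queue.Queue(), queue.Queue()
    waiting.append((inbound, outbound))           # register and wait
    return inbound, outbound

# First user registers; the second user gets matched with them.
a_in, a_out = connect()
b_in, b_out = connect()
a_out.put("hi from A")   # A's outbox is B's inbox
```

In the real gevent version, `waiting` would need no lock because greenlets only yield at I/O points, but it does need care around users who disconnect while waiting.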
2012-01-03 00:53:28.813
Generating IDs how to make IDs same length using zeroes?
Update: The requirement is "fixed length 9 digits", so 460 000 000 138 should be 460 000 138. I want to generate IDs in a special form such as 460 000 000 138, where 46 is the country code and the rest is the ID, and this number always(?) has the same number of digits, i.e. four groups of three. My input is this ID, which can be expected to be lower than some largest number. When starting the project from scratch the ID could be 1 and then just autoincrement as long as IDs don't collide (I would probably want a sequential count, but that is tricky in a distributed environment where actions can happen at the same time). So input could be for example 138. I now want to fill with zeros and the country number, in this case Sweden (46), so the output should be 460 000 000 138. Similarly, if input is 1138 the output should be in the same form and fill with fewer zeroes, i.e. 460 000 001 138. So I don't know how many zeroes I need. Can you help me? The solution should be in python. I will probably use the entity ID, fill with zeroes and add the country code - can you help me find an algorithm in python for that? Any help is greatly appreciated
"always(?)" indeed. Sweden = 46 looks like you mean the telephone not-necessarily-a-country code which is VARIABLE LENGTH ... for example CHINA = 86, HONG KONG (not a country) = 852, CANADA = USA = 1. Is your ID allowed to be variable length or not? If it is allowed to be variable, you would need do str(countrycode) + str(n).zfill(10) ... this allows n to be up to 10 digits. Otherwise, for a fixed total length of 12 digits, you would need str(countrycode) + str(n).zfill(12 - len(str(countrycode)))
0.135221
false
1
1,601
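With the updated requirement (9 digits total, country code included), the answer's zfill approach becomes a one-liner plus an overflow check, since variable-length country codes eat into the padding. A sketch - the 9-digit total is the question's requirement, not a standard:

```python
def make_id(country_code, n, total_digits=9):
    """Fixed-width ID: country code followed by a zero-padded sequence number."""
    cc = str(country_code)
    body = str(n).zfill(total_digits - len(cc))
    if len(cc + body) != total_digits:  # zfill never truncates, so check
        raise ValueError(f"{n} does not fit in {total_digits} digits after '{cc}'")
    return cc + body
```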
2012-01-03 03:42:56.000
how to copy an executable file with python?
How can I copy a .exe file with Python? I tried to read the file and then write the contents to another, but every time I try to open the file it says "IOError: is a directory". Any input is appreciated. EDIT: OK, I've read through the comments; I'll edit my code and see what happens. If I still get an error I'll post my code.
Use shutil.copyfile(src, dst) or shutil.copy(src, dst). It may not work in case of files in the C:\Program Files\ as they are protected by administrator rights by default.
0
false
2
1,602
2012-01-03 03:42:56.000
how to copy an executable file with python?
How can I copy a .exe file with Python? I tried to read the file and then write the contents to another, but every time I try to open the file it says "IOError: is a directory". Any input is appreciated. EDIT: OK, I've read through the comments; I'll edit my code and see what happens. If I still get an error I'll post my code.
Windows Vista and 7 will restrict your access to files installed into the Programs directories. Unless you run with UAC privileges you will never be able to open them. I hope I'm interpreting your error properly. In the future it is best to copy and paste the actual error message into your question.
0.081452
false
2
1,602
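The "is a directory" IOError in the question typically means the destination path was a folder. `shutil.copy` accepts a directory destination (it reuses the source filename), while `shutil.copyfile` requires a full file path. A self-contained sketch with a fake .exe (the bytes are made up):

```python
import os
import shutil
import tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
src = os.path.join(src_dir, "app.exe")
with open(src, "wb") as f:              # binary mode matters for .exe content
    f.write(b"MZ\x90\x00fake-exe-bytes")

shutil.copy(src, dst_dir)                             # dest may be a directory
shutil.copyfile(src, os.path.join(dst_dir, "b.exe"))  # dest must be a file path
```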
2012-01-03 16:37:26.713
how to share variable between python and go language?
I need to know how to share a variable between two programs: basically the Go program has to write a variable, like a string, and the Python program has to read this variable. Please help me, thank you in advance.
Use standard streams: use a simple printf-type command to print the string to stdout, then read it with raw_input() in Python. Run the two programs like so: ./output | ./read.py
1.2
true
2
1,603
2012-01-03 16:37:26.713
how to share variable between python and go language?
I need to know how to share a variable between two programs: basically the Go program has to write a variable, like a string, and the Python program has to read this variable. Please help me, thank you in advance.
In Windows, the most common way to communicate between two processes is a named pipe (it could also be TCP/IP, a web service, etc.). An ugly but lighter way is to write the value to a file and read it from Python.
0.201295
false
2
1,603
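The accepted stdout-to-stdin pipe, seen from the Python side, is just reading a line. Wrapping the read in a function makes it testable without an actual pipe; the Go program's name below is a placeholder:

```python
import io
import sys

def read_shared_value(stream=None):
    """Read one line the other program wrote (e.g. the Go side's fmt.Println)."""
    stream = stream if stream is not None else sys.stdin
    return stream.readline().rstrip("\n")

# In real use the Go program runs as:  ./writer | python read.py
# and read_shared_value() is called with no argument. Demo with a fake pipe:
value = read_shared_value(io.StringIO("hello from go\n"))
```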
2012-01-04 04:03:34.797
How to transfer a file between two connected computers in python?
I don't know if this has been answered before (I looked online but couldn't find one), but how can I send a file (.exe if possible) over a network to another computer that is connected to the network? I tried sockets but I could only send strings, and I've tried to learn ftplib but I don't understand it at all, or whether FTP is even what I am looking for, so I am at a complete standstill. Any input is appreciated (even more so if someone can explain FTP - is it like socket? All the examples I've seen don't have a server program that the client can connect to.)
ZeroMQ helps to replace sockets. You can send an entire file in one command. A ZMQ 'party' can be written in any major language and for a given ZMQ-powered software, it doesnt matter what the other end it written in. From their site: It gives you sockets that carry whole messages across various transports like in-process, inter-process, TCP, and multicast. You can connect sockets N-to-N with patterns like fanout, pub-sub, task distribution, and request-reply.
0.296905
false
1
1,604
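Plain sockets do carry arbitrary bytes, not just strings - the tricks are binary mode and reading until the sender closes its end. A minimal localhost sketch of the same send-whole-file idea the answer attributes to ZeroMQ (a real transfer would run one script per machine; the payload here is made-up stand-in bytes):

```python
import socket
import threading

def serve_once(sock, payload):
    """Sender side: accept one connection, push all bytes, close."""
    conn, _ = sock.accept()
    conn.sendall(payload)        # bytes, not str - works for any .exe content
    conn.close()                 # closing signals end-of-file to the receiver

def fetch(host, port):
    """Receiver side: read until the sender closes the connection."""
    chunks = []
    with socket.create_connection((host, port)) as s:
        while True:
            chunk = s.recv(4096)
            if not chunk:        # sender closed: transfer complete
                break
            chunks.append(chunk)
    return b"".join(chunks)

payload = b"MZ\x90\x00" + bytes(range(256))   # stand-in for an .exe file's bytes
listener = socket.socket()
listener.bind(("127.0.0.1", 0))               # port 0: pick any free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener, payload), daemon=True).start()
received = fetch("127.0.0.1", port)
```

ZeroMQ wraps exactly this framing-and-delivery work for you, which is why the answer suggests it.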
2012-01-04 17:26:46.170
How to start a script based on an event?
I'm in a Red Hat environment. I need to move a file from server A to server B when a file is available in a folder F. There's no constraint on the method used. Is it possible to trigger this event in python or any other scripts? It could be run as a daemon but I'm not sure how to do that. Any advice?
This is a job for cron. Cron man, man cron!
-0.067922
false
1
1,605
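Besides a cron job, a small Python daemon can watch folder F itself. On Linux, inotify (e.g. via the pyinotify package) gives true event-driven triggering; a portable fallback is just a polling loop, sketched here (the transfer step is left as a comment since the question doesn't fix a method):

```python
import os
import time

def wait_for_file(path, timeout=5.0, interval=0.05):
    """Poll until the file shows up; return True if it appeared in time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True   # here the daemon would scp/rsync the file to server B
        time.sleep(interval)
    return False
```

Run under cron, the same check with `timeout=0` degenerates into the answer's suggestion: test once per minute and act if the file is there.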
2012-01-05 01:13:52.487
Parallel computing
I have a two-dimensional table (matrix). I need to process each line in this matrix independently of the others. Processing each line is time-consuming. I'd like to use the parallel computing resources at our university (Canadian Grid something). Can I have some advice on how to start? I never used parallel computing before. Thanks :)
Like the commentators have said, find someone to talk to in your university. The answer to your question will be specific to what software is installed on the grid. If you have access to a grid, it's highly likely you also have access to a person whose job it is to answer your questions (and they will be pleased to help) - find this person!
0
false
1
1,606
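Before touching the grid, note that "independent rows, each slow" is exactly the embarrassingly-parallel shape that `multiprocessing.Pool.map` handles on a single machine; grid schedulers then scale the same pattern across nodes. A sketch - `process_row` is a made-up stand-in for the real per-row computation, and this assumes a Unix-like system where `Pool` can fork:

```python
from multiprocessing import Pool

def process_row(row):
    """Stand-in for the slow per-row computation."""
    return sum(x * x for x in row)

def process_matrix(matrix, workers=4):
    """Each row is processed in its own worker process; order is preserved."""
    with Pool(workers) as pool:
        return pool.map(process_row, matrix)
```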
2012-01-05 20:13:13.497
Python: import symbolic link of a folder
I have a folder A which contains some Python files and __init__.py. If I copy the whole folder A into some other folder B and create there a file with "import A", it works. But now I remove the folder and move in a symbolic link to the original folder. Now it doesn't work, saying "No module named foo". Does anyone know how to use symlink for importing?
This kind of behavior can happen if your symbolic links are not set up right. For example, if you created them using relative file paths. In this case the symlinks would be created without error but would not point anywhere meaningful. If this could be the cause of the error, use the full path to create the links and check that they are correct by lsing the link and observing the expected directory contents.
0.999329
false
1
1,607
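The answer's diagnosis (a symlink created from a relative path points nowhere) can be reproduced and fixed in a few lines. This sketch, assuming a Unix-like system with symlink support, builds a tiny package `A`, links to its absolute path from another directory, and imports through the link:

```python
import importlib
import os
import sys
import tempfile

pkg_home = tempfile.mkdtemp()    # where the real package A lives
link_home = tempfile.mkdtemp()   # the folder B that only holds a symlink
pkg_dir = os.path.join(pkg_home, "A")
os.mkdir(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("VALUE = 42\n")

# The key point: link to the ABSOLUTE path of A, not a relative one.
os.symlink(os.path.abspath(pkg_dir), os.path.join(link_home, "A"))

sys.path.insert(0, link_home)
importlib.invalidate_caches()
import A  # resolved through the symlink
```

If the link had been created with a relative source path from the wrong working directory, this import would raise the very "No module named" error from the question.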
2012-01-07 18:46:22.470
how long can i store data in cPickle?
I'm storing a list of dictionaries in cPickle, but need to be able to add and remove to/from it occasionally. If I store the dictionary data in cPickle, is there some sort of limit on when I will be able to load it again?
You can store it for as long as you want. It's just a file. However, if your data structures start becoming complicated, it can become tedious and time consuming to unpickle, update and pickle the data again. Also, it's just file access so you have to handle concurrency issues by yourself.
1.2
true
3
1,608
2012-01-07 18:46:22.470
how long can i store data in cPickle?
I'm storing a list of dictionaries in cPickle, but need to be able to add and remove to/from it occasionally. If I store the dictionary data in cPickle, is there some sort of limit on when I will be able to load it again?
cPickle is just a faster implementation of pickle. You can use it to convert a python object to its string equivalent and retrieve it back by unpickling. You can do one of the two things with a pickled object: Do not write to a file In this case, the scope of your pickled data is similar to that of any other variable. Write to a file We can write this pickled data to a file and read it whenever we want and get back the python objects/data structures. Your pickled data is safe as long as your pickled file is stored on the disk.
0
false
3
1,608
2012-01-07 18:46:22.470
how long can i store data in cPickle?
I'm storing a list of dictionaries in cPickle, but need to be able to add and remove to/from it occasionally. If I store the dictionary data in cPickle, is there some sort of limit on when I will be able to load it again?
No. cPickle just writes data to files and reads it back; why would you think there would be a limit?
0
false
3
1,608
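As the answers say, the pickle file is just a file with no expiry; the tedious part is the load-update-save cycle. A minimal round trip for the question's list-of-dicts case (the record contents are made-up; `cPickle` in Python 2 is simply `pickle` in Python 3):

```python
import pickle
import tempfile

tmp = tempfile.NamedTemporaryFile(suffix=".pkl", delete=False)
path = tmp.name
tmp.close()

records = [{"name": "alpha", "score": 1}, {"name": "beta", "score": 2}]
with open(path, "wb") as f:
    pickle.dump(records, f)

# Any time later: load, add, remove, and save again.
with open(path, "rb") as f:
    loaded = pickle.load(f)
loaded.append({"name": "gamma", "score": 3})
loaded = [d for d in loaded if d["name"] != "alpha"]
with open(path, "wb") as f:
    pickle.dump(loaded, f)

with open(path, "rb") as f:
    final = pickle.load(f)
```

As the accepted answer warns, nothing here is concurrency-safe: two processes doing this cycle at once will lose updates unless you add your own locking.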