Call a functor in Python Question: Should be easy, but somehow I don't get it. I want to apply a given function. Background is copy a class and applying a given method on the newly created copy. **Major Edit. Sorry for that.** import copy class A: def foo(self,funcName): print 'foo' funcName() def Bar(self): print 'Bar' def copyApply(self,funcName): cpy = copy.copy() # apply funcName to cpy?? a = A() func = a.Bar() a.foo(func) # output 'Bar' b = a.copyApply(foo) # new copy with applied foo Answer: Note that your `A.foo` does not take the _name_ of a function, but the function itself. class A: def bar(self): print 'Bar' def apply(self, func): func() # call it like any other function def copyApply(self, func): cpy = copy.copy(self) func(cpy) # cpy becomes the self parameter a = A() func = a.bar # don't call the function yet a.apply(func) # call the bound method `a.bar` a.apply(a.bar) # same as the line above a.copyApply(A.bar) # call the unbound method `A.bar` on a new `A` In python, `a.foo()` is the same as `A.foo(a)`, where `a` is of type `A`. Therefore, your `copyApply` method takes the _unbound_ bar method as its argument, whereas `foo` takes a _bound_ method.
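A quick interactive sketch of that last point, reusing the class above (illustrative only):

    a = A()
    print a.bar    # bound method: self is already fixed to a
    print A.bar    # unbound method: expects an A instance as its first argument
    A.bar(a)       # prints 'Bar' -- exactly equivalent to a.bar()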
Importing scrapy.conf.settings generates error Question: Using scrapy version 0.16. Trying to create a standalone spider runnable from a script as per [this gist](https://gist.github.com/484009). Importing the above generates this: Traceback (most recent call last): File "./spiderctl.py", line 8, in <module> from scrapy.conf import settings File "/usr/local/lib/python2.6/dist-packages/scrapy/conf.py", line 4, in <module> from scrapy.project import crawler ImportError: cannot import name crawler The file conf.py contains the line: from scrapy.project import crawler But the file scrapy.project is only a comment that the module is deprecated and users should instead implement the from_crawler class method. How do I implement this in the context of the code above? Answer: You can find the answer to this question in [this new FAQ](http://doc.scrapy.org/en/latest/faq.html#i-m-getting-an-error-cannot- import-name-crawler): Basically, you need to access the crawler differently, using the `from_crawler` class method instead of importing `crawler` from `scrapy.project`.
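A minimal sketch of what that FAQ entry suggests, assuming a Scrapy 0.16 spider (the class name and attributes below are placeholders, not from the original gist):

    from scrapy.spider import BaseSpider

    class MySpider(BaseSpider):
        name = 'standalone'

        @classmethod
        def from_crawler(cls, crawler):
            # the crawler is handed in here instead of being imported
            # from the deprecated scrapy.project module
            spider = cls()
            spider.crawler = crawler
            spider.settings = crawler.settings  # replaces scrapy.conf.settings
            return spider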
How do I make a regular expression that matches the following in Python? Question: Assuming I have many strings that are like this: <a id="1.1"> <a id="1.1.1"> <a id="1.1.2"> <a id="1.1.3"> <a id="1.2"> <a id="1.2.1"> I want to make a regular expression that catches only the ones with "#.#", so I am trying to match it such that it ends with a double quote, but I am getting no matches. re.compile("[1-9].[1-9]\"") The resulting output I expect is a list containing: <a id="1.1"> <a id="1.2"> What am I doing wrong? (Please suggest if there is a better way to match than the method I am using.) Answer: Put double quotes on both sides: `r'"\d\.\d"'`. For example, In [109]: import re In [110]: text = '''<a id="1.1"> <a id="1.1.1"> <a id="1.1.2"> <a id="1.1.3"> <a id="1.2"> <a id="1.2.1">'''.splitlines() In [111]: [line for line in text if re.search(r'"\d\.\d"', line)] Out[111]: ['<a id="1.1">', '<a id="1.2">']
os.waitpid doesn't return a tuple containing pid, and its exit status Question: I try to come up with a workaround for this [python bug](http://bugs.python.org/issue3905) calling `subprocess`. I figured the way to go is using `os.system` in combination with `os.waitpid`. To test this I wrote the code below. `system_call_test.py` writes the `pid` and lot's of text to the file `f`. But calling `os.waitpid()` always get me this error: `OSError: [Errno 10] No child processes`. So I'm having a hard time to check if this construct is working properly. How can I ensure that the script waits for the termination of the other. I'm on windows XP/ python 2.7. import os f = r'D:\temp\called.txt' s = os.system('C:\Python27\python.exe D:\python_spullen\system_call_test.py') with open(f, 'r') as f_in: i = f_in.readline()[-4:] print i rr = os.waitpid(int(i),0) print rr Answer: `os.system` returns [the exit code of the process](http://stackoverflow.com/questions/6466711/what-is-return-of-os- system-in-python). So `s` above is already populated and the process has exited. `os.waitpid` has nothing to wait on.
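To make that concrete, a minimal sketch of the same call without the superfluous wait (paths as in the question):

    import os

    # os.system blocks until the child finishes; by the time it returns,
    # the child is gone and there is nothing left for os.waitpid to reap
    s = os.system('C:\\Python27\\python.exe D:\\python_spullen\\system_call_test.py')
    print s  # already the exit status of the finished process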
Using multiprocessing.Manager.list instead of a real list makes the calculation take ages Question: I wanted to try different ways of using `multiprocessing` starting with this example: $ cat multi_bad.py import multiprocessing as mp from time import sleep from random import randint def f(l, t): # sleep(30) return sum(x < t for x in l) if __name__ == '__main__': l = [randint(1, 1000) for _ in range(25000)] t = [randint(1, 1000) for _ in range(4)] # sleep(15) pool = mp.Pool(processes=4) result = pool.starmap_async(f, [(l, x) for x in t]) print(result.get()) Here, `l` is a list that gets copied 4 times when 4 processes are spawned. To avoid that, the documentation page offers using queues, shared arrays or proxy objects created using `multiprocessing.Manager`. For the last one, I changed the definition of `l`: $ diff multi_bad.py multi_good.py 10c10,11 < l = [randint(1, 1000) for _ in range(25000)] --- > man = mp.Manager() > l = man.list([randint(1, 1000) for _ in range(25000)]) The results still look correct, but the execution time has increased so dramatically that I think I'm doing something wrong: $ time python multi_bad.py [17867, 11103, 2021, 17918] real 0m0.247s user 0m0.183s sys 0m0.010s $ time python multi_good.py [3609, 20277, 7799, 24262] real 0m15.108s user 0m28.092s sys 0m6.320s The docs do say that this way is slower than shared arrays, but this just feels wrong. I'm also not sure how I can profile this to get more information on what's going on. Am I missing something? P.S. With shared arrays I get times below 0.25s. P.P.S. This is on Linux and Python 3.3. Answer: Linux uses [copy-on-write](http://en.wikipedia.org/wiki/Copy-on-write) when subprocesses are `os.fork`ed. To demonstrate: import multiprocessing as mp import numpy as np import logging import os logger = mp.log_to_stderr(logging.WARNING) def free_memory(): total = 0 with open('/proc/meminfo', 'r') as f: for line in f: line = line.strip() if any(line.startswith(field) for field in ('MemFree', 'Buffers', 'Cached')): field, amount, unit = line.split() amount = int(amount) if unit != 'kB': raise ValueError( 'Unknown unit {u!r} in /proc/meminfo'.format(u = unit)) total += amount return total def worker(i): x = data[i,:].sum() # Exercise access to data logger.warn('Free memory: {m}'.format(m = free_memory())) def main(): procs = [mp.Process(target = worker, args = (i, )) for i in range(4)] for proc in procs: proc.start() for proc in procs: proc.join() logger.warn('Initial free: {m}'.format(m = free_memory())) N = 15000 data = np.ones((N,N)) logger.warn('After allocating data: {m}'.format(m = free_memory())) if __name__ == '__main__': main() which yielded [WARNING/MainProcess] Initial free: 2522340 [WARNING/MainProcess] After allocating data: 763248 [WARNING/Process-1] Free memory: 760852 [WARNING/Process-2] Free memory: 757652 [WARNING/Process-3] Free memory: 757264 [WARNING/Process-4] Free memory: 756760 This shows that initially there was roughly 2.5GB of free memory. After allocating a 15000x15000 array of `float64`s, there was 763248 KB free. This roughly makes sense since 15000**2*8 bytes = 1.8GB and the drop in memory, 2.5GB - 0.763248GB is also roughly 1.8GB. Now after each process is spawned, the free memory is again reported to be ~750MB. There is no significant decrease in free memory, so I conclude the system must be using copy-on-write. 
Conclusion: If you do not need to modify the data, defining it at the global level of the `__main__` module is a convenient and (at least on Linux) memory- friendly way to share it among subprocesses.
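A minimal sketch of that conclusion applied to the original example (fork-based and copy-on-write, so Linux-specific, and for read-only access only):

    import multiprocessing as mp
    from random import randint

    def f(t):
        # l is read as a global: each forked worker sees the parent's
        # copy-on-write pages instead of receiving a pickled copy
        return sum(x < t for x in l)

    if __name__ == '__main__':
        l = [randint(1, 1000) for _ in range(25000)]
        t = [randint(1, 1000) for _ in range(4)]
        pool = mp.Pool(processes=4)
        print(pool.map(f, t))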
Python rock paper scissors score counter Question: I am working on a rock paper scissors game. Everything seems to be working well except the win/loss/tie counter. I have looked at some of the other games people have posted on here and I still cannot get mine to work. I feel like I am soooooo close but I just can't get it! thanks for any help guys. this is my first time posting in here so I am sorry if I messed up the formatting. I edited the code but still cannot get the program to recognize the counter without using global variables. at one point of my editing I managed to get it to count everything as a tie... i dont know how and I lost it somewhere along my editing. lol. -thanks again everyone! here is what I get when I run the program: Prepare to battle in a game of paper, rock, scissors! Please input the correct number according to the object you want to choose. Select rock(1), paper(2), or scissors(3): 1 Computer chose PAPER . You chose ROCK . You lose! Play again? Enter 'y' for yes or 'n' for no. y Prepare to battle in a game of paper, rock, scissors! Please input the correct number according to the object you want to choose. Select rock(1), paper(2), or scissors(3): 2 Computer chose PAPER . You chose PAPER . It's a tie! Play again? Enter 'y' for yes or 'n' for no. y Prepare to battle in a game of paper, rock, scissors! Please input the correct number according to the object you want to choose. Select rock(1), paper(2), or scissors(3): 3 Computer chose SCISSORS . You chose SCISSORS . It's a tie! Play again? Enter 'y' for yes or 'n' for no. n Your total wins are 0 . Your total losses are 0 . Your total ties are 0 . * * * #import the library function "random" so that you can use it for computer #choice import random #define main def main(): #assign win, lose, and tie to zero for tallying win = 0 lose = 0 tie = 0 #control loop with 'y' variable play_again = 'y' #start the game while play_again == 'y': #make a welcome message and give directions print('Prepare to battle in a game of paper, rock, scissors!') print('Please input the correct number according') print('to the object you want to choose.') #Get the player and computers choices and #assign them to variables computer_choice = get_computer_choice() player_choice = get_player_choice() #print choices print('Computer chose', computer_choice, '.') print('You chose', player_choice, '.') #determine who won winner_result(computer_choice, player_choice) #ask the user if they want to play again play_again = input("Play again? Enter 'y' for yes or 'n' for no. 
") #print results print('Your total wins are', win, '.') print('Your total losses are', lose, '.') print('Your total ties are', tie, '.') #define computer choice def get_computer_choice(): #use imported random function from library choice = random.randint(1,3) #assign what the computer chose to rock, paper, or scissors if choice == 1: choice = 'ROCK' elif choice == 2: choice = 'PAPER' else: choice = 'SCISSORS' #return value return choice #define player choice def get_player_choice(): #assign input to variable by prompting user choice = int(input("Select rock(1), paper(2), or scissors(3): ")) #Detect invalid entry while choice != 1 and choice != 2 and choice != 3: print('The valid numbers are rock(type in 1), paper(type in 2),') print('or scissors(type in 3).') choice = int(input('Enter a valid number please: ')) #assign what the player chose based on entry if choice == 1: choice = 'ROCK' elif choice == 2: choice = 'PAPER' else: choice = 'SCISSORS' #return value return choice #determine the winner from the variables def winner_result(computer_choice, player_choice): #if its a tie, add 1 to tie variable and display message if computer_choice == player_choice: result = 'tie' print("It's a tie!") #if its a win, add to win tally and display message elif computer_choice == 'SCISSORS' and player_choice == 'ROCK': result = 'win' print('ROCK crushes SCISSORS! You win!') elif computer_choice == 'PAPER' and player_choice == 'SCISSORS': result = 'win' print('SCISSORS cut PAPER! You win!') elif computer_choice == 'ROCK' and player_choice == 'PAPER': result = 'win' print('PAPER covers ROCK! You win!') #if it does not match any of the win criteria then add 1 to lose and #display lose message else: result = 'lose' print('You lose!') def result(winner_result,player_choice, computer_choice): # accumulate the appropriate winner of game total if result == 'win': win += 1 elif result == 'lose': lose += 1 else: tie += 1 return result main() Answer: Your `winner_result` function returns before it increments the win counters. If you remove all the `return` statements from it, the counters should be updated. The `return` statements aren't needed anyway because the `if/elif/else` structure ensures that only one of the possible outcomes will be executed. As Junuxx says in a comment, you also need to assign values to the `winner_result` variable properly, i.e. `winner_result = 'win'` instead of `winner_result == 'win'`. I'd also rename the `winner_result` variable or the function, because it's confusing to have both use the same name. And the `win/lose/tie` variables are currently local, which means that `main` and `winner_result` will have their own copies of these variables, so `main`'s values will always be zero. What you can do is make them global variables: Assign them to zero in the global scope (outside any function), and add the line `global win, lose, tie` inside the function `winner_result`.
A fast way to extract all ANCHORs from HTML in python Question: Is there any simple, robust and fast way to extract all anchors' href attributes from HTML in Python? I know there is a solution using BeautifulSoup, but the problem with BeautifulSoup is that it's too heavy, and consumes a lot of memory on some URLs. The task that I'm talking about is very simple - just run over an HTML document and return all the HREFs of all the anchors. Does anybody know? Thanks! Answer: You could use the [`HTMLParser`](http://docs.python.org/2/library/htmlparser.html). from HTMLParser import HTMLParser class extract_href(HTMLParser): def handle_starttag(self, tag, attrs): if tag == "a": for key, val in attrs: if key == 'href': print val parser = extract_href() parser.feed("""<p><a href='www.stackoverflow.com'>link</a></p>""")
How can I run Instruments from Python? Question: Anyone know how to run Instruments from Python? I tried to use os.system and it didn't work. If I run Instruments from a command line, I only need to run: instruments -w id -t xxxxxxxxxxxxxx xx.js I will need to run the above in python. I suppose the following will work import os os.system('instruments -w id -t xxxxx xx.js') I also tried with os.system ('open -a instruments xxxxxx') Neither way worked. Anyone have a better idea? I expected it to run Instruments just like running it from the command line, and start running the JavaScript scripts using Instruments. It didn't happen. What happened was just a 256 printed out. Answer: It's hard to tell from your code snippet because you might have cut a lot out to be brief, but it looks like you are invoking the command for instruments incorrectly. Here's a line-broken example: instruments \ -D [trace document to write] \ -t [Automation Trace Template] \ [Your App Bundle] \ -e UIARESULTSPATH [where results should be written] \ -e UIASCRIPT [your actual script file] For a full example of how to run Instruments from the command line, check out my [demo repo](https://github.com/jonathanpenn/AutomationExample/blob/master/run_automation.sh). That has a shell script that walks through how it works to invoke Instruments from the command line. You can use that as the basis for launching from Python. Also, I include a copy of my `unix_instruments` wrapper script. Instruments doesn't return a non-zero status code if automation scripts log failures, so this wrapper script keeps an eye on all the log output and returns a non-zero status code for you. How to use it is all in the repo, too.
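If you would rather stay in Python than build a shell string for os.system, a hedged sketch using `subprocess` (every path below is a placeholder for your own):

    import subprocess

    cmd = [
        'instruments',
        '-D', '/tmp/mytrace.trace',                  # trace document to write
        '-t', '/path/to/Automation.tracetemplate',   # automation trace template
        '/path/to/Your.app',                         # your app bundle
        '-e', 'UIARESULTSPATH', '/tmp/results',
        '-e', 'UIASCRIPT', 'xx.js',
    ]
    status = subprocess.call(cmd)  # blocks and returns the exit code
    print status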
Logging variable data with new format string Question: I use logging facility for python 2.7.3. [Documentation for this Python version say](http://docs.python.org/2/howto/logging.html): > the logging package pre-dates newer formatting options such as str.format() > and string.Template. These newer formatting options are supported... I like 'new' format with curly braces. So i'm trying to do something like: log = logging.getLogger("some.logger") log.debug("format this message {0}", 1) And get error: > TypeError: not all arguments converted during string formatting What I miss here? P.S. I don't want to use log.debug("format this message {0}".format(1)) because in this case the message is always being formatted regardless of logger level. Answer: **EDIT:** take a look at the [`StyleAdapter` approach in @Dunes' answer](http://stackoverflow.com/a/24683360/4279) unlike this answer; it allows to use alternative formatting styles without the boilerplate while calling logger's methods (debug(), info(), error(), etc). * * * From the docs — [Use of alternative formatting styles](http://docs.python.org/3/howto/logging-cookbook.html#use-of- alternative-formatting-styles): > Logging calls (logger.debug(), logger.info() etc.) only take positional > parameters for the actual logging message itself, with keyword parameters > used only for determining options for how to handle the actual logging call > (e.g. the exc_info keyword parameter to indicate that traceback information > should be logged, or the extra keyword parameter to indicate additional > contextual information to be added to the log). So you cannot directly make > logging calls using str.format() or string.Template syntax, because > internally the logging package uses %-formatting to merge the format string > and the variable arguments. There would no changing this while preserving > backward compatibility, since all logging calls which are out there in > existing code will be using %-format strings. And: > There is, however, a way that you can use {}- and $- formatting to construct > your individual log messages. Recall that for a message you can use an > arbitrary object as a message format string, and that the logging package > will call str() on that object to get the actual format string. Copy-paste this to `wherever` module: class BraceMessage(object): def __init__(self, fmt, *args, **kwargs): self.fmt = fmt self.args = args self.kwargs = kwargs def __str__(self): return self.fmt.format(*self.args, **self.kwargs) Then: from wherever import BraceMessage as __ log.debug(__('Message with {0} {name}', 2, name='placeholders')) Note: actual formatting is delayed until it is necessary e.g., if DEBUG messages are not logged then the formatting is not performed at all.
Python: How I can define in sphinx which .rst files and directories should be used? Question: **How I can define in sphinx which .rst files and directories should be used?** I want to include an automatic documentation generator in my testing/building/documentation script. _sphinx-quickstart_ was executed in my workspace and created an index.rst-file. As sphinx uses restructured text files for documentation I navigated through the workspace and create them manually with _sphinx-autogen_. It resulted into the tasks.rst file (see below). When I use 'make html' I get several warnings: > **WARNING** : invalid signature for automodule (u'tasks/add_to_config') > > **WARNING** : autodoc can't import/find module 'tasks.add_to_config', it > reported error: "No module named wl_build.tasks", please check your spelling > and sys.path > > **WARNING** : don't know which module to import for autodocumenting > u'tasks/add_to_config' (try placing a "module" or "currentmodule" directive > in the document, or giving an explicit module name) > > ... **My index.rst** Welcome to build's documentation! ==================================== Contents: .. toctree:: :maxdepth: 2 .. automodule:: tasks/add_to_config :members: .. automodule:: tasks/build_egg :members: **tasks.rst** tasks Package ============= :mod:`tasks` Package -------------------- .. automodule:: tasks.__init__ :members: :undoc-members: :show-inheritance: :mod:`add_to_config` Module --------------------------- .. automodule:: tasks.add_to_config :members: :undoc-members: :show-inheritance: :mod:`build_egg` Module ----------------------- .. automodule:: tasks.build_egg :members: :undoc-members: :show-inheritance: Answer: Try replacing the `/` characters in your index.rst file with periods (`.`) like this: Welcome to build's documentation! ==================================== Contents: .. toctree:: :maxdepth: 2 .. automodule:: tasks.add_to_config :members: .. automodule:: tasks.build_egg :members: See if that helps. If Sphinx still can't find the code to document, then you'll probably need to modify your `PYTHONPATH` or alter `sys.path` in your `conf.py` file in order to help Sphinx find what it's looking for.
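For the `sys.path` route, a minimal sketch for the top of `conf.py`, assuming the `tasks` package lives one directory above the docs folder:

    import os
    import sys
    # make the package importable for autodoc; adjust '..' to your layout
    sys.path.insert(0, os.path.abspath('..'))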
Python - How do you detect that a module has been loaded by custom loader? Question: Before Python-3.3, I detected that a module was loaded by a custom loader with `hasattr(mod, '__loader__')`. After Python-3.3, all modules have the `__loader__` attribute regardless of being loaded by a custom loader. Python-2.7, 3.2: >>> import xml >>> hasattr(xml, '__loader__') False Python-3.3: >>> import xml >>> hasattr(xml, '__loader__') True >>> xml.__loader__ <_frozen_importlib.SourceFileLoader object at ...> How do I detect that a module was loaded by a custom loader? Answer: The simple check for the existence of the `__loader__` attribute is no longer sufficient in Python 3.3. [PEP 302](http://www.python.org/dev/peps/pep-0302) requires that all loaders store their information in the `__loader__` attribute of a module. I would add an additional check for the `type(module.__loader__)`to see if the module was loaded with the custom loader (or in a list of loaders) you are searching for: >>> CUSTOM_LOADERS = [MyCustomLoader1, MyCustomLoader2] >>> type(xml.__loader__) in CUSTOM_LOADERS True This may be bad from a maintenance point-of-view, in that you will have to keep the list of custom loaders up to date. Another similar approach may be creating a list of the standard built-in loaders and change the check to be `not in` STANDARD_LOADERS. This will still have the maintenance issue though.
Haven't started thread but Python throws exception 'Thread already start' Question: I am writing a producer-consumer application. The producer threads start fine but when I try to start the consumer threads, I get an exception. Here's the relevant code: #Producer threads for i in range(nThreads): self.producerThreads.append(MailThread(i, settings.MAX_EMAILS_PERPASS, self.toFetchQueue, self.rawEmailsQueue, self.stopEvent)) self.producerThreads[i].start() logging.info('Started producer thread %d', i) #Consumer threads #for i in range(settings.MAX_CONS_THREADS): try: self.consumerThreads.append(ProcessThread(i, settings.STORE_DIRECTORY, settings.DELETE_ONPIPE, self.rawEmailsQueue, self.stopEvent)) self.consumerThreads[i].start() logging.info('Started consumer thread %d', i) except Exception, e: logging.error('Failed to start consumer thread %s', str(e)) Here's the consumer class: import logging, commands, threading, uuid, os, settings, Queue class ProcessThread(threading.Thread): """ Class to process the emails. """ def __init__(self, threadCount, storeDirectory, deleteOnPipe, rawEmailsQueue, stopEvent): self.threadCount = threadCount self.rawEmailsQueue = rawEmailsQueue self.stopEvent = stopEvent self.storeDirectory = storeDirectory self.deleteOnPipe = deleteOnPipe threading.Thread.__init__(self) def run(self): logging.info('Run process for consumer thread %d', self.threadCount) while not self.stopEvent.is_set(): try: emailContainer = rawEmailsQueue.get(False) logging.debug('Got a new email') except Queue.Empty: logging.debug('No emails in queue, going to sleep for a while') sleep(0.1) continue #Rest of processing code I couldn't get correct indentation, it's fine in my code Answer: It was a stupid mistake(probably because I've been coding in PHP a lot). I had initialized the arrays like this: self.producerThreads = self.consumerThreads = [] Both arrays referred to the same memory.
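In other words, both names were bound to a single list, so every started producer thread also appeared in the consumer list. The fix is to give each attribute its own list object in `__init__`:

    # two distinct lists, instead of two names bound to one list
    self.producerThreads = []
    self.consumerThreads = []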
benchmarks: does python have a faster way of walking a network folder? Question: I need to walk through a folder with approximately ten thousand files. My old vbscript is very slow in handling this. Since I've started using Ruby and Python since then, I made a benchmark between the three scripting languages to see which would be the best fit for this job. The results of the tests below on a subset of 4500 files on a shared network are Python: 106 seconds Ruby: 5 seconds Vbscript: 124 seconds That Vbscript would be slowest was no surprise but I can't explain the difference between Ruby and Python. Is my test for Python not optimal? Is there a faster way to do this in Python? The test for thumbs.db is just for the test, in reality there are more tests to do. I needed something that checks every file on the path and doesn't produce too much output to not disturb the timing. The results are a bit different each run but not by much. #python2.7.0 import os def recurse(path): for (path, dirs, files) in os.walk(path): for file in files: if file.lower() == "thumbs.db": print (path+'/'+file) if __name__ == '__main__': import timeit path = '//server/share/folder/' print(timeit.timeit('recurse("'+path+'")', setup="from __main__ import recurse", number=1)) 'vbscript5.7 set oFso = CreateObject("Scripting.FileSystemObject") const path = "\\server\share\folder" start = Timer myLCfilename="thumbs.db" sub recurse(folder) for each file in folder.Files if lCase(file.name) = myLCfilename then wscript.echo file end if next for each subfolder in folder.SubFolders call Recurse(subfolder) next end Sub set folder = oFso.getFolder(path) recurse(folder) wscript.echo Timer-start #ruby1.9.3 require 'benchmark' def recursive(path, bench) bench.report(path) do Dir["#{path}/**/**"].each{|file| puts file if File.basename(file).downcase == "thumbs.db"} end end path = '//server/share/folder/' Benchmark.bm {|bench| recursive(path, bench)} EDIT: since i suspected the print caused a delay i tested the scripts with printing all 4500 files and also printing none, the difference remains, R:5 P:107 in the first case and R:4.5 P:107 in the latter EDIT2: based on the answers and comments here a Python version that in some cases could run faster by skipping folders import os def recurse(path): for (path, dirs, files) in os.walk(path): for file in files: if file.lower() == "thumbs.db": print (path+'/'+file) def recurse2(path): for (path, dirs, files) in os.walk(path): for dir in dirs: if dir in ('comics'): dirs.remove(dir) for file in files: if file.lower() == "thumbs.db": print (path+'/'+file) if __name__ == '__main__': import timeit path = 'f:/' print(timeit.timeit('recurse("'+path+'")', setup="from __main__ import recurse", number=1)) #6.20102692 print(timeit.timeit('recurse2("'+path+'")', setup="from __main__ import recurse2", number=1)) #2.73848228 #ruby 5.7 Answer: The Ruby implementation for `Dir` is in C (the file `dir.c`, according to [this documentation](http://www.ruby-doc.org/core-1.9.3/Dir.html)). However, the Python equivalent is implemented [in Python](http://hg.python.org/cpython/file/abe8a2908f08/Lib/os.py#l209). It's not surprising that Python is less performant than C, but the approach used in Python gives a little more flexibility - for example, you could skip entire subtrees named e.g. `'.svn'`, `'.git'`, `'.hg'` while traversing a directory hierarchy. Most of the time, the Python implementation is fast enough. 
**Update:** The skipping of files/subdirs doesn't affect the traversal _rate_ at all, but the overall time taken to process a directory tree could certainly be reduced because you avoid having to traverse potentially large subtrees of the main tree. The time saved is of course proportional to how much you skip. In your case, which looks like folders of images, it's unlikely you would save much time (unless the images were under revision control, when skipping subtrees owned by the revision control system might have some impact). **Additional update:** Skipping folders is done by changing the `dirs` value in place: for root, dirs, files in os.walk(path): for skip in ('.hg', '.git', '.svn', '.bzr'): if skip in dirs: dirs.remove(skip) # Now process other stuff at this level, i.e. # in directory "root". The skipped folders # won't be recursed into.
Django advanced query Question: I'm using Django ORM to handle my database queries. I have the following db tables: * resource * resource_pool * resource_pool_elem * reservation and the following models: class Resource(models.Model): name = models.CharField(max_length=200) class Reservation(models.Model): pass class ResourcePool(models.Model): reservation = models.ForeignKey(Reservation, related_name="pools", db_column="reservation") resources = models.ManyToManyField(Resource, through="ResourcePoolElem") mode = models.IntegerField() class ResourcePoolElem(models.Model): resPool = models.ForeignKey(ResourcePool) resource = models.ForeignKey(Resource) Currently, I need to query the resources used in a set of reservations. I use the following query: resourcesNames = [] reservations = [] resources = models.Resource.objects.filter( name__in=resourcesNames, resPool__reservation__in=reservations).all() which I think matches to a sql query similar to this one: select * from resource r join resource_pool rp join resource_pool_elem rpe join reservation reserv where r.id = rpe.resource and rpe.pool = rp.id and reserv.id = rp.reservation and r.name in (resourcesNames[0], ..., resourcesNames[n-1]) reserv.id in (reservations[0], ..., reservations[n-1]) Now, I want to add a restriction to this query. Each pool may have a exclusive mode boolean flag. There will be an extra input list with the requested exclusive flags of each pool and I only want to query the resources of pools which exclusive flag match the requested exclusive flag if exclusive = true OR resources of pools which exclusive flag is false. I could build the SQL query using Python with a code similar to this: query = "select * from resource r join resource_pool rp join resource_pool_elem rep join reservation reserv where r.id = rpe.resource and rpe.pool = rp.id and reserv.id = rp.reservation and reserv.id in (reservations[0], ..., reservations[n-1]) and (" for i in resourcesNames[0:len(resourcesNames)] if i > 0: query += " or " query += "r.name = " + resourcesNames[i] if (exclusive[i]) query += " and p.mode == 0" query += ")" Is there a way to express this sql query in a Django query? Answer: Perhaps you can do this with [Q objects](https://docs.djangoproject.com/en/dev/topics/db/queries/#complex- lookups-with-q-objects). I have some issues wrapping my head around your example, but lets look at it with a simpler model. class Garage(models.Model): name = models.CharField() class Vehicle(models.Model): wheels = models.IntegerField() gears = models.IntegerField() garage = models.ForeignKey(Garage) Say you want to get all "multiple-wheeled" vehicles in the garage (e.g. all motorcycles and cars, but no unicycles), but for cars, you only want those with a CVT transmission, meaning they only have a single gear. (How this came up, no clue, but bear with me... 
;) The following should give you that: from django.db.models import Q garage = Garage.objects.all()[0] query = Vehicle.objects.filter(Q(garage=garage)) query = query.filter(Q(wheels=2) | (Q(wheels=4) & Q(gears=1))) Given the following available data: for v in Vehicle.objects.filter(garage=garage): print 'Wheels: {}, Gears: {}'.format(v.wheels, v.gears) Wheels: 1, Gears: 1 Wheels: 2, Gears: 4 Wheels: 2, Gears: 5 Wheels: 4, Gears: 1 Wheels: 4, Gears: 5 Running the query will give us: for v in query: print 'Wheels: {}, Gears: {}'.format(v.wheels, v.gears) Wheels: 2, Gears: 4 Wheels: 2, Gears: 5 Wheels: 4, Gears: 1 Finally, to adapt it to your case (assuming your pool model gains the boolean `exclusive` flag you describe), you might be able to use something along the following lines: query = models.Resource.objects.filter(Q(resPool__reservation__in=reservations)) query = query.filter(Q(name__in=resourcesNames)) query = query.filter(Q(resPool__exclusive=True) & Q(resPool__mode=0))
Categorize different images Question: I have a number of images from Chinese genealogies, and I would like to be able to programatically categorize them. Generally speaking, one type of image has primarily line-by-line text, while the other type may be in a grid or chart format. Example photos * 'Desired' type: <http://www.flickr.com/photos/63588871@N05/8138563082/> * 'Other' type: <http://www.flickr.com/photos/63588871@N05/8138561342/in/photostream/> Question: Is there a (relatively) simple way to do this? I have experience with Python, but little knowledge of image processing. Direction to other resources is appreciated as well. Thanks! Answer: Assuming that at least some of the grid lines are exactly or almost exactly vertical, a fairly simple approach might work. I used [PIL](http://www.pythonware.com/products/pil/) to find all the columns in the image where more than half of the pixels were darker than some threshold value. ## Code import Image, ImageDraw # PIL modules withlines = Image.open('withgrid.jpg') nolines = Image.open('nogrid.jpg') def findlines(image): w,h, = image.size s = w*h im = image.point(lambda i: 255 * (i < 60)) # threshold d = im.getdata() # faster than per-pixel operations linecolumns = [] for col in range(w): black = sum( (d[x] for x in range(col, s, w)) )//255 if black > 450: linecolumns += [col] # return an image showing the detected lines im2 = image.convert('RGB') draw = ImageDraw.Draw(im2) for col in linecolumns: draw.line( (col,0,col,h-1), fill='#f00', width = 1) return im2 findlines(withlines).show() findlines(nolines).show() ## Results showing detected vertical lines in red for illustration ![image 1](http://i.stack.imgur.com/1TDXH.jpg)![enter image description here](http://i.stack.imgur.com/IRjDm.jpg) As you can see, four of the grid lines are detected, and, with some processing to ignore the left and right sides and the center of the book, there should be no false positives on the desired type. This means that you could use the above code to detect black columns, discard those that are near to the edge or the center. If any black columns remain, classify it as the "other" undesired class of pictures.
Lazy transform in C++ Question: I have the following Python snippet that I would like to reproduce using C++: from itertools import count, imap source = count(1) pipe1 = imap(lambda x: 2 * x, source) pipe2 = imap(lambda x: x + 1, pipe1) sink = imap(lambda x: 3 * x, pipe2) for i in sink: print i I've heard of [Boost Phoenix](http://www.boost.org/doc/libs/1_51_0/libs/phoenix/doc/html/index.html), but I couldn't find an example of a lazy `transform` behaving in the same way as Python's `imap`. Edit: to clarify my question, the idea is not only to apply functions in sequence using a `for`, but rather to be able to use algorithms like `std::transform` on infinite generators. The way the functions are composed (in a more functional language like dialect) is also important, as the next step is function composition. Update: thanks bradgonesurfing, David Brown, and Xeo for the amazing answers! I chose Xeo's because it's the most concise and it gets me right where I wanted to be, but David's was very important into getting the concepts through. Also, bradgonesurfing's tipped Boost::Range :). Answer: Employing [Boost.Range](http://www.boost.org/libs/range/): int main(){ auto map = boost::adaptors::transformed; // shorten the name auto sink = generate(1) | map([](int x){ return 2*x; }) | map([](int x){ return x+1; }) | map([](int x){ return 3*x; }); for(auto i : sink) std::cout << i << "\n"; } [Live example including the `generate` function.](http://liveworkspace.org/code/841508d3b54bed4181d4e9fb6058200f)
How do I fairly assign tasks to workers in Python? - Splitting iterable into similarly sized chunks Question: I have workers and tasks to do: workers = ['peter', 'paul', 'mary'] tasks = range(13) Now I want to split the tasks into chunks or batches of work, so that each worker can work on one batch and does about the same amount of work as everybody else. In my real life I want to schedule batch jobs to a compute farm. The batch jobs are supposed to run in parallel. The actual schedule&dispatch is done by a commercial grade tool such as lsf or grid. Some examples of what I would expect: >>> distribute_work(['peter', 'paul', 'mary'], range(3)) [('peter', [0]), ('paul', [1]), ('mary', [2])] >>> distribute_work(['peter', 'paul', 'mary'], range(6)) [('peter', [0, 3]), ('paul', [1, 4]), ('mary', [2, 5])] >>> distribute_work(['peter', 'paul', 'mary'], range(5)) [('peter', [0, 3]), ('paul', [1, 4]), ('mary', [2])] This question is very similar to the questions [here](http://stackoverflow.com/q/12285115/923794), [here](http://stackoverflow.com/q/5850536/923794), and [here](http://stackoverflow.com/q/434287/923794) The difference is that I want these features, in the order or precedence: 1. No use of `len`, if possible no build-up of long data structures internally 2. Accept a generator 3. Return generators 4. As much use of stdlib components as possible Some side notes on requirements: * No dicts on purpose: I've workers with the same name that can do multiple batches (unix hostnames). If your solution uses dicts, that's fine because we can always do worker lookup by a batch enumeration. * Arbitrary length: Both workers and tasks can be iterables of any length >= 1. And they do not have to split evenly as shown in the example above where Mary only gets one task. * Order: To me is not important. I guess others may prefer some order like [0,1], [2,3], [5], but I don't care. If your solution can keep or switch the order, maybe that's worthwhile pointing out to others. I have tried to wrap my head around `itertools` and this particular problem and came up with the following code to illustrate the question: from itertools import * def distribute_work(workers, tasks): batches = range(len(workers)) return [ ( workers[k], [t[1] for t in i] ) for (k,i) in groupby(sorted(zip(cycle(batches), tasks), key=lambda t: t[0]), lambda t: t[0]) ] This satisfies 4., but the sort very likely violates 1.. And 2./3. are not even thought about. Probably there's some easy solution to this, combining some stdlib components in a way I haven't thought of. But maybe not. Any takers? Answer: Do you have to pre-batch? Why not just have a queue, and have each worker pop off the queue when it finishes a work unit?
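A minimal sketch of that queue approach using the stdlib (the `None` shutdown sentinel is just a convention chosen here):

    import multiprocessing as mp

    def worker(name, tasks):
        # each worker pulls the next task only when it finishes the previous
        # one, so the load balances itself without any pre-batching
        for task in iter(tasks.get, None):
            print('{0} handles task {1}'.format(name, task))

    if __name__ == '__main__':
        q = mp.Queue()
        procs = [mp.Process(target=worker, args=(w, q))
                 for w in ['peter', 'paul', 'mary']]
        for p in procs:
            p.start()
        for t in range(13):
            q.put(t)
        for p in procs:
            q.put(None)  # one sentinel per worker so each one shuts down
        for p in procs:
            p.join()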
python - argparse one argument instead of several others Question: For example I have options: parser.add_argument('-b', action="store_true") parser.add_argument('-c', action="store_true") parser.add_argument('-d', action="store_true") I want to enable all of them with synonym option "-a". Is it possible? Answer: I think it is simplest to do this after `parse_args()`: import argparse parser = argparse.ArgumentParser() parser.add_argument('-b', action="store_true") parser.add_argument('-c', action="store_true") parser.add_argument('-d', action="store_true") parser.add_argument('-a', action="store_true") args = parser.parse_args() if args.a: args.b = args.c = args.d = True Alternatively, you could do it with a custom action, but I think it is overkill: import argparse class AllAction(argparse.Action): def __call__(self, parser, args, values, option_string = None): # print '{n} {v} {o}'.format(n = args, v = values, o = option_string) for param in ['a', 'b', 'c', 'd']: setattr(args, param, True) parser = argparse.ArgumentParser() parser.add_argument('-b', action="store_true") parser.add_argument('-c', action="store_true") parser.add_argument('-d', action="store_true") parser.add_argument('-a', action = AllAction, nargs = '?') args = parser.parse_args() print(args)
Index out of range error Python greedy method Question: Hello I am currently getting an index out of range error from the following code: (I will post the code first and then the error) # Main File: import Knapsack_Test size = 10 W = 2*size knapsack = Knapsack_Test.Knapsack_Test() for i in range(1, 10): knapsack.greedy_knapsack_test(size, W) size = size + 10*i W = 2*size # Class File (Only the greedy function): def greedy_knap(self, v, w, W): knap_array = [] for i in range(1, len(v)): #The index out of range error occurs here: knap_array[i] = [v[i],w[i]] sort_order = self.sort.merge_sort(knap_array) weight = 0 value = 0 knap_sac= [] n = len(knap_array) j = 0 profit = 0 while weight < W and j < n: if weight + knap_array[i][1] <= W: knap_sac.append(knap_array[i]) weight = weight + knap_array[i][1] profit = profit + knap_array[i][0] j = j + 1 return profit # The test File (for greedy function): def greedy_knapsack_test(self, size, W): v = [] w = [] for i in range(1,size): v.append(random.randint(1,1000)) for i in range(1,size): w.append(random.randint(1,1000)) start = time.time() self.knapsack.greedy_knap(v, w, W) end = time.time() - start return end # The Error: Traceback (most recent call last): File "\\minerfiles.mst.edu\dfs\users\asadmb\Desktop\Programming 3\Knapsack_Main.py", line 10, in <module> knapsack.greedy_knapsack_test(size, W) File "\\minerfiles.mst.edu\dfs\users\asadmb\Desktop\Programming 3\Knapsack_Test.py", line 31, in greedy_knapsack_test self.knapsack.greedy_knap(v, w, W) File "\\minerfiles.mst.edu\dfs\users\asadmb\Desktop\Programming 3\KnapsackClass.py", line 30, in greedy_knap knap_array[i] = [v[i],w[i]] IndexError: list assignment index out of range Answer: knap_array = [] for i in range(1, len(v)): #fixed: append instead of assigning to knap_array[i] knap_array.append([v[i],w[i]]) You can't create list elements by assigning to indices that don't exist yet; build the list up with append instead.
Random sampling from a set of integers Question: I am working with python 3.2 and I spent a lot of time trouble shooting this, and I still can't seem to wrap my brain around it. `number = random.randint ( x0 ,xn )` I'm generating a random number. It's purpose is to make my code come at me differently everytime. For example I have 10 variables of text that I have written. I have solved the problem of not having these variables appear in the same order at each program run. The issue I have is that they now appear randomly everytime. It picks one out of 10 everytime, instead the first time 10 and next 9. I can't seem to find out how to exclude the previous ones. thelist = [0] while i < x if number in thelist: >>>repeat<<< else: thelist.append (number) if ( number == x0 ): >>>something<<< elif ( number == x1 ): >>>something<<< This is what I would imagine the code would look like, everytime you loop one more number gets appended to the list, so that everytime it picks a number already in the list it repeats the loop again until it then has used all the numbers that random.randint can pull. Answer: Here's a shuffle function: import random max = 15 x = list(range(max+1)) for i in range(max, 0, -1): n = random.randint(0, i) x[n], x[i] = x[i], x[n] This starts with a sorted list of numbers `[0, 1, ... max]`. Then, it chooses a number from index 0 to index max, and swaps it with index max. Then, it chooses a number from index 0 to index max-1, and swaps it with index max-1. And so on, for max-2, max-3, ... 1 As yosukesabai rightly notes, this has the same effect as calling `random.sample(range(max+1), max+1)`. This picks `max + 1` unique random values from `range(max+1)`. In other words, it just shuffles the order around. Docs: <http://docs.python.org/2/library/random.html#random.sample> If you wanted something more along the lines of your proposed algorithm, you could do: import random max = 15 x = range(max+1) l = [] for _ in range(max+1): n = random.randint(0,max) while n in l: n = random.randint(0,max) l.append(n)
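Note that the standard library already implements this exact algorithm, so in practice you can shuffle in place with one call:

    import random

    x = list(range(16))
    random.shuffle(x)  # in-place Fisher-Yates, same effect as the loop above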
reorder byte order in hex string (python) Question: I want to build a small formatter in python giving me back the numeric values embedded in lines of hex strings. It is a central part of my formatter and should be reasonable fast to format more than 100 lines/sec (each line about ~100 chars). The code below should give an example where I'm currently blocked. 'data_string_in_orig' shows the given input format. It has to be byte swapped for each word. The swap from 'data_string_in_orig' to 'data_string_in_swapped' is needed. In the end I need the structure access as shown. The expected result is within the comment. Thanks in advance Wolfgang R #!/usr/bin/python import binascii import struct ## 'uint32 double' data_string_in_orig = 'b62e000052e366667a66408d' data_string_in_swapped = '2eb60000e3526666667a8d40' print data_string_in_orig packed_data = binascii.unhexlify(data_string_in_swapped) s = struct.Struct('<Id') unpacked_data = s.unpack_from(packed_data, 0) print 'Unpacked Values:', unpacked_data ## Unpacked Values: (46638, 943.29999999943209) exit(0) Answer: `array.arrays` have a [byteswap method](http://docs.python.org/2/library/array.html#array.array.byteswap): import binascii import struct import array x = binascii.unhexlify('b62e000052e366667a66408d') y = array.array('h', x) y.byteswap() s = struct.Struct('<Id') print(s.unpack_from(y)) # (46638, 943.2999999994321) The `h` in `array.array('h', x)` was chosen because it tells `array.array` to regard the data in `x` as an array of 2-byte shorts. The important thing is that each item be regarded as being 2-bytes long. `H`, which signifies 2-byte unsigned short, works just as well.
Paramiko hanging during authentication when run by the unittest runner Question: Good day. I have a strange problem with the `paramiko` ssh client. Paramiko's `connect` method hangs when it is called outside `unittest2` classes/functions and the code is run by the unittest runner. Here is a piece of code where the problem appears: import paramiko import unittest2 ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh.connect('172.18.67.10', username='root', password='secrete') _, stdout, _ = ssh.exec_command('date') class TestTest(unittest2.TestCase): def setUp(self): pass If I move `ssh.connect` into the `TestTest` class or the `setUpModule` function, the connection succeeds. Everything is also OK when the code is run by the plain Python interpreter. When I tried to debug `paramiko`, I traced the problem to the `while True` loop in the `paramiko/auth_handler.py:AuthHandler.wait_for_response` method. Any suggestions? Answer: According to [this SO answer](http://stackoverflow.com/a/450895/109807), it seems to be a thread-related bug in paramiko and can be avoided by not calling connect() during import.
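Applied to the snippet in the question, the import-time work moves into `setUpModule` so it runs under the test runner rather than during import (a sketch):

    import paramiko
    import unittest2

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

    def setUpModule():
        # runs once per module, after import, when the runner starts
        ssh.connect('172.18.67.10', username='root', password='secrete')

    def tearDownModule():
        ssh.close()

    class TestTest(unittest2.TestCase):
        def test_date(self):
            _, stdout, _ = ssh.exec_command('date')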
Copying value of cell (X,Y) to cell (A,B) in same sheet of an Excel file using Python Question: I am using the modules `xlrd`, `xlwt` and `xlutils` to do some Excel manipulations in Python. I am not able to figure out how to copy the value of cell `(X,Y)` to cell `(A,B)` in the same sheet of an Excel file in Python. Could someone let me know how to do that? Answer: import xlrd from xlutils.copy import copy r_book = xlrd.open_workbook(fname) w_book = copy(r_book) # a writable copy of the workbook r_sheet = r_book.sheets()[0] # Taking the first sheet w_sheet = w_book.get_sheet(0) x_y_value = r_sheet.cell(X, Y).value w_sheet.write(A, B, x_y_value) w_book.save(fname)
My Google App Engine appspot URL won't work Question: I recently created my Google App Engine account, and uploaded my application, and have an instance of said app running. I can access my app via localhost:8080 but when I try to use myappid.appspot.com I get a 500 Server Error (Of course I replace "myappid" with my app's name). This is what it says: "Error: Server Error The server encountered an error and could not complete your request. If the problem persists, please report your problem and mention this error message and the query that caused it." Can anyone help me get the URL working? I need my team to be able to access this app from anywhere. I have no idea what could be wrong with it, I am very new to GAE. After a lot of searching all I find is people saying their appspot URL works, and want other options. But I just want my appspot URL to work first! Some more info: This is a Python app, using the GAE Python SDK, I am running Windows 7, and using the GAE Launcher GUI to deploy and run the app. Thanks in advance! EDIT: Here is the error in my Log: No module named flask Traceback (most recent call last): File "/base/data/home/apps/s~luxtestapp/1.362824400913245138/bootstrap.py", line 19, in <module> from app import create_app File "/base/data/home/apps/s~luxtestapp/1.362824400913245138/app/__init__.py", line 10, in <module> from flask import Flask Apparently the app uses Flask instead of webapp2. Honestly I'm not too sure about it all, because this is a pre-built app that I downloaded and deployed. I didn't write it. Answer: In the control panel for your app on appspot, go to the log section: [Appengine](https://appengine.google.com). Then see what the latest entry says. Filter to "error". Also when you create your app (python webapp2 example) turn debugging on: app = webapp2.WSGIApplication([ ('/', MainHandler)], debug=True) You'll get a much more informative error screen then instead of '500'.
Different encoding in Jython's Java and Python level Question: I'm using Sikuli (see sikuli.org) which uses jython2.5.2. Here is a summary of the class Region on the Java level: public class Region { // < other class methods > public int type(String text) { System.out.println("javadebug: "+text); // debug output // do actual typing } } On the Python level there is a wrapper class: import Region as JRegion # import java class class Region(JRegion): # < other class methods > def type(self, text): print "pythondebug: "+text # debug output JRegion.type(self, text) This works as intended for ascii chars, but when I use ö, ä or ü as text, this happens: // python input: # -*- encoding: utf-8 -*- someregion = Region() someregion.type("ä") // output: pythondebug: ä javadebug: ä The character seems to be converted wrongly when passed to the Java object. I would like to know what exactly is going wrong here and how to fix this, so that the characters entered in the pythonmethod are the same in the javamethod. Thanks for your help Answer: Looking at the Jython code, you have to tell Java that the string is UTF-8 encoded (in Jython, `import java.lang` first so that `java.lang.String` is available): def type(self, text): jtext = java.lang.String(text, "utf-8") print "pythondebug: " + text # debug output JRegion.type(self, jtext)
Setting up default pylint config.rc file in Windows Question: I'm using Pylint under Windows, and it's not reading my pylint-config.rc file. Is there a way to set up a default .rc file for Python within windows so that I don't have to keep typing it into the command line? Thanks. Answer: I don't have a windows box at hand to test, but the code uses `os.path.expanduser('~')` to find the current user's home directory, and looks for a file called `.pylintrc` in that directory. According to the [python documentation](http://docs.python.org/3/library/os.path.html?highlight=expanduser#os.path.expanduser), on Windows, `expanduser` uses HOME and USERPROFILE if set, otherwise a combination of HOMEPATH and HOMEDRIVE. So my advice is to check in a Python interactive session what the following script outputs: import os print os.path.expanduser('~') and put the configuration file as `.pylintrc` in that folder. Alternatively, if you want to use different configuration files on a per project basis, you should know that if there is a file called `pylintrc` (without a leading dot) in the current working directory, then Pylint will use this one. If there is a file called `__init__.py` in the current working directory, Pylint will look in the parent directory until there is no such file and then look for a `pylintrc` configuration file. This is done so that you can maintain a per project config file together with your source code, and launch Pylint from any directory in your source tree.
Why am I getting an IndexError for a (seemingly) properly-split string? Question: I currently have a script that is supposed to fetch and return the number of clicks a Bit.ly link has. I start out by gathering and reading the data from a Bitly url, which I appear to be doing correctly. bitly_data = "https://api-ssl.bitly.com/v3/link/clicks?access_token=ACCESS_TOKEN&link=http://bit.ly/"+link src = urllib2.urlopen(bitly_data) src = src.read() When `link` is something such as `TY8lnd`, `src` is a string that looks something like > {"status_code": 200, "data": {"units": -1, "tz_offset": -4, "unit": "day", > "link_clicks": 535}, "status_txt": "OK"} I now want to parse this string to get just the numerical value after `link_clicks`. I figured the best way to do this was by making two splits. src=src.split('clicks": ') src = str(src[1]) clicks = src.split('}, "status') clicks = clicks[0] When I run this, clicks does, ultimately, equal the correct number and only that. However, Terminal returns an IndexError for the line `src = str(src[1])`. I tried getting rid of the `str()` but this had no effect. An understanding as to why I am getting this error despite the end value being corrected would be greatly appreciated. Here is the Traceback in its entirety: Traceback (most recent call last): File "/Library/Python/2.7/site-packages/Flask-0.9-py2.7.egg/flask/app.py", line 1701, in __call__ return self.wsgi_app(environ, start_response) File "/Library/Python/2.7/site-packages/Flask-0.9-py2.7.egg/flask/app.py", line 1689, in wsgi_app response = self.make_response(self.handle_exception(e)) File "/Library/Python/2.7/site-packages/Flask-0.9-py2.7.egg/flask/app.py", line 1687, in wsgi_app response = self.full_dispatch_request() File "/Library/Python/2.7/site-packages/Flask-0.9-py2.7.egg/flask/app.py", line 1360, in full_dispatch_request rv = self.handle_user_exception(e) File "/Library/Python/2.7/site-packages/Flask-0.9-py2.7.egg/flask/app.py", line 1358, in full_dispatch_request rv = self.dispatch_request() File "/Library/Python/2.7/site-packages/Flask-0.9-py2.7.egg/flask/app.py", line 1344, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/Users/Zach/Dropbox/bitly/bit.py", line 35, in settings src = str(src[1]) IndexError: list index out of range Thank you in advance. Answer: This response is json, as such, decode the json instead of trying to parse the string. >>> import json >>> resp = '{"status_code": 200, "data": {"units": -1, "tz_offset": -4, "unit": "day", "link_clicks": 535}, "status_txt": "OK"}' >>> resp_object = json.loads(resp) >>> resp_object and resp_object.get('data', {}).get('link_clicks', 0) or 0 535
python multiprocessing.Process.Manager not producing consistent results? Question: I've written the following code to illustrate the problem I'm seeing. I'm trying to use a Process.Manager.list() to keep track of a list and increment random indices of that list. Each time there are 100 processes spawned, and each process increments a random index of the list by 1. Therefore, one would expect the SUM of the resulting list to be the same each time, correct? I get something between 203 and 205. from multiprocessing import Process, Manager import random class MyProc(Process): def __init__(self, A): Process.__init__(self) self.A = A def run(self): i = random.randint(0, len(self.A)-1) self.A[i] = self.A[i] + 1 if __name__ == '__main__': procs = [] M = Manager() a = M.list(range(15)) print('A: {0}'.format(a)) print('sum(A) = {0}'.format(sum(a))) for i in range(100): procs.append(MyProc(a)) map(lambda x: x.start(), procs) map(lambda x: x.join(), procs) print('A: {0}'.format(a)) print('sum(A) = {0}'.format(sum(a))) Answer: As **millimoose** points out, the problem here is a race condition occurring in `self.A[i] = self.A[i] + 1`. By the time `self.A[i] + 1` has been calculated, `self.A[i]` could have already been changed by another process. A possible solution to your problem is to pass the index back to the parent, which then performs the addition. from multiprocessing import Process, Manager import random class MyProc(Process): def __init__(self, B, length): Process.__init__(self) self.B = B self.length = length def run(self): i = random.randint(0, self.length-1) self.B.append(i) if __name__ == '__main__': procs = [] M = Manager() a = range(15) b = M.list() print('A: {0}'.format(a)) print('sum(A) = {0}'.format(sum(a))) for i in range(100): procs.append(MyProc(b, len(a))) map(lambda x: x.start(), procs) map(lambda x: x.join(), procs) for i in b: a[i] = a[i] + 1 print('A: {0}'.format(a)) print('sum(A) = {0}'.format(sum(a))) Appending an element to an array is only one operation, thus the race condition is avoided.
Django GeoIP import Question: > **Possible Duplicate:** > [Error setting up geoip on > Django](http://stackoverflow.com/questions/4896996/error-setting-up-geoip- > on-django) I get the "cannot import name GeoIP" error from the browser but not on python terminal. for example for geodata in /tmp/geo. the following works in the python terminal. from django.contrib.gis.geoip import GeoIP GeoIP(path='/tmp/geo/') However the following in a django view gives the error from django.contrib.gis.geoip import GeoIP return HttpResponse (GeoIP(path='/tmp/geo/')) Any pointer will be helpfull. I'm using django 1.4 , python 2.6. here is the trace. Thanks. Traceback: File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py" in get_response 101. request.path_info) File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py" in resolve 300. sub_match = pattern.resolve(new_path) File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py" in resolve 209. return ResolverMatch(self.callback, args, kwargs, self.name) File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py" in callback 216. self._callback = get_callable(self._callback_str) File "/usr/lib/python2.6/site-packages/django/utils/functional.py" in wrapper 27. result = func(*args) File "/usr/lib/python2.6/site-packages/django/core/urlresolvers.py" in get_callable 92. lookup_view = getattr(import_module(mod_name), func_name) File "/usr/lib/python2.6/site-packages/django/utils/importlib.py" in import_module 35. __import__(name) File "/x/y/z/views.py" in <module> 12. from django.contrib.gis.utils import GeoIP Exception Type: ImportError at / Exception Value: cannot import name GeoIP Answer: The two statements seem to differ (look at the stacktrace): from django.contrib.gis.utils import GeoIP vs from django.contrib.gis.geoip import GeoIP Looking at [the source](https://github.com/django/django/blob/master/django/contrib/gis/geoip/__init__.py), `GeoIP` is defined in [`django.contrib.gis.geoip.base`](https://github.com/django/django/blob/master/django/contrib/gis/geoip/base.py) and imported in [`django.contrib.gis.geoip`](https://github.com/django/django/blob/master/django/contrib/gis/geoip/__init__.py), which explain why it works in the console, and not in the view, where you're using `django.contrib.gis.utils.GeoIP`. You should therefore use `from django.contrib.gis.geoip import GeoIP` everywhere. * * * Your problem probably arises from the fact that the `django.contrib.gis.utils` module was [removed in Django 1.4](https://docs.djangoproject.com/en/dev/ref/contrib/gis/geoip/)
strange memory leak with python + paramiko Question: I have an (apparent) memory leak in a python script that I can't quite explain (the resident memory just keeps growing). It started off with about 6MB resident, I left it running overnight and it had gotten to over 200MB (I did that to rule out a sawtooth memory usage pattern due to gc). I've condensed it down to this script: import sys import time import paramiko def update(): ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) try: ssh.connect(hostname='localhost') finally: ssh.close() def main(): while(True): update() time.sleep(0.001) if __name__ == '__main__': sys.exit(main()) I thought the problem might be that I keep instantiating a new SSHClient and they somehow weren't getting thrown out, but this version leaks memory even faster! import sys import time import paramiko ssh = paramiko.SSHClient() ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy()) def update(): global ssh try: ssh.connect(hostname='localhost') finally: ssh.close() def main(): while(True): update() time.sleep(0.001) if __name__ == '__main__': sys.exit(main()) If anyone could shed some light on this, or if I'm just being dumb and someone can point out why I'd be most appreciative. Thanks Answer: I managed to reproduce. When investigating, i found that most probably, the leak is connected to libssl, because there is growing allocation Before: 35aaa53000-35aaa5b000 rw-p 00053000 00:11 3360939 /usr/lib64/libssl.so.1.0.0j ... 7f4530000000-7f453013b000 rw-p 00000000 00:00 0 Size: 1260 kB Rss: 1012 kB Pss: 1012 kB Shared_Clean: 0 kB Shared_Dirty: 0 kB Private_Clean: 0 kB Private_Dirty: 1012 kB Referenced: 1012 kB Anonymous: 1012 kB AnonHugePages: 0 kB Swap: 0 kB KernelPageSize: 4 kB MMUPageSize: 4 kB Locked: 0 kB 7f453013b000-7f4534000000 ---p 00000000 00:00 0 Size: 64276 kB Rss: 0 kB Pss: 0 kB Shared_Clean: 0 kB Shared_Dirty: 0 kB Private_Clean: 0 kB Private_Dirty: 0 kB Referenced: 0 kB Anonymous: 0 kB AnonHugePages: 0 kB Swap: 0 kB KernelPageSize: 4 kB MMUPageSize: 4 kB Locked: 0 kB After some time: 35aaa53000-35aaa5b000 rw-p 00053000 00:11 3360939 /usr/lib64/libssl.so.1.0.0j ... 7f4530000000-7f4530250000 rw-p 00000000 00:00 0 Size: 2368 kB Rss: 2120 kB Pss: 2120 kB Shared_Clean: 0 kB Shared_Dirty: 0 kB Private_Clean: 0 kB Private_Dirty: 2120 kB Referenced: 2120 kB Anonymous: 2120 kB AnonHugePages: 0 kB Swap: 0 kB KernelPageSize: 4 kB MMUPageSize: 4 kB Locked: 0 kB 7f4530250000-7f4534000000 ---p 00000000 00:00 0 Size: 63168 kB Rss: 0 kB Pss: 0 kB Shared_Clean: 0 kB Shared_Dirty: 0 kB Private_Clean: 0 kB Private_Dirty: 0 kB Referenced: 0 kB Anonymous: 0 kB AnonHugePages: 0 kB Swap: 0 kB KernelPageSize: 4 kB MMUPageSize: 4 kB Locked: 0 kB Seems like bug in libssl or paramiko itself, because the gc.garbage is empty and len(gc.get_objects()) is constant, meaning there are no unbreakable cycles and no new python objects (using your first version). BTW, you can run gc.collect() each iteration to avoid the sawtooth.
Call Python function from Javascript code Question: I'd like to call a `Python` function from `Javascript` code, because there isn't an alternative in `Javascript` for doing what I want. Is this possible? Could you adjust the below snippet to work? Javascript part:

var tag = document.getElementsByTagName("p")[0];
text = tag.innerHTML;
// Here I would like to call the Python interpreter with Python function
arrOfStrings = openSomehowPythonInterpreter("~/pythoncode.py", "processParagraph(text)");

~/pythoncode.py contains functions using advanced libraries that don't have an easy to write equivalent in Javascript

import nltk # is not in Javascript
def processParagraph(text):
  ...
  nltk calls
  ...
  return lst # returns a list of strings (will be converted to `Javascript` array)

Answer: All you need is to make an AJAX request to your Python code. You can do this with jQuery's `$.ajax` (<http://api.jquery.com/jQuery.ajax/>) or with plain JavaScript. Note that the browser cannot execute a `.py` file by itself: the Python script has to be exposed through a web server, for example as a CGI script or behind a small web framework.

$.ajax({
  type: "POST",
  url: "~/pythoncode.py",
  data: { param: text}
}).done(function( o ) {
   // do something
});
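For the server side, here is a minimal sketch of how the Python function could be exposed over HTTP, assuming Flask is available (the route, port, and parameter name are illustrative, not from the question):

from flask import Flask, request, jsonify
from pythoncode import processParagraph  # the nltk-based function

app = Flask(__name__)

@app.route("/process", methods=["POST"])
def process():
    text = request.form["param"]  # matches the "param" key in the $.ajax call
    return jsonify(result=processParagraph(text))

if __name__ == "__main__":
    app.run(port=5000)

The JavaScript `url` would then point at something like `http://yourserver:5000/process` instead of the `.py` file.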
Can't pickle <type 'cStringIO.StringO'>: attribute lookup cStringIO.StringO failed error while using multithreading module Question: def execute_on_host((hostname, command), username=config['username'], keyfile=config['keyfile']): print hostname ssh_client = paramiko.SSHClient() ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) ssh_client.connect(hostname=hostname, username=username, key_filename=keyfile) stdin, stdout, stderr = ssh_client.exec_command(command) print stdout.read() ssh_client.close() return stdout So, after the `hostname` and `stdout.read()` is printed, I get errors like these Process PoolWorker-1: Traceback (most recent call last): File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python2.7/multiprocessing/pool.py", line 99, in worker put((job, i, result)) File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put return send(obj) PicklingError: Can't pickle <type 'cStringIO.StringO'>: attribute lookup cStringIO.StringO failed I am using the following code to execute the command. from multiprocessing import Pool pool = Pool(len(host_cmds_list)) pool.map(execute_on_host, host_cmds_list) pool.close() pool.join() I am not sure how to fix this. print host_cmds_list [('hostname1', '/bin/date'), ('hostname2', '/bin/date')] Answer: Your `stdout` is a `cStringIO` object which can't be pickled. Try returning the contents of it instead of the object itself.
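A sketch of the worker with that fix applied: read the channel into a plain string before returning, since everything a pool worker returns must be picklable (`config` is assumed to exist as in the question):

import paramiko

def execute_on_host((hostname, command), username=config['username'],
                    keyfile=config['keyfile']):
    ssh_client = paramiko.SSHClient()
    ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_client.connect(hostname=hostname, username=username, key_filename=keyfile)
    stdin, stdout, stderr = ssh_client.exec_command(command)
    output = stdout.read()   # plain str, picklable
    ssh_client.close()
    return hostname, output  # both picklable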
Comparing speed of non-matching regexp Question: The following Python code is incredibly slow: import re re.match( '([a]+)+c', 'a' * 30 + 'b' ) and it gets worse if you replace 30 with a larger constant. I suspect that the parsing ambiguity due to the consecutive `+` is the culprit, but I'm not very expert in regexp parsing and matching. Is this a bug of the Python regexp engine, or any reasonable implementation will do the same? I'm not a Perl expert, but the following returns quite fast perl -e '$s="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaab"; print "ok\n" if $s =~ m/([a]+)+c/;' and increasing the number of 'a' does not alter substantially the execution speed. Answer: I assume that Perl is clever enough to collapse the two `+`s into one, while Python is not. Now let's imagine what the engine does, if this is not optimized away. And remember that capturing is generally expensive. Note also, that both `+`s are greedy, so the engine will try to use as many repetitions as possible in one backtracking step. Each bullet point represents one backtracking step: * The engine uses as many `[a]` as possible, and consumes all thirty `a`s. Then it can not go any further, so it leaves the first repetition and **captures** 30 `a`s. Now the next repetition is on and it tries to consume some more with another `([a]+)` but that doesn't work of course. And then the `c` fails to match `b`. * Backtrack! Throw away the last `a` consumed by the inner repetition. After this we leave the inner repetition again, so the engine will **capture** 29 `a`s. Then the other `+` kicks in, the inner repetition is tried out again (consuming the 30th `a`). Then we leave the inner repetition once again, which also leaves the capturing group, so the first capture is thrown away and the engine **captures** the last `a`. `c` fails to match `b`. * Backtrack! Throw away another `a` inside. We **capture** 28 `a`s. The second (outer repetition) of the capturing group consumes the last 2 `a`s which are **captured**. `c` fails to match `b`. * Backtrack! Now we can backtrack in the second other repetition and throw away the second of two `a`s. The one that is left will be **captured**. Then we enter the capturing group for the third time and **capture** the last `a`. `c` fails to match `b`. * Backtrack! Down to 27 `a`s in the first repetition. Here is a simple visualization. Each line represents one backtracking step, and each set of parentheses shows one consumption of the inner repetition. The curly brackets represent those that are _newly_ captured for that step of backtracking, while normal parentheses are not revisited in this particular backtracking step. And I leave out the `b`/`c` because it will never be matched: {aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa} {aaaaaaaaaaaaaaaaaaaaaaaaaaaaa}{a} {aaaaaaaaaaaaaaaaaaaaaaaaaaaa}{aa} (aaaaaaaaaaaaaaaaaaaaaaaaaaaa){a}{a} {aaaaaaaaaaaaaaaaaaaaaaaaaaa}{aaa} (aaaaaaaaaaaaaaaaaaaaaaaaaaa){aa}{a} (aaaaaaaaaaaaaaaaaaaaaaaaaaa){a}{aa} (aaaaaaaaaaaaaaaaaaaaaaaaaaa)(a){a}{a} {aaaaaaaaaaaaaaaaaaaaaaaaaa}{aaaa} (aaaaaaaaaaaaaaaaaaaaaaaaaa){aaa}{a} (aaaaaaaaaaaaaaaaaaaaaaaaaa){aa}{aa} (aaaaaaaaaaaaaaaaaaaaaaaaaa)(aa){a}{a} (aaaaaaaaaaaaaaaaaaaaaaaaaa){a}{aaa} (aaaaaaaaaaaaaaaaaaaaaaaaaa)(a){aa}{a} (aaaaaaaaaaaaaaaaaaaaaaaaaa)(a){a}{aa} (aaaaaaaaaaaaaaaaaaaaaaaaaa)(a)(a){a}{a} And. so. on. Note that in the end the engine will also try all combinations for subsets of `a` (backtracking just through the first 29 `a`s then through the first 28 `a`s) just to discover, that `c` does also not match `a`. 
The explanation of regex engine internals is based on information scattered around [regular-expressions.info](http://www.regular-expressions.info/). To solve this. Simply remove one of the `+`s. Either `r'a+c'` or if you **do** want to capture the amount of `a`s use `r'(a+)s'`. Finally, to answer your question. I would not consider this a bug in Python's regex engine, but only (if anything) a lack in optimization logic. This problem is not generally solvable, so it is not too unreasonably for an engine to assume, that you have to take care of catastrophic backtracking yourself. If Perl is clever enough to recognize sufficiently simple cases of it, so much the better.
PyPy displaying inaccurate benchmark results? Question: I was working on [Project Euler](http://projecteuler.net/problem=204) and wondered if I could speed up my solution using PyPy. However, I found results quite disappointing, as it took more time to compute. d:\projeuler>pypy problem204.py 3462.08630405 mseconds d:\projeuler>python problem204.py 1823.91602542 mseconds Since the msecond outputs were calculated using Python's `time` module, I ran it again using the built-in benchmark command. d:\projeuler>pypy -mtimeit -s "import problem204" "problem204._main()" 10 loops, best of 3: 465 msec per loop d:\projeuler>python -mtimeit -s "import problem204" "problem204._main()" 10 loops, best of 3: 1.87 sec per loop PyPy reports that it took about half a second to finish running. However, I tried running pypy problem204 several times and the outputs were never even close to the benchmarked 0.5 seconds. Unlike PyPy, Python's mtimeit results are consistent with the outputs. Is pypy giving me inaccurate benchmarks, or is there some magic I don't understand? Answer: Note that timeit 1. runs the statement several times (10 in your case), and 2. does that several times (3 by default) and gives the minimum of that, for reasons [outlined in the documentation](http://docs.python.org/3/library/timeit.html#timeit.Timer.repeat). It depends on your code, but it's entirely possible that the JIT compiler is to blame for this confusing result. The JIT warmup overhead is incurred every time you launch a new pypy process, but only once during the timeit benchmark (because that one runs `_main` several times in the same process). Moreover, if some part of your code is run so often that it's not compiled when `_main` runs once, but only when it runs, say, three times, subsequent runs will also be faster, which further removes the best result from the first one (i.e. the one for running `pypy problem204.py` once). The `timeit` result is correct in that it (roughly) matches how fast the code will be in the best case - warmed-up JIT compiler, rarely losing the CPU to other programs, etc. Your problem is that you want to know something different - the time including JIT warmup.
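A rough way to watch the warmup happen is to time successive in-process runs yourself (a sketch; `_main` as in the question):

import time
import problem204

for i in range(5):
    start = time.time()
    problem204._main()
    print "run %d: %.3f s" % (i + 1, time.time() - start)

Under PyPy the first run(s) should come out noticeably slower than the later ones; under CPython the times should be roughly flat.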
Plot really big file in python (5GB) with x axis offset Question: I am trying to plot a very big file (~5 GB) using python and matplotlib. I am able to load the whole file in memory (the total available in the machine is 16 GB) but when I plot it using simple imshow I get a segmentation fault. This is most probable to the ulimit which I have set to 15000 but I cannot set higher. I have come to the conclusion that I need to plot my array in batches and therefore made a simple code to do that. My main isue is that when I plot a batch of the big array the x coordinates start always from 0 and there is no way I can overlay the images to create a final big one. If you have any suggestion please let me know. Also I am not able to install new packages like "Image" on this machine due to administrative rights. Here is a sample of the code that reads the first 12 lines of my array and make 3 plots. import os import sys import scipy import numpy as np import pylab as pl import matplotlib as mpl import matplotlib.cm as cm from optparse import OptionParser from scipy import fftpack from scipy.fftpack import * from cmath import * from pylab import * import pp import fileinput import matplotlib.pylab as plt import pickle def readalllines(file1,rows,freqs): file = open(file1,'r') sizer = int(rows*freqs) i = 0 q = np.zeros(sizer,'float') for i in range(rows*freqs): s =file.readline() s = s.split() #print s[4],q[i] q[i] = float(s[4]) if i%262144 == 0: print '\r ',int(i*100.0/(337*262144)),' percent complete', i += 1 file.close() return q parser = OptionParser() parser.add_option('-f',dest="filename",help="Read dynamic spectrum from FILE",metavar="FILE") parser.add_option('-t',dest="dtime",help="The time integration used in seconds, default 10",default=10) parser.add_option('-n',dest="dfreq",help="The bandwidth of each frequency channel in Hz",default=11.92092896) parser.add_option('-w',dest="reduce",help="The chuncker divider in frequency channels, integer default 16",default=16) (opts,args) = parser.parse_args() rows=12 freqs = 262144 file1 = opts.filename s = readalllines(file1,rows,freqs) s = np.reshape(s,(rows,freqs)) s = s.T print s.shape #raw_input() #s_shift = scipy.fftpack.fftshift(s) #fig = plt.figure() #fig.patch.set_alpha(0.0) #axes = plt.axes() #axes.patch.set_alpha(0.0) ###plt.ylim(0,8) plt.ion() i = 0 for o in range(0,rows,4): fig = plt.figure() #plt.clf() plt.imshow(s[:,o:o+4],interpolation='nearest',aspect='auto', cmap=cm.gray_r, origin='lower') if o == 0: axis([0,rows,0,freqs]) fdf, fdff = xticks() print fdf xticks(fdf+o) print xticks() #axis([o,o+4,0,freqs]) plt.draw() #w, h = fig.canvas.get_width_height() #buf = np.fromstring(fig.canvas.tostring_argb(), dtype=np.uint8) #buf.shape = (w,h,4) #buf = np.rol(buf, 3, axis=2) #w,h,_ = buf.shape #img = Image.fromstring("RGBA", (w,h),buf.tostring()) #if prev: # prev.paste(img) # del prev #prev = img i += 1 pl.colorbar() pl.show() Answer: If you plot any array with more than ~2k pixels across something in your graphics chain _will_ down sample the image in some way to display it on your monitor. 
I would recommend down sampling in a controlled way, something like

data = convert_raw_data_to_fft(args)  # make sure data is row major

def ds_decimate(row, step=100):
    return row[::step]

def ds_sum(row, step=100):
    return np.sum(row[:step*(len(row)//step)].reshape(-1, step), 1)

# as per suggestion from tom10 in comments
def ds_max(row, step=100):
    return np.max(row[:step*(len(row)//step)].reshape(-1, step), 1)

data_plotable = [ds_sum(d) for d in data]  # plug in whichever function you want

or [interpolation](http://stackoverflow.com/questions/13167040/interpolate-in-one-direction).
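To fix the x-axis offset from the question, note that `imshow` accepts an `extent` keyword that maps the image onto arbitrary data coordinates, so each batch can be drawn at its true position instead of starting at 0. A sketch reusing the names from the question:

import matplotlib.pyplot as plt
import matplotlib.cm as cm

rows, freqs = 12, 262144
for o in range(0, rows, 4):
    plt.imshow(s[:, o:o+4], interpolation='nearest', aspect='auto',
               cmap=cm.gray_r, origin='lower',
               extent=[o, o + 4, 0, freqs])  # x runs from o to o+4
plt.xlim(0, rows)
plt.show()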
How to append functions to current module from unrelated script in Python Question: I have two python scripts in different locations. I try to import the functions of the second one in such a way that they are integrated with the globals of the first one. That works fine. However when I call functions in the first script from the second one they cant be found. foo.py def run(): glob = {} loc = {} execfile("/path/to/bar.py", glob, loc) currmodule = globals() currmodule["func_in_bar"] = glob["func_in_bar"] func_in_bar() def func_in_foo_A(): print("fooA") def func_in_foo_B(): print("fooB") if __name__ == "__main__": run() bar.py def func_in_bar(): func_in_foo_A() func_in_foo_B() When foo is run it fails with: `NameError: global name 'func_in_foo_A' is not defined` Answer: In `bar.py` you need to add an `import foo` and then reference `foo.func_in_foo()`, etc. Alternatively, use the form: `from foo import func_in_foo`
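An alternative that keeps the `execfile` approach is to execute `bar.py` directly in `foo`'s own module globals. Functions defined that way use `foo`'s namespace as their module namespace, so they can see `func_in_foo_A` and `func_in_foo_B`. A sketch, assuming the same file layout:

def run():
    # define bar's functions inside foo's globals
    execfile("/path/to/bar.py", globals())
    func_in_bar()  # now resolves func_in_foo_A / func_in_foo_B correctly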
use beautiful soup to parse a href from given html structure Question: I have the following given html structure

<li class="g">
<div class="vsc">
<div class="alpha"></div>
<div class="beta"></div>
<h3 class="r">
<a href="http://www.stackoverflow.com"></a>
</h3>
</div>
</li>

The above html structure keeps repeating. What is the easiest way to parse all the links (stackoverflow.com) from the above html structure using **BeautifulSoup** and **Python**? Answer: [BeautifulSoup 4](http://www.crummy.com/software/BeautifulSoup/bs4/doc/) offers a convenient way of accomplishing this, using CSS selectors:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html)
print [a["href"] for a in soup.select('h3.r a')]

This also has the advantage of constraining the selection by context: it selects only those anchor nodes that are descendants of an h3 node with class r. Omitting the constraint or choosing one most suitable for the need is easy by just tweaking the selector; see the [CSS selector docs](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#css-selectors) for that.
Sleep Program Until Keypress Question: I have the following code

#!/usr/bin/python
import keybinder
def Mark(args):
    print "Why, hello!"
keybinder.bind("<Super>m", Mark, "junk")
KEYBINDER.MAIN_LOOP_KEYPRESS()

In other words, I would like to make a program which sleeps silently in the background until a key combination is pressed anywhere in the system. Keybinder seems like a good way of getting the keypress, but I'm not sure how to do the sleeping part implied by the final line. It seems as though importing a large framework like GTK would be overkill for this application and I'd prefer to avoid a busy loop. Any thoughts? Answer: Maybe just poll a flag that your callback sets:

import time

while not key_pressed:
    time.sleep(0.2)
UDP Server frame gap Question: I'm new to Python and trying to get some help here. I've written some code to transmit UDP data through a socket. I wanted to re-transmit the data in a loop every 50 microseconds, but I can only send it every 3 seconds! I'm sure I'm doing something wrong; can you help me out? I've pasted the code below:

import socket, codecs, binascii, re, sched, time

UDP_IP = "XXX.XXX.XXX.XXX"
UDP_PORT = 30001
MESSAGE = '\x00\x01\x02\x03\x04\x05\x06\x07\x08\t\n\x0b\x0c\r\x0e\x0f\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f\x20'  # !"#$%'
#"\x00\x01\x02 "

s = sched.scheduler(time.time, time.sleep)

def send_data(sc):
    sock = socket.socket(socket.AF_INET, # Internet
                         socket.SOCK_DGRAM) # UDP
    sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
    print""
    print""
    print""
    print""
    print""
    sc.enter(0.000050, 1, send_data, (sc,))
    print time.time()
    print""
    print""

s.enter(0.0000050, 1, send_data, (s,))
s.run()

Answer: First of all, creating a new socket every time you send data will create quite some overhead. Scheduling a new task over and over adds a lot of overhead as well, slowing your program down even further. The print command can add a little overhead too, especially if you output a lot of data. Other things to consider include the precision of the system timers involved, the interaction with the hardware, the fact that Python is an interpreted language, and so on, but they are all minor in comparison, so you can ignore them. If you wanted to write something real-time critical, C would be a better choice. So anyway, to speed up your program I would get rid of the time consuming parts:

import socket, time

# ...

def send_data():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True: # TODO: would require an abort condition
        sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
        time.sleep(0.00005) # don't count on this to be 100% accurate

You can put that into a thread if you don't want your main program to block:

from threading import Thread
t = Thread(target=send_data)
t.start()
How to install python-levenshtein on Windows? Question: After searching for days I'm about ready to give up finding precompiled binaries for Python 2.7 (Windows 64-bit) of the [Python Levenshtein library](http://pypi.python.org/pypi/python-Levenshtein/), so not I'm attempting to compile it myself. I've installed the most recent version of _MinGW32_ (version 0.5-beta-20120426-1) and set it as the default compiler in _distutils_. Here we go: C:\Users\tomas>pip install python-levenshtein Downloading/unpacking python-levenshtein Running setup.py egg_info for package python-levenshtein warning: no files found matching '*' under directory 'docs' warning: no previously-included files matching '*pyc' found anywhere in distribution warning: no previously-included files matching '.project' found anywhere in distribution warning: no previously-included files matching '.pydevproject' found anywhere in distribution Requirement already satisfied (use --upgrade to upgrade): setuptools in c:\python27\lib\site-packages\setuptools-0.6c11-py2.7.egg (from python-levenshtein) Installing collected packages: python-levenshtein Running setup.py install for python-levenshtein building 'Levenshtein' extension C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python27\include -IC:\Python27\PC -c Levenshtein.c -o build\temp.win-amd64-2.7\Release\levenshtein.o cc1.exe: error: unrecognized command line option '-mno-cygwin' error: command 'gcc' failed with exit status 1 Complete output from command C:\Python27\python.exe -c "import setuptools;__file__='c:\\users\\tomas\\appdata\\local\\temp\\pip-build\\python-levenshtein\\setup.py';exec(compile(open(__file__).rea d().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\tomas\appdata\local\temp\pip-7txyhp-record\install-record.txt --single-version-externally-managed: running install running build running build_ext building 'Levenshtein' extension C:\MinGW\bin\gcc.exe -mno-cygwin -mdll -O -Wall -IC:\Python27\include -IC:\Python27\PC -c Levenshtein.c -o build\temp.win-amd64-2.7\Release\levenshtein.o cc1.exe: error: unrecognized command line option '-mno-cygwin' error: command 'gcc' failed with exit status 1 And now I'm stuck. I'm assuming that the `-mno-cygwin` option is outdated and no longer valid for the version of `gcc` that I have. If that is the case, I still have no clue how to fix that. Thanks for any help anybody can offer on this issue. * * * EDIT: I ran the compile line manually after removing the bad option: C:\MinGW\bin\gcc.exe -mdll -O -Wall -IC:\Python27\include -IC:\Python27\PC -c Levenshtein.c -o build\temp.win-amd64-2.7\Release\levenshtein.o Which successfully provided _levenshtein.o_ in the build folder, but when I try to run `python setup.py install` then it just tries to build again and fails. Where can I remove `-mno-cygwin`? I assume it's somewhere in the source of _distutils_ but I can't find it. Answer: download vcsetup.exe from <http://www.microsoft.com/en- us/download/details.aspx?id=6506> (sorry this link is now broken it was for VC++ 2008 ... ) run it after it finishes open your command.exe type :`easy_install python-Levenshtein` (this assumes you have setuptools already) sit back and let it install done
Can't install JPype on Ubuntu 12.04 Question: I've consulted this: [Cannot install JPype on ubuntu 12.04 64 bit](http://stackoverflow.com/questions/11949160/cannot-install-jpype-on- ubuntu-12-04-64-bit) And I'm following the tutorial here: <https://github.com/johanlundberg/neo4j-django-tutorial> It seems I'm still having a problem installing JPype, despite having done both the things in that answer: `sudo apt-get install python-jpype` and `sudo apt-get install python-dev` The error I'm getting, when I run `python neo4jtut/manage.py syncdb` tells me the module doesn't exist, with `home/username/djangoenv/local/lib/python2.7/site-packages/neo4j/_backend.py", line 83, in <module> import jpype, os ImportError: No module named jpype` Can anyone tell what's happening here? Answer: Did you try installing it like in the tutorial? Eg, without a virtualenv, `sudo pip install /path/to/JPype-0.5.4.2.zip`? Aside- have you considered [neo4django](https://github.com/scholrly/neo4django)?
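Note the likely cause here: `apt-get` installs `python-jpype` into the system site-packages, while the traceback shows the interpreter running from a virtualenv (`djangoenv`), which by default does not see system packages. Two common fixes (the commands are illustrative):

# recreate the env with access to system packages
virtualenv --system-site-packages djangoenv

# or install JPype directly into the env (the PyPI package name may vary by version)
. djangoenv/bin/activate
pip install JPype1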
compiled pig move output to input Question: I'm trying to run a an embedded Pig script (embeded in Python) where I need to take the output/result of the script and feed it back into script as the input. I'm sure there is an easy way to do this but all the examples seem overly simplistic and are using one column examples. My input looks like this: networkMap.csv: NodeH,4,-0.4 NodeH,5,0.2 NodeO,6,0.1 Link,W_1_4,0.2,1,4 Link,W_1_5,-0.3,1,5 Link,W_2_4,0.4,2,4 Link,W_2_5,0.1,2,5 Link,W_3_4,-0.5,3,4 Link,W_3_5,-0.2,3,5 Link,W_4_6,-0.3,4,6 Link,W_5_6,-0.2,5,6 LR,LR,0.9 Target,Target,1 And lets take a super simple example of what I want to do striping out all of the application logic to just focus on the input/output problem: #!/usr/bin/python from org.apache.pig.scripting import * P = Pig.compile(""" A = LOAD '$input' using PigStorage(',') AS (type:chararray, name:chararray, val:double,iName:chararray,jName:chararray); STORE A INTO '$outFile' USING PigStorage (','); """) params = { 'input': 'networkMap.csv'} for i in range(2): outDir = "out_" + str(i + 1) inputString = "" params["outFile"] = "out_" + str(i + 1) bound = P.bind(params) stats = bound.runSingle() if not stats.isSuccessful(): raise 'failed' params["input"] = stats.result("Output1") I was hoping that I could just say input = output but that doesn't work. I've also tried: input = ""; iter = stats.result("A").iterator() while iter.hasNext(): tuple = iter.next() input = input + "(" +tuple.toDelimitedString(",") + ")" params["input"] = input This did push the output back into the input but then the LOAD function couldn't read it. since it looked like one big reccord - A = LOAD '(NodeI,1,1.0,,)(NodeI,2,0.0,,)(NodeI,3,1.0,,)(NodeH,4,-0.4,,)(NodeH,5,0.2,,)(NodeO,6,0.1,,)(Link,W_1_4,0.2,1,4)(Link,W_1_5,-0.3,1,5)(Link,W_2_4,0.4,2,4)(Link,W_2_5,0.1,2,5)(Link,W_3_4,-0.5,3,4)(Link,W_3_5,-0.2,3,5)(Link,W_4_6,-0.3,4,6)(Link,W_5_6,-0.2,5,6)(LR,LR,0.9,,)(Target,Target,1.0,,)' using PigStorage(',') AS (type:chararray, name:chararray, val:double,iName:chararray,jName:chararray); I'm sure I am missing some simple way of doing this. Answer: Quick answer: change params["input"] = stats.result("Output1") to params["input"] = params["outFile"] Explanation: Remember, your params array is for parameter substitution within your Pig script. That's why your next LOAD statement looked the way it did. You took the output of the previous run and said "take these results, put them into a string, and then interpret this string as the filename of the input data". You are almost there. You have two elements in your params dictionary: input and outFile. Your script LOADs from input and STOREs into outFile. So after you have run the script, set input = outFile. Then your next iteration will LOAD from outFile. Just be sure to specify a new outFile, or you will be unable to STORE because the directory will already exist.
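Putting that together, the driver loop from the question becomes something like the sketch below. Note that `raise 'failed'` (raising a string) is itself invalid on Python 2.6+, so a real exception is used here:

params = {'input': 'networkMap.csv'}
for i in range(2):
    params["outFile"] = "out_" + str(i + 1)
    bound = P.bind(params)
    stats = bound.runSingle()
    if not stats.isSuccessful():
        raise Exception('failed')
    params["input"] = params["outFile"]  # feed this run's output into the next run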
How to include python files of Java project into .war with maven? Question: I have Java project made with maven. So I have typical maven project layout. And I use Jython. So I got few python files. Wich I use through PythonInterpreter in Java classes. I place my python files in src/main/py folder. And I use this path to import the modules by interpreter. It works fine on my laptop. The problem is: When I do mvn install, this folder does not goes to the war. I read about maven resources plugin and added this folder as a resource. Like this: <resources> <resource> <directory>src/main/py</directory> </resource> <resource> <directory>src/main/resources</directory> </resource> </resources> I that case it adds everything that folder contents, to web-inf/ directly, but not in the src/main/py. So that path is invalid for application in war archive. Question is: How should I place this python resource and what I should write in pom.xml, to be able to use the same path on the laptop, and the server? Answer: Can you try including targetPath in the resource element as below: <resource> <targetPath>../</targetPath> <directory>src/main/py</directory> <includes> <include>**/*.py</include> </includes> </resource>
basic function not working -> 'name 'happyBirthdayEmily' is not defined' Question: I'm following [this guide](http://anh.cs.luc.edu/python/hands- on/3.1/handsonHtml/functions.html) and I can't get a basic function to work. `birthday2.py` def happyBirthdayEmily(): #program does nothing as written print("Happy Birthday to you!") print("Happy Birthday to you!") print("Happy Birthday, dear Emily.") print("Happy Birthday to you!") So following the guide I type this: >>>import birthday2 >>>happyBirthdayEmily This is the error I get: Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'happyBirthdayEmily' is not defined What Am I missing here? Answer: you should do: >>>import birthday2 >>>birthday2.happyBirthdayEmily() or: >>>from birthday2 import happyBirthdayEmily >>>happyBirthdayEmily() or: >>>from birthday2 import * >>>happyBirthdayEmily() Read more about modules [here](http://docs.python.org/2/tutorial/modules.html)
How to reconfigure tkinter canvas items? Question: I don't know if this question has duplicates, but I haven't found one yet. With Python you can create a GUI quickly, but sometimes you cannot find a method to do what you want. For example, I have the following problem: suppose there is a canvas called K containing a rectangle with ID=1 (the canvas item id, not a memory id). If I want to redraw the item, I can delete it and then recreate it with new settings.

K.delete(1)
K.create_rectangle(x1,y1,x2,y2,options...)

Here is the problem: the object id changes. How can I redraw, move or resize the rectangle, or simply change it, without changing its id, using a method? For example:

K.foo(1,options....)

If there isn't such a method, then I would have to keep a list of the canvas object ids, which is neither elegant nor fast. For example:

ItemIds=[None,None,etc...]
ItemIds[0]=K.create_rectangle(old options...)
K.delete(ItemIds[0])
ItemIds[0]=K.create_rectangle(new options...)

Answer: You can use [`Canvas.itemconfig`](http://effbot.org/tkinterbook/canvas.htm):

item = K.create_rectangle(x1,y1,x2,y2,options...)
K.itemconfig(item,options)

To move the item, you can use [`Canvas.move`](http://effbot.org/tkinterbook/canvas.htm#Tkinter.Canvas.move-method)

* * *

import Tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root)
canvas.pack()
item = canvas.create_rectangle(50, 25, 150, 75, fill="blue")

def callback():
    canvas.itemconfig(item,fill='red')

button = tk.Button(root,text='Push me!',command=callback)
button.pack()

root.mainloop()
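For resizing, which `itemconfig` options do not cover, `Canvas.coords` rewrites an item's coordinates in place while keeping its id:

# move/resize the existing rectangle without changing its id
canvas.coords(item, 25, 10, 175, 90)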
Ideas or algorithms when programming an NAT Question: I'm working on a Python tunneling project using TUNTAP. The data received on a TUNTAP interface contains the original IP packet including all headers. I can do one of two things. On the incoming side I am listening with Twisted. On the outgoing side I will have a raw socket which dumps the IP packet. Before dumping the packet the program swaps the source address with that of the server. It also recomputes the TCP and UDP checksums. It also swaps the ports using one of the following methods. This information is tracked in the NAT table 1) Use a single port per user such as US.ER.01.IP:10000 ----> SE.RV.ER.IP:3000 ----> facebook.com:80 US.ER.01.IP:10001 ----> SE.RV.ER.IP:3000 ----> facebook.com:80 US.ER.02.IP:3000 ----> SE.RV.ER.IP:3001 ----> facebook.com:80 Could this cause issues if the second with user's 1s simultaneous requests for facebook? How would the system know how to route facebook's reply. It is incoming on port 3000 so it belongs to user1 but does it get mapped back to 10000 or 10001? 2) Use a unique port for each connection such as US.ER.01.IP:10000 ----> SE.RV.ER.IP:3000 ----> facebook.com:80 US.ER.01.IP:10001 ----> SE.RV.ER.IP:3001 ----> facebook.com:80 US.ER.02.IP:3000 ----> SE.RV.ER.IP:3002 ----> remoteHost.com:22 How would I know when to remove entries from the NAT table? I could see the NAT table filling up very quickly using this method. The solutions to this are: I could wit for FIN packets from the server. This will not work with UDP though. I could age the NAT entry on each hit. I could then run garbage collection every N seconds. I see this being an issue if garbage collection runs and how would a server's delayed response get to the proper host if it gets deleted from the table. There is also the issue of reading from a raw socket. I know how to send on one but would it be possible to receive individual IP packets. Could the raw socket receive one packet per sock.recieve(65535) call possibly receive more than one IP packet? Which implementation is best? Any other tips or things I should be watching out for? EDITS: Ok so I have N many clients. If you misunderstood me the enitre /30 is used between the client and itself. It is just an abstraction to make the tunnel possible. I also didn't think it mattered but the websocket actaully goes through a "proxy" on the LAN (the IPdata is simply repackaged into a new websocket, the mappings are unique however). I did not want to make the explanation so confusing. I do not see how this changes anything. Client PC CLIENT PC Client PC----->LAN INTERNET Client 1: 10.1.1.2 ----> 10.1.1.1 ----> Websocket(IPdata) ----> Browser ---> newWebSocket(IPData) ----> SE.RV.ER.IP Client 2: 10.1.1.4 ----> 10.1.1.3 ----> Websocket(IPdata) ----> Browser ---> newWebSocket(IPData) ----> SE.RV.ER.IP Client 3: 10.1.1.6 ----> 10.1.1.5 ----> Websocket(IPdata) ----> Browser ---> newWebSocket(IPData) ----> SE.RV.ER.IP Each client set it's default route to be the tunnel endpoint (10.1.1.1 for example). The client gets the IP datagram, puts it into a websocket, sends the websocket to a browser on the LAN, which then sends it to the server (or perhaps another proxy). The inside of the websocket contains the original IP datagram (with the source of 10.1.1.2 or some other internal IP). It is important to note that the server recieves a websocket message from the internet CONTAINING the goodes (with the private source address). How would the python server use this? 
Create a new tunnel with itself then dump the packet raw into the tunnel and route appropriately? Or perhaps I could use a mapping? How would I be able to "map" a tunnel abstraction over this chain of websockets? The client does not have a route to the internet but can reach the "Browser" which can get to the internet. This seems to be the same case with VPN tunnels. The abstraction would be as follows: Client 1: 10.1.1.2 ----> 10.1.1.1 ----> Websocket(IPdata) ----> Browser ---> newWebSocket(IPData) ----> SE.RV.ER.IP -> Internet 10.1.2.2------------------------------------------------------------------------------------> 10.1.2.1 ----> Internet If you know any resources to get me on the right track that would be great! Answer: ### Implementing NAT You must use a unique port for each connection, not a single port per user, for exactly the reason that you outline in your question: if you don't then you can (and will!) end up with multiple connections using the same 5-tuple (protocol,local-address,local-port,remote-address,remote-port) and you won't be able to disambiguate them. Moreover, if you want to play nice with some protocols that do NAT traversal then you should try to _not_ remap the original source port if possible, that is, only remap it (to a new random port) if it conflicts with an existing connection which you are tracking. To implement a NAT correctly, you must track the state of each connection. For TCP this means watching the flags, setting up new state when you see a `SYN`, and tearing down the state when you see `FIN`s from both sides. The state you track must contain at least the original source port and the remapped source port (which might be the same, see above). If you want to support FTP then you will also have to sniff the contents of FTP TCP control connections and rewrite IP addresses contained therein (and this means you will need to track a lot more state because you may sometimes need to enlarge a TCP segment which means you need to start remapping sequence numbers). You should also have a time out associated with each tracked connection so that you get rid of it in case the endpoints disappear without closing the connection properly. For UDP this means watching the combinations of local and remote port numbers and creating state for each unique combination (of 4-tuple of addresses and ports) that you see. Because UDP is connectionless you have to expire this state information based on a timeout. This timeout will be much shorter than the one you use for TCP (on the order of minutes instead of hours) in order to prevent your state table from getting too large. For ICMP echo request you should proceed in a manner similar to UDP with the icmp_id playing the role of port number. For other types of ICMP like destination unreachable you must inspect the ICMP packet to see if it is part of a TCP or UDP connection you are tracking and attempt to translate it back to the original source. In order to prevent routing loops you should also be decrementing the IP TTL as you forward translated packets. There are probably some more important bits which I'm forgetting. In short, implementing NAT is a lot like implementing an IP stack for a router! That's why NAT is virtual always bolted on to an IP stack in the kernel, not implemented in userspace. ### Sending and receiving packets So the architecture as I understand it is this: 1. Client originates a packet which goes into the TUNTAP interface 2. 
Your software gets this packet, encapsulates it in a Websocket message, and sends it off 3. Your Twisted server gets it and does its magic 4. The translated packet goes out from the server through a raw socket The return path: 1. The reply comes back to your server somehow (perhaps [libpcap](http://www.tcpdump.org/pcap3_man.html)) 2. Your code does the reverse magic 3. Your server transits the result back to the client over Websocket 4. Client sees the resulting backet come back through the TUNTAP interface. I think the easiest way to handle the last step in the forward path and the first step in the return path is a second TUNTAP device: a `tun` interface on the server.
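For the "translated packet goes out through a raw socket" step, a raw IPv4 socket that sends pre-built packets (IP header included) looks roughly like this. A sketch only, with none of the NAT logic, and it requires root privileges:

import socket

# IPPROTO_RAW implies IP_HDRINCL on Linux: we supply the complete IP header
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)

def send_translated(packet, dst_ip):
    # packet is the already rewritten IP datagram (checksums recomputed)
    s.sendto(packet, (dst_ip, 0))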
How do I make Python, QT, and Webkit work on a headless server? Question: I have Debian Linux server that I use for a variety of things. I want it to be able to do some web-scraping jobs I need done regularly. This code can be [found here](http://bit.ly/QeqvzX). import sys from PyQt4.QtGui import * from PyQt4.QtCore import * from PyQt4.QtWebKit import * class Render(QWebPage): def __init__(self, url): self.app = QApplication(sys.argv, False) # Line updated based on mata's answer QWebPage.__init__(self) self.loadFinished.connect(self._loadFinished) self.mainFrame().load(QUrl(url)) self.app.exec_() def _loadFinished(self, result): self.frame = self.mainFrame() self.app.quit() A simple test of it would look like this: url = 'http://example.com' print Render(url).frame.toHtml() On the call to the constructor it dies with this message (it's printed to stdout, not an uncaught exception). : cannot connect to X server How can I use Python (2.7), QT4, and Webkit on a headless server? Nothing ever needs to be displayed, so I can tweek any settings or anything that need to be tweeked. I've looked into alternatives, but this is the best fit for me and my projects. If I did have to install an X server, how could I do it with minimal overhead? Answer: One of the constructors of `QApplication` takes a boolean argument [`GUIenabled`](http://www.riverbankcomputing.co.uk/static/Docs/PyQt4/html/qapplication.html#QApplication-2). If you use that, you can instantiante QAppliaction without an X server, but you can't create QWidgets. So in this case the only option is to use a virtual X server like [Xvfb](http://www.xfree86.org/4.0.1/Xvfb.1.html) to render the GUI. Xvfb can be installed and run using these commands (assuming you have apt-get installed). The code in the original question is in a file called `render.py`. sudo apt-get install xvfb xvfb-run python render.py
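If you would rather manage the virtual display from inside Python, the `pyvirtualdisplay` package wraps Xvfb (a sketch; install it with `pip install pyvirtualdisplay`):

from pyvirtualdisplay import Display

display = Display(visible=0, size=(1024, 768))
display.start()

# create the QApplication / Render objects here as usual

display.stop()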
Turning a string into a list with specifications Question: I want to create a list out of my string in Python that shows me how many times a letter appears in a row inside the string. For example: my_string = "google" I want to create a list that looks like this: [['g', 1], ['o', 2], ['g', 1], ['l', 1], ['e', 1]] Thanks! Answer: You could use [groupby](http://docs.python.org/2/library/itertools.html#itertools.groupby) from [itertools](http://docs.python.org/2/library/itertools.html):

from itertools import groupby

my_string = "google"
[[c, len(list(g))] for c, g in groupby(my_string)]

(The inner square brackets give you lists rather than tuples, matching the output in your example.)
Python lxml.html XPath "attribute not equal" operator not working as expected Question: I'm trying to run the following script: #!python from urllib import urlopen #urllib.request for python3 from lxml import html url = 'http://mpk.lodz.pl/rozklady/1_11_D2D3/00d2/00d2t001.htm?r=KOZINY'+\ '%20-%20Srebrzy%F1ska,%20Cmentarna,%20Legion%F3w,%20pl.%20Wolno%B6ci'+\ ',%20Pomorska,%20Kili%F1skiego,%20Przybyszewskiego%20-%20LODOWA' raw_html = urlopen(url).read() tree = html.fromstring(raw_html) #need to .decode('windows-1250') in python3 ret = tree.xpath('//td [@class!="naglczas"]') print ret assert(len(ret)==1) I expect it to select the one td that doesn't have its class set to 'naglczas'. Instead, it returns me an empty list. Why is that? I guess there's some silly reason, but I tried googling and found nothing that would explain it. Answer: Your xpath expression will find > a td element that has a class which is not "naglczas" You seem to want(since the only 3 td-s with a class have the same class you don't want) > a td element which does not have a class of "naglczas" Those might sound similar, but they are different. Something like tree.xpath('//td[not(@class="naglczas")]') should get you what you want. Also, you don't need to use urllib to open the url, lxml can do that for you, using `lxml.html.parse()`.
Setting a property doesn't work - dumb syntax error? Question: I'm probably making some elementary mistake... When I initialize and look at a property of an object, fine. But if I try to set it, the object doesn't update itself. I'm trying to define a property which I can set and get. To make it interesting, this rectangle stores twice its width instead of the width, so the getter and setter have something to do besides just copying. class Rect: """simple rectangle (size only) which remembers double its w,h as demo of properties """ def __init__(self, name, w): self.name=name self.dwidth=2*w def dump(self): print "dwidth = %f" % (self.dwidth,) def _width(self): return self.dwidth/2.0 def _setwidth(self,w): print "setting w=", w self.dwidth=2*w print "now have .dwidth=", self.dwidth width =property(fget=_width, fset=_setwidth) The .dwidth member variable would normally be private, but I want to peek at it easily in an interactive session. In a Python command line, I try it out: bash 0=> python Python 2.7.3 (default, Aug 1 2012, 05:14:39) [GCC 4.6.3] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from rectzzz import * >>> a = Rect("ack", 10.0) >>> a.dump() dwidth = 20.000000 >>> a.width 10.0 >>> a.width=100 >>> a.width 100 >>> a.dump() dwidth = 20.000000 >>> a.dwidth 20.0 >>> Why does .width seem to update, but the object's actual state as told by dump() and .dwidth not change? I'm especially puzzled why I never see "setting w=" followed by a number. Answer: class Rect: """simple rectangle (size only) which remembers double its w,h as demo of properties """ Should be: class Rect(object): """simple rectangle (size only) which remembers double its w,h as demo of properties """ In python 2.x, `property` only works properly if you inherit from `object`, so that you get the new style class. By default you get old-style classes for backwards compatibility. This has been fixed in python 3.x.
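For reference, the same property written with decorator syntax on a new-style class (equivalent behaviour, just more idiomatic):

class Rect(object):
    def __init__(self, name, w):
        self.name = name
        self.dwidth = 2 * w

    @property
    def width(self):
        return self.dwidth / 2.0

    @width.setter
    def width(self, w):
        self.dwidth = 2 * w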
python split a pandas data frame by week or month and group the data based on these sp Question: DateOccurred CostCentre TimeDifference 03/09/2012 2073 28138 03/09/2012 6078 34844 03/09/2012 8273 31215 03/09/2012 8367 28160 03/09/2012 8959 32037 03/09/2012 9292 30118 03/09/2012 9532 34200 03/09/2012 9705 27240 03/09/2012 10085 31431 03/09/2012 10220 22555 04/09/2012 6078 41126 04/09/2012 7569 31101 04/09/2012 8273 30994 04/09/2012 8959 30064 04/09/2012 9532 34655 04/09/2012 9705 26475 04/09/2012 10085 31443 04/09/2012 10220 33970 05/09/2012 2073 28221 05/09/2012 6078 27894 05/09/2012 7569 29012 05/09/2012 8239 42208 05/09/2012 8273 31128 05/09/2012 8367 27993 05/09/2012 8959 20669 05/09/2012 9292 33070 05/09/2012 9532 8189 05/09/2012 9705 27540 05/09/2012 10085 28798 05/09/2012 10220 23164 06/09/2012 2073 28350 06/09/2012 6078 35648 06/09/2012 7042 27129 06/09/2012 7569 31546 06/09/2012 8239 39945 06/09/2012 8273 31107 06/09/2012 8367 27795 06/09/2012 9292 32974 06/09/2012 9532 30320 06/09/2012 9705 37462 06/09/2012 10085 31703 06/09/2012 10220 7807 06/09/2012 14573 186 07/09/2012 0 0 07/09/2012 0 0 07/09/2012 2073 28036 07/09/2012 6078 31969 07/09/2012 7569 32941 07/09/2012 8273 30073 07/09/2012 8367 29391 07/09/2012 9292 31927 07/09/2012 9532 30127 07/09/2012 9705 27604 07/09/2012 10085 28108 08/09/2012 2073 28463 10/09/2012 6078 31266 10/09/2012 8239 16390 10/09/2012 8273 31140 10/09/2012 8959 30858 10/09/2012 9532 30794 10/09/2012 9705 28752 11/09/2012 0 0 11/09/2012 0 0 11/09/2012 0 0 11/09/2012 0 0 11/09/2012 0 0 11/09/2012 2073 28159 11/09/2012 6078 36835 11/09/2012 8239 45354 11/09/2012 8273 30922 11/09/2012 8367 31382 11/09/2012 8959 29670 11/09/2012 9292 33582 11/09/2012 9705 29394 11/09/2012 10085 17140 12/09/2012 2073 28283 12/09/2012 6078 31139 12/09/2012 7042 35063 12/09/2012 8273 31075 12/09/2012 8367 29795 12/09/2012 9292 33496 12/09/2012 9532 31669 12/09/2012 9705 26166 12/09/2012 10085 29889 12/09/2012 10220 35656 13/09/2012 2073 28144 13/09/2012 6078 30544 13/09/2012 7097 30866 13/09/2012 8273 30772 13/09/2012 8367 32387 13/09/2012 8959 29307 13/09/2012 9292 32348 13/09/2012 9532 28137 13/09/2012 9705 28823 13/09/2012 10085 31543 13/09/2012 10220 28293 14/09/2012 0 12433 14/09/2012 0 12434 14/09/2012 0 12434 14/09/2012 0 12434 14/09/2012 0 12434 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 12433 14/09/2012 0 0 14/09/2012 0 12433 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 1720 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 384 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 0 14/09/2012 0 383 14/09/2012 2073 28438 14/09/2012 6078 27255 14/09/2012 8273 29989 14/09/2012 8959 26892 14/09/2012 9292 33202 14/09/2012 9532 30862 14/09/2012 9705 26857 14/09/2012 10085 32657 14/09/2012 10220 27296 15/09/2012 6078 3832 17/09/2012 6078 30004 17/09/2012 7569 30390 17/09/2012 8239 41421 17/09/2012 8273 26337 17/09/2012 8367 31631 17/09/2012 8959 17989 17/09/2012 9292 35703 17/09/2012 9532 36542 17/09/2012 9705 27488 17/09/2012 10085 30849 17/09/2012 10220 32575 18/09/2012 2073 28293 18/09/2012 6078 27450 18/09/2012 7569 30323 18/09/2012 8239 38481 18/09/2012 8273 31154 18/09/2012 8367 27944 18/09/2012 8959 28196 18/09/2012 9292 30844 18/09/2012 9532 33128 18/09/2012 9705 32100 19/09/2012 2073 28227 
19/09/2012 6078 32243 19/09/2012 7569 29041 19/09/2012 8239 42791 19/09/2012 8273 30966 19/09/2012 8367 26420 19/09/2012 8959 29394 19/09/2012 9292 14865 19/09/2012 9532 23618 19/09/2012 10085 31614 19/09/2012 10220 8686 20/09/2012 2073 28260 20/09/2012 6078 30446 20/09/2012 7097 34909 20/09/2012 7569 30869 20/09/2012 8273 31079 20/09/2012 8367 30162 20/09/2012 9292 13104 20/09/2012 9532 36614 20/09/2012 9705 35617 20/09/2012 10085 31821 20/09/2012 10220 30055 20/09/2012 14573 468 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 3 21/09/2012 0 0 21/09/2012 0 0 21/09/2012 0 3 21/09/2012 2073 28308 21/09/2012 6078 33833 21/09/2012 7569 32335 21/09/2012 9292 33824 21/09/2012 9532 33376 21/09/2012 10220 21002 22/09/2012 2073 28402 23/09/2012 2073 28109 24/09/2012 2073 28431 24/09/2012 6078 30027 24/09/2012 7097 31914 24/09/2012 8239 35617 24/09/2012 8273 30670 24/09/2012 8367 29084 24/09/2012 8959 31023 24/09/2012 9292 34394 24/09/2012 9532 31255 24/09/2012 9705 18758 24/09/2012 10085 29290 24/09/2012 10220 33230 25/09/2012 2073 28506 25/09/2012 6078 32043 25/09/2012 7042 34953 25/09/2012 7569 30898 25/09/2012 8239 41297 25/09/2012 8273 31012 25/09/2012 8367 29645 25/09/2012 8959 29904 25/09/2012 9532 37875 25/09/2012 9705 13280 25/09/2012 10085 35023 25/09/2012 10220 31359 26/09/2012 2073 28388 26/09/2012 6078 29765 26/09/2012 7097 31561 26/09/2012 7569 29151 26/09/2012 8239 40369 26/09/2012 8367 28174 26/09/2012 8959 26554 26/09/2012 9292 32104 26/09/2012 9532 33194 26/09/2012 9705 30377 26/09/2012 10085 31503 26/09/2012 10220 28310 27/09/2012 0 0 27/09/2012 0 0 27/09/2012 0 0 27/09/2012 0 0 27/09/2012 0 0 27/09/2012 0 0 27/09/2012 0 0 27/09/2012 0 0 27/09/2012 2073 28491 27/09/2012 6078 31137 27/09/2012 8239 38403 27/09/2012 8273 31117 27/09/2012 8367 28462 27/09/2012 9292 32387 27/09/2012 9532 23023 27/09/2012 9705 32790 27/09/2012 10085 33460 27/09/2012 10220 31782 28/09/2012 0 161 28/09/2012 2073 28381 28/09/2012 7569 32322 28/09/2012 8239 38362 28/09/2012 8273 30533 28/09/2012 8959 17128 28/09/2012 9292 32484 28/09/2012 9532 18586 28/09/2012 9705 27902 29/09/2012 2073 28583 1. Above is a sample of a dataframe which has a million records 2. _**How can I slice or group it by Week or Month and sum seconds column by cost centre.?_ *** 3. I have read/tried 30 of the articles on this site which appear by doing a search for List item pandas, python, groupby, split, dataframe, week with out success. 4. I am using python 2.7 and pandas 0.9. 5. I've read the Time Series / Date functionality section in the pandas 0.9 tutorial but couldn't make anything work with a dataframe. I would like to use the features in there such as Business week **Expected Output** DateOccurred CostCentre TimeDifference 2012-03-11 0 500000 2012-03-11 2073 570000 2012-03-18 0 650000 2012-03-18 2073 425000 2012-03-25 0 378000 2012-04-25 2073 480000 Answer: Here's a way to take your input (as text) and group it the way you want. The key is to use a dictionary for each grouping (date, then centre). import collections import datetime import functools def delta_totals_by_date_and_centre(in_file): # Use a defaultdict instead of a normal dict so that missing values are # automatically created. by_date is a mapping (dict) from a tuple of (year, week) # to another mapping (dict) from centre to total delta time. by_date = collections.defaultdict(functools.partial(collections.defaultdict, int)) # For each line in the input... 
for line in in_file: # Parse the three fields of each line into date, int ,int. date, centre, delta = line.split() date = datetime.datetime.strptime(date, "%d/%m/%Y").date() centre = int(centre) delta = int(delta) # Determine the year and week of the year. year, week, weekday = date.isocalendar() year_and_week = year, week # Add the time delta. by_date[year_and_week][centre] += delta # Yield each result, in order. for year_and_week, by_centre in sorted(by_date.items()): for centre, delta in sorted(by_centre.items()): yield year_and_week, centre, delta For your sample input, it produces this output (where the first column is `year-week_of_the_year`). 2012-36 0 0 2012-36 2073 141208 2012-36 6078 171481 2012-36 7042 27129 2012-36 7569 124600 2012-36 8239 82153 2012-36 8273 154517 2012-36 8367 113339 2012-36 8959 82770 2012-36 9292 128089 2012-36 9532 137491 2012-36 9705 146321 2012-36 10085 151483 2012-36 10220 87496 2012-36 14573 186 2012-37 0 89522 2012-37 2073 113024 2012-37 6078 160871 2012-37 7042 35063 2012-37 7097 30866 2012-37 8239 61744 2012-37 8273 153898 2012-37 8367 93564 2012-37 8959 116727 2012-37 9292 132628 2012-37 9532 121462 2012-37 9705 139992 2012-37 10085 111229 2012-37 10220 91245 2012-38 0 6 2012-38 2073 169599 2012-38 6078 153976 2012-38 7097 34909 2012-38 7569 152958 2012-38 8239 122693 2012-38 8273 119536 2012-38 8367 116157 2012-38 8959 75579 2012-38 9292 128340 2012-38 9532 163278 2012-38 9705 95205 2012-38 10085 94284 2012-38 10220 92318 2012-38 14573 468 2012-39 0 161 2012-39 2073 170780 2012-39 6078 122972 2012-39 7042 34953 2012-39 7097 63475 2012-39 7569 92371 2012-39 8239 194048 2012-39 8273 123332 2012-39 8367 115365 2012-39 8959 104609 2012-39 9292 131369 2012-39 9532 143933 2012-39 9705 123107 2012-39 10085 129276 2012-39 10220 124681
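For completeness, in pandas itself the same weekly grouping can be written directly. This is a sketch against a newer pandas than the 0.9 in the question (`pd.Grouper` did not exist back then, so treat it as indicative; the file name is illustrative):

import pandas as pd

df = pd.read_csv('data.csv', parse_dates=['DateOccurred'], dayfirst=True)
weekly = (df.set_index('DateOccurred')
            .groupby([pd.Grouper(freq='W'), 'CostCentre'])['TimeDifference']
            .sum())
print(weekly)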
django-hstore on Heroku Question: I've got a Django (v 1.3.3) project deployed on Heroku (cedar stack). It uses the recommended dj_database_url for configuring settings.DATABASES. Everything works great (to this point). However, I want to start using [django- hstore](https://github.com/jordanm/django-hstore) for part of the application. According to the docs, you have to change the database engine in settings.py to: 'ENGINE': 'django_hstore.postgresql_psycopg2', As a result, in my settings.py file, I do the following: DATABASES = {'default': dj_database_url.config()} DATABASES['default']['ENGINE'] = 'django_hstore.postgresql_psycopg2' Everything works fine for me, locally. And my models that have hstore fields work great (values are dictionaries). However, when I deploy to Heroku, the database engine gets reset/overridden to: ENGINE: 'django.db.backends.postgresql_psycopg2' In an attempt at debugging it, I have put a print after setting the engine in my settings file. Then, I run bash: heroku run bash and then: python myapp/manage.py shell when I run this, my print statement shows me the correct (desired) database settings with the desired engine (django_hstore.postgresql_psycopg2). However, if I then do: from django.conf import settings print settings.DATABASES I can see the database engine is no longer django_hstore, but set back to the normal (non-hstore) value. And if I import one of my models and do a get to load an object, the value in the hstore field is a string, and any attempt to access a key will throw and error: TypeError: string indices must be integers, not str Please keep in mind that this works find locally. But, after deploying to heroku, any attempt at accessing values as dictionaries throws the TypeError above. My questions are: * Does anyone know why my engine is getting overridden? And if so, how do I fix this? or * Is there another way to use the hstore field with Django 1.3.3 that might not require having to change the engine (and therefore be a bit more Heroku friendly) Answer: [SQLAlchemy 0.8](http://www.sqlalchemy.org/) includes utility methods that can be used to create a [custom model](http://justcramer.com/2008/08/08/custom- fields-in-django/) for handling the conversion between Python `dict` and Postgres `hstore`. from django.db import models from sqlalchemy.dialects.postgresql.hstore import _parse_hstore, _serialize_hstore class HStoreField (models.TextField): __metaclass__ = models.SubfieldBase def __init__(self, *args, **kwargs): super(HStoreField, self).__init__(*args, **kwargs) def to_python(self, value): if value is None: return None if isinstance(value, dict): return value return _parse_hstore(value) def get_db_prep_save(self, value, connection): if value is None: return None if isinstance(value, str): return value return _serialize_hstore(value) def db_type (self, connection): return "hstore" This model is portable, but if you want to run queries based on hstore keys or values you'll have to write them in raw SQL. I use an SQLite in-memory database for running tests, which works fine as long as you use the `text` type for non-PostgreSQL backends: def db_type (self, connection): from django.db import connection if connection.settings_dict['ENGINE'] == \ 'django.db.backends.postgresql_psycopg2': return "hstore" else: return "text"
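Using the field is then ordinary model code; key-based filtering has to go through raw SQL with the hstore operators, e.g. `->` (a sketch; the model and table names are illustrative):

class Thing(models.Model):
    data = HStoreField()

t = Thing(data={'colour': 'red', 'size': 'L'})
t.save()

# filter on an hstore key with raw SQL
red_things = Thing.objects.raw(
    "SELECT * FROM myapp_thing WHERE data -> %s = %s",
    ['colour', 'red'])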
Boto script to download latest file from s3 bucket Question: I'd like to write a boto Python script to download the most recent file from an S3 bucket, i.e. for example I have 100 files in an S3 bucket and I need to download the most recently uploaded one. Is there a way to download the most recently modified file from S3 using Python boto? Thanks in advance Answer: You could list all of the files in the bucket and find the most recent one (using the last_modified attribute).

>>> import boto
>>> c = boto.connect_s3()
>>> bucket = c.lookup('mybucketname')
>>> l = [(k.last_modified, k) for k in bucket]
>>> key_to_download = sorted(l, cmp=lambda x,y: cmp(x[0], y[0]))[-1][1]
>>> key_to_download.get_contents_to_filename('myfile')

Note, however, that this would be quite inefficient if you had lots of files in the bucket. In that case, you might want to consider using a database to keep track of the files and dates to make querying more efficient.
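Since only the single newest key is needed, `max` avoids building and sorting the intermediate list (this relies on the same assumption as the sort above: boto's `last_modified` strings compare chronologically):

>>> key_to_download = max(bucket, key=lambda k: k.last_modified)
>>> key_to_download.get_contents_to_filename('myfile')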
PDF bleed detection Question: I'm currently writing a little tool (Python + pyPdf) to test PDFs for printer conformity. Alas I already get confused at the first task: Detecting if the PDF has at least 3mm 'bleed' (border around the pages where nothing is printed). I already got that I can't detect the bleed for the complete document, since there doesn't seem to be a global one. On the pages however I can detect a total of five different boxes:

* `mediaBox`
* `bleedBox`
* `trimBox`
* `cropBox`
* `artBox`

I read the [pyPdf documentation](http://pybrary.net/pyPdf/pythondoc-pyPdf.pdf.html#pyPdf.pdf.PageObject-class) concerning those boxes, but the only one I understood is the `mediaBox`, which seems to represent the overall page size (i.e. the paper). The `bleedBox` pretty obviously _ought_ to define the bleed, but that doesn't always seem to be the case. Another thing I noted was that for instance with this [PDF](http://www.cs.cmu.edu/~rwh/theses/okasaki.pdf), all those boxes have the exact same size (implying no bleed at all) on each page, but when I open it there's a huge amount of bleed; this leads me to think that the individual text elements have their own offset. So, obviously, just calculating the bleed from `mediaBox` and `bleedBox` is not a viable option. **I would be more than delighted if anyone could shed some light on what those boxes actually are and what I can conclude from that (e.g. is one box always smaller than another one).** Bonus question: Can someone tell me what exactly the _"default user space unit"_ mentioned in the [documentation](http://pybrary.net/pyPdf/pythondoc-pyPdf.pdf.html#pyPdf.pdf.PageObject-class) is? I'm pretty sure this refers to `mm` on my machine, but I'd like to enforce `mm` everywhere. Answer: Quoting from the PDF specification [ISO 32000-1:2008](http://www.adobe.com/content/dam/Adobe/en/devnet/acrobat/pdfs/PDF32000_2008.pdf) as published by Adobe:

> 14.11.2 Page Boundaries
>
> 14.11.2.1 General
>
> A PDF page may be prepared either for a finished medium, such as a sheet of paper, or as part of a prepress process in which the content of the page is placed on an intermediate medium, such as film or an imposed reproduction plate. In the latter case, it is important to distinguish between the intermediate page and the finished page. The intermediate page may often include additional production-related content, such as bleeds or printer marks, that falls outside the boundaries of the finished page. To handle such cases, a PDF page may define as many as five separate boundaries to control various aspects of the imaging process:
>
> * The media box defines the boundaries of the physical medium on which the page is to be printed. It may include any extended area surrounding the finished page for bleed, printing marks, or other such purposes. It may also include areas close to the edges of the medium that cannot be marked because of physical limitations of the output device. Content falling outside this boundary may safely be discarded without affecting the meaning of the PDF file.
>
> * The crop box defines the region to which the contents of the page shall be clipped (cropped) when displayed or printed. Unlike the other boxes, the crop box has no defined meaning in terms of physical page geometry or intended use; it merely imposes clipping on the page contents. However, in the absence of additional information (such as imposition instructions specified in a JDF or PJTF job ticket), the crop box determines how the page’s contents shall be positioned on the output medium. The default value is the page’s media box.
>
> * The bleed box (PDF 1.3) defines the region to which the contents of the page shall be clipped when output in a production environment. This may include any extra bleed area needed to accommodate the physical limitations of cutting, folding, and trimming equipment. The actual printed page may include printing marks that fall outside the bleed box. The default value is the page’s crop box.
>
> * The trim box (PDF 1.3) defines the intended dimensions of the finished page after trimming. It may be smaller than the media box to allow for production-related content, such as printing instructions, cut marks, or colour bars. The default value is the page’s crop box.
>
> * The art box (PDF 1.3) defines the extent of the page’s meaningful content (including potential white space) as intended by the page’s creator. The default value is the page’s crop box.
>
> The page object dictionary specifies these boundaries in the MediaBox, CropBox, BleedBox, TrimBox, and ArtBox entries, respectively (see Table 30). All of them are rectangles expressed in default user space units. The crop, bleed, trim, and art boxes shall not ordinarily extend beyond the boundaries of the media box. If they do, they are effectively reduced to their intersection with the media box. Figure 86 illustrates the relationships among these boundaries. (The crop box is not shown in the figure because it has no defined relationship with any of the other boundaries.)

Following that there is a nice graphic showing those boxes in relation to each other: ![PDF boxes illustrated](http://i.stack.imgur.com/hwpSx.png) The reasons why in many cases only the media box is set are 1. that in the case of PDFs meant for electronic consumption (i.e. reading on a computer) the other boxes hardly matter; and 2. that even in the prepress context they aren't as necessary anymore as they used to be, cf. the [article](http://www.prepressure.com/pdf/basics/page_boxes) Pedro refers to in his comment. Concerning your "bonus question": The user space unit is 1⁄72 inch by default; since PDF 1.6 it can be changed, though, to any (not necessarily integer) multiple of that size using the UserUnit entry in the page dictionary. Changing it in an existing PDF essentially scales it, as the user space unit is the basic unit in the device-independent coordinate system of a page. Therefore, unless you want to update each and every command in the page descriptions referring to coordinates to keep the page dimensions, you won't want to enforce a millimeter user space unit... ;)
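To make the unit handling concrete, here is a small sketch of the conversion — it just applies the 1⁄72-inch default (and an optional UserUnit factor) to turn box coordinates into millimeters; the function name is mine, not part of the pyPdf API:

    def user_units_to_mm(value, user_unit=1.0):
        # The default user space unit is 1/72 inch; UserUnit (PDF 1.6+) scales it.
        return float(value) * user_unit * 25.4 / 72.0

    # e.g. an A4 media box width of 595 units:
    print user_units_to_mm(595)  # ~209.9 mm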
How to parse single file using Python bindings to Clang? Question: I am writing a simple tool to help with refactoring the source code of our application. I would like to parse C++ code based on the wxWidgets library, which defines the GUI, and produce an XML `.ui` file to use with Qt. I need to get all function calls and the values of their arguments. Currently I am toying with the Python bindings to Clang; using the example code below I get the tokens and their kind and location, but the cursor kind is always `CursorKind.INVALID_FILE`.

    import sys
    import clang.cindex

    def find_typerefs(node):
        """ Find all references to the type named 'typename' """
        for t in node.get_tokens():
            if not node.location.file != sys.argv[1]:
                continue
            if t.kind.value != 0 and t.kind.value != 1 and t.kind.value != 4:
                print t.spelling
                print t.location
                print t.cursor.kind
                print t.kind
                print "\n"

    index = clang.cindex.Index.create()
    tu = index.parse(sys.argv[1])
    print 'Translation unit:', tu.spelling
    find_typerefs(tu.cursor)

What is the correct way to determine the cursor kind? I couldn't find any documentation except a few blog posts, but they were outdated or didn't cover this topic. Neither was I able to work it out from the examples that came with Clang. Answer: For cursor objects, it should be ok to just use cursor.kind. Maybe the problem is that you're walking tokens instead of child cursor objects (not sure about that). Instead of get_tokens, you can use get_children to walk the AST. In order to see how the AST looks, when I want to write an AST-walking function, I use this script: <https://gist.github.com/2503232>. This just shows cursor.kind, and gives sensible output on my system — no `CursorKind.INVALID_FILE`.
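For reference, a minimal cursor-based walk along those lines might look like the sketch below — it simply recurses with get_children and prints each cursor's kind, so you can see where the `CALL_EXPR` nodes you care about live:

    import sys
    import clang.cindex

    def dump(cursor, depth=0):
        # Every node reached through get_children is a real cursor,
        # so cursor.kind is meaningful here (e.g. CursorKind.CALL_EXPR).
        print '%s%s %s' % ('  ' * depth, cursor.kind, cursor.spelling)
        for child in cursor.get_children():
            dump(child, depth + 1)

    index = clang.cindex.Index.create()
    tu = index.parse(sys.argv[1])
    dump(tu.cursor)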
in python error while importing GeoIP on CentOs server Question: I am using Django, and when I use the geoip package or import GeoIP I get the following error on CentOS, while it works well on Ubuntu 12.04. The error is as follows:

    from django.contrib.gis.utils.geoip import GeoIP
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.6/site-packages/django/contrib/gis/utils/geoip.py", line 67, in <module>
        'Try setting GEOIP_LIBRARY_PATH in your settings.' % lib_name)
    django.contrib.gis.utils.geoip.GeoIPException: Could not find the GeoIP library (tried "GeoIP"). Try setting GEOIP_LIBRARY_PATH in your settings.

Please try to help me out, I can't go further without this. Answer: Try doing this on both of your systems:

    $ echo $GEOIP_LIBRARY_PATH

and compare the output. From the error message, it sounds like you will get a directory path on Ubuntu; make sure the same path is set up on CentOS, like so (on the CentOS system):

    $ export GEOIP_LIBRARY_PATH=$GEOIP_LIBRARY_PATH:<path returned from Ubuntu system>
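Alternatively — and this is what the traceback itself suggests — you can point Django directly at the shared library from settings.py. The path below is only an example; find where libGeoIP actually lives on your CentOS box first (e.g. with `locate libGeoIP`):

    # settings.py
    GEOIP_LIBRARY_PATH = '/usr/lib64/libGeoIP.so.1'  # example path, adjust to your system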
concurrent connections in tornado Question: I have a server running on tornado. I have a page that opens a `websocket` to the same server. Now I have observed that opening multiple instances of this page makes all of them wait except one. Only after that one has finished its websocket does another one start. Is this normal tornado behaviour, or am I doing something wrong? Earlier my server was running on `django`, but I migrated to `tornado` for the websocket support. For that I use Django as the fallback server.

    #!/usr/bin/env python
    # Run this with
    # PYTHONPATH=. DJANGO_SETTINGS_MODULE=testsite.settings testsite/tornado_main.py
    # Serves by default at
    # http://localhost:8080/hello-tornado and
    # http://localhost:8080/hello-django
    from tornado.options import options, define, parse_command_line
    import django.core.handlers.wsgi
    import tornado.httpserver
    import tornado.ioloop
    import tornado.web
    import tornado.wsgi

    define('port', type=int, default=8000)

    class HelloHandler(tornado.web.RequestHandler):
        def get(self):
            self.write('Hello from tornado')

    def main():
        wsgi_app = tornado.wsgi.WSGIContainer(
            django.core.handlers.wsgi.WSGIHandler())
        tornado_app = tornado.web.Application(
            [
                ('/hello-tornado', HelloHandler),
                ('.*', tornado.web.FallbackHandler, dict(fallback=wsgi_app)),
            ])
        server = tornado.httpserver.HTTPServer(tornado_app)
        server.listen(options.port)
        tornado.ioloop.IOLoop.instance().start()

    if __name__ == '__main__':
        main()

Can I do something that allows me to handle multiple connections? Answer: You need to look into the async facilities in Tornado to get this to work properly. Tornado in its normal state is a single-threaded stack and thus you can only handle one connection at a time. You can use the normal @asynchronous decorator or use their gen library to allow your code to handle multiple connections. Decorator: <http://www.tornadoweb.org/documentation/web.html#decorators> Gen: <http://www.tornadoweb.org/documentation/gen.html> Read the documentation carefully if you choose to use the @asynchronous decorator, as you need to close the connection when you are done with it.
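As a rough illustration of the callback-free style the gen module enables — this is a generic sketch for the Tornado 2.x/3.x era, not your websocket handler, and the URL is just a placeholder:

    import tornado.gen
    import tornado.httpclient
    import tornado.web

    class AsyncHandler(tornado.web.RequestHandler):
        @tornado.web.asynchronous
        @tornado.gen.engine
        def get(self):
            client = tornado.httpclient.AsyncHTTPClient()
            # While this fetch is pending, the IOLoop is free to serve
            # other connections instead of blocking on this one.
            response = yield tornado.gen.Task(client.fetch, "http://example.com/")
            self.write(response.body)
            self.finish()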
python/beautifulsoup to find all <a href> with specific anchor text Question: I am trying to use beautiful soup to parse html and find all href with a specific anchor tag <a href="http://example.com">TEXT</a> <a href="http://example.com/link">TEXT</a> <a href="http://example.com/page">TEXT</a> all the links I am looking for have the exact same anchor text, in this case TEXT. I am NOT looking for the word TEXT, I want to use the word TEXT to find all the different HREF edit: for clarification looking for something similar to using the class to parse for the links <a href="http://example.com" class="visible">TEXT</a> <a href="http://example.com/link" class="visible">TEXT</a> <a href="http://example.com/page" class="visible">TEXT</a> and then using findAll('a', 'visible') except the HTML I am parsing doesn't have a class but always the same anchor text Answer: Would something like this work? In [39]: from bs4 import BeautifulSoup In [40]: s = """\ ....: <a href="http://example.com">TEXT</a> ....: <a href="http://example.com/link">TEXT</a> ....: <a href="http://example.com/page">TEXT</a> ....: <a href="http://dontmatchme.com/page">WRONGTEXT</a>""" In [41]: soup = BeautifulSoup(s) In [42]: for link in soup.findAll('a', href=True, text='TEXT'): ....: print link['href'] ....: ....: http://example.com http://example.com/link http://example.com/page
Why does this work via the command line, but not via a web browser? Question: Why does this work via the command line, but not via a web browser? (Both files are in Python; only the 2nd one loads.)

    import cgi
    import cgitb; cgitb.enable()
    import BaseHTTPServer
    from SimpleHTTPServer import SimpleHTTPRequestHandler

    # get the info from the html form
    form = cgi.FieldStorage()

    #set up the html stuff
    reshtml = """Content-Type: text/html\n
    <html>
    <head><title>login</title></head>
    <body>
    """
    print reshtml

    User_Name = form.getvalue('User_Name')
    password = form.getvalue('Pass_Word')

    log="info: "

    if User_Name == 'NAME' and password == 'passcode':
        log=log+"login passed "
    else:
        log=log+"login failed "
    print log

    print '</body>'
    print '</html>'

I invoke it using a file that passes in the parameters "User_Name" and "Pass_Word":

    #!/Python27/python
    print "Content-type: text/html"
    print
    print """
    <html><head>
    <title>log in</title>
    </head><body>
    """

    import sha, time, Cookie, os, fileinput, cgi, cgitb
    import sys
    import BaseHTTPServer
    from SimpleHTTPServer import SimpleHTTPRequestHandler

    log = "info: "

    cookie = Cookie.SimpleCookie()
    string_cookie = os.environ.get('HTTP_COOKIE')
    string_cookie = str(string_cookie)

    infoform = """
    <link rel="stylesheet" type="text/css" href="styles/home.css" media="screen" />

    <div id="top" style=" height:50px;width:100%;position:absolute;left:0px;top:0px;" >
    <form action="router.py" method="post" target="_self">
    <div id="form" style="position:absolute;left:15px;top:20px;" >
    <input type="text" name="User_Name" value="User Name">
    <input type="password" name="Pass_Word" value="Password">
    <input type="hidden" name="status_url">
    <input type="submit" value="Log in">
    </div>
    </form>
    <div ID"home"></div>
    </div>

    <div id="GBbody" style="position:absolute;left:1px;top:55px;height:100%;width:100%;" >
    <p id="disply">
    <center><h2>
    Hi, and welcome!
    </h2></center>
    <h4>
    Did you know: just the movie player and code that goes with it takes<br>
    474 lines of code so-far, and may not work many web browsers (for the time being), however <br>
    Google chrome is fully compatible at this point.
    </h4>
    </p>
    </div>
    """

    loginas = """
    <form action="home.py" method="post" target="_self">
    <input type="submit" value="Continue last session" style="position:absolute;right:15px;top:20px;">
    </form>
    """

    if "sess" in string_cookie:
        infoform = infoform.replace('<div ID"home"></div>',loginas)
        print infoform
    else:
        print infoform

    print "</body></html>"

But it prints out an Internal Server Error page instead. Help? P.S. I have just reinstalled Python 2.7.3. Answer: Try using Flask — it not only has debug support but all the tools you need to build a web service. If you don't want to use Flask, look at your server's (Apache or nginx) error logs to see the actual traceback behind the Internal Server Error.
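One quick way to see the actual traceback in the browser while you debug: the second script already imports `cgitb`, it just never enables it. Something like this near the top should do it:

    import cgitb
    cgitb.enable()  # render full tracebacks in the browser instead of a bare 500 page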
HttpError: <HttpError 400 when requesting https://www.googleapis.com/bigquery/v2/projects/ Question: Here is the error that I'm getting while trying to make an authentication call to BigQuery:

    HttpError: <HttpError 400 when requesting https://www.googleapis.com/bigquery/v2/projects/ClientId/datasets/samples/tables/natality?alt=json returned "Invalid project ID 'ClientId'. Project IDs must contain 6-63 lowercase letters, digits, or dashes. IDs must start with a letter and may not end with a dash.">

Here is my main.py:

    import httplib2
    import os

    from google.appengine.api import memcache
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app
    from oauth2client.appengine import oauth2decorator_from_clientsecrets

    from bqclient import BigQueryClient

    PROJECT_ID = "########"  # this is the Client Id
    DATASET = "samples"
    TABLE = "natality"

    CLIENT_SECRETS = os.path.join(os.path.dirname(__file__), 'client_secrets.json')

    http = httplib2.Http(memcache)
    decorator = oauth2decorator_from_clientsecrets(CLIENT_SECRETS,
        'https://www.googleapis.com/auth/bigquery')

    bq = BigQueryClient(http, decorator)

    class MainHandler(webapp.RequestHandler):
        @decorator.oauth_required
        def get(self):
            self.response.out.write("Hello Dashboard!\n")
            modTime = bq.getLastModTime(PROJECT_ID, DATASET, TABLE)
            if modTime is not None:
                msg = 'Last mod time = ' + modTime
            else:
                msg = "Could not find last modification time.\n"
            self.response.out.write(msg)

    application = webapp.WSGIApplication([
        ('/', MainHandler),
        (decorator.callback_path, decorator.callback_handler())
    ], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()

And here is the app.yaml:

    application: hellomydashboard
    version: 1
    runtime: python
    api_version: 1

    handlers:
    - url: /favicon\.ico
      static_files: favicon.ico
      upload: favicon\.ico

    - url: .*
      script: main.py

And here is the bqclient.py:

    import httplib2

    from apiclient.discovery import build
    from oauth2client.appengine import oauth2decorator_from_clientsecrets

    class BigQueryClient(object):
        def __init__(self, http, decorator):
            """Creates the BigQuery client connection"""
            self.service = build('bigquery', 'v2', http=http)
            self.decorator = decorator

        def getTableData(self, project, dataset, table):
            decorated = self.decorator.http()
            return self.service.tables().get(projectId=project, datasetId=dataset,
                tableId=table).execute(decorated)

        def getLastModTime(self, project, dataset, table):
            data = self.getTableData(project, dataset, table)
            if data is not None and 'lastModifiedTime' in data:
                return data['lastModifiedTime']
            else:
                return None

        def Query(self, query, project, timeout_ms=10000):
            query_config = {
                'query': query,
                'timeoutMs': timeout_ms
            }
            decorated = self.decorator.http()
            result_json = (self.service.jobs()
                           .query(projectId=project, body=query_config)
                           .execute(decorated))
            return result_json

I also tried replacing the ClientId with the Project ID as the error suggests, but it gives another error:

    HttpError: <HttpError 404 when requesting https://www.googleapis.com/bigquery/v2/projects/hellodashboard87/datasets/samples/tables/natality?alt=json returned "Not Found: Dataset hellodashboard87:samples">

I'm following the tutorial on this page <https://developers.google.com/bigquery/articles/dashboard#firstcall> Answer: In order to use BigQuery, you must create a project in the [APIs Console](https://code.google.com/apis/console) that has BigQuery enabled (I'm assuming you have done this). Once you've created the project, you'll be able to get the **project number** from the URL, e.g.
https://code.google.com/apis/console/#project:12345XXXXXXX In the example, the project number is `12345XXXXXXX` and this is the value you would use for `PROJECT_ID`.
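So in the example above, the fix is a one-line change in main.py (the number here is of course illustrative):

    PROJECT_ID = "12345XXXXXXX"  # the numeric project number, not the OAuth client ID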
Navigating JSON objects in a generic way in Haskell Question: My goal is to write a program in Haskell that takes the name of a json file and interprets the rest of the arguments as a path to navigate that json file by and print the value navigated to. The problem is because JSON can contain multiple value types, I don't know how to make Haskell's type system understand what I want. Here is the Haskell code with the "navigate" function I'm not able to implement correctly: import qualified Data.Aeson as A import qualified Data.ByteString.Char8 as BS import qualified Data.ByteString.Lazy.Char8 as BSL import Data.List import Data.Maybe import System.Environment parse :: String -> A.Value parse = fromJust . A.decode . BSL.pack isInteger xs = case reads xs :: [(Integer, String)] of [(_, "")] -> True _ -> False navigate :: A.Value -> String -> String navigate value [] = value navigate value [x:xs] | isInteger x = ??? -- value is an array, get the xth element of it. | otherwise = ??? -- value is an map, x is a key in it. main :: IO () main = do [filename:path] <- getArgs contents <- readFile filename let d = parse contents putStrLn (show (navigate d path)) For reference, here is how the same program would have been written in Python: from json import load from sys import argv def navigate(obj, path): if not path: return obj head, tail = path[0], path[1:] return navigate(obj[int(head) if head.isdigit() else head], tail) if __name__ == '__main__': fname, path = argv[1], argv[2:] obj = load(open(fname)) print navigate(obj, path) The program would be run like this: $ cat data.json {"foo" : [[1, 2, 3, {"bar" : "barf"}]]} $ python showjson.py data.json foo 0 3 bar barf Answer: You can simply pattern match on the constructors of `A.Value` in order to figure out what kind of JSON object you are dealing with: import qualified Data.HashMap.Strict as M import qualified Data.Vector as V import qualified Data.Text as T -- ... rest of the code more or less as before ... navigate :: A.Value -> [String] -> BSL.ByteString navigate value [] = A.encode value navigate (A.Array vs) (x : xs) = navigate (vs V.! read x) xs navigate (A.Object o) (x : xs) = navigate (o M.! T.pack x) xs Note that the definition of `A.Value` is as follows: data Value = Object !(HashMap Text Value) | Array !(Vector Value) | ... -- other constructors The code for `navigate` is thus making use of the lookup function (called `!` in both cases) on vectors and hash maps. The function `read` is used to interpret a command line argument as a number if needed (and will fail horribly if it isn't), whereas `T.pack` is reinterpreting the string as a value of type `Text`.
Access modules from different directory Question: I have the following file structure:

    python/
        A/
            a.py
            b.py
        B/
            c.py

So there are two folders, A and B, under the folder python, and I want to import the modules a.py and b.py into c.py (in folder B). I wrote an `__init__.py` with `__all__=['modulename']` and placed it in folder `B`. Now in my `c.py` I import the modules using an `import` statement. This however does not seem to work. Any thoughts on how I can achieve the import of the modules in the above case? Answer:

1. Add the full path of `python` to `PYTHONPATH` in your environment.
2. Add an **empty** `__init__.py` file inside `A`.

Then in `c.py`:

    from A.a import something
    from A.b import something
    # or
    import A.a, A.b
    A.a.something
    A.b.something
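If you'd rather not touch the environment at all, a common (if slightly hacky) alternative is to extend sys.path at the top of c.py — a sketch, assuming the layout above:

    # B/c.py
    import os
    import sys
    # Add the parent "python" folder, which contains package A, to the search path.
    sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), '..'))

    from A.a import something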
Why are Python builds suddenly not Framework builds when using virtualenv? Question: I've installed Python 2.7 as a Framework build on my Mac. I've installed and confirmed that wxPython works with this Python build. But when I create a virtual environment with virtualenv, the resulting Python executable isn't a Framework build, or at least not according to wxPython:

    This program needs access to the screen. Please run with a Framework build of python, and only when you are logged in on the main display of your Mac.

Any ideas? Answer: Another solution is to add the following script to /Path/To/VirtualEnv/bin:

    ENV=`python -c "import sys; print sys.prefix"`
    PYTHON=`python -c "import sys; print sys.real_prefix"`/bin/python
    export PYTHONHOME=$ENV
    exec $PYTHON "$@"

Then, whenever you want to run a GUI (e.g. wxPython), use `my_script main.py` (and make sure that the virtualenv is _active_).
Redirecting the output of a python function from STDOUT to variable in Python Question: This is what I am trying to achieve:

    def fun():
        runner = InteractiveConsole()
        while(True):
            code = raw_input()
            code.rstrip('\n')
            # I want to achieve the following
            # By default the output and error of the 'code' is sent to STDOUT and STDERR
            # I want to obtain the output in two variables out and err
            out,err = runner.push(code)

All the solutions that I have looked at so far use pipes to issue separate script execution commands (which is not possible in my case). Is there any other way I can achieve this? Answer:

    import StringIO, sys
    from contextlib import contextmanager

    @contextmanager
    def redirected(out=sys.stdout, err=sys.stderr):
        saved = sys.stdout, sys.stderr
        sys.stdout, sys.stderr = out, err
        try:
            yield
        finally:
            sys.stdout, sys.stderr = saved

    def fun():
        runner = InteractiveConsole()
        while True:
            code = raw_input()
            code = code.rstrip('\n')
            out = StringIO.StringIO()
            err = StringIO.StringIO()
            with redirected(out=out, err=err):
                # Anything the pushed code prints now lands in the
                # StringIO buffers instead of the real STDOUT/STDERR.
                runner.push(code)
            output = out.getvalue()
            errors = err.getvalue()
            print output, errors
Common pathname manipulations to save a file in a specific directory Question: I am writing a function where I read `inFile` in order to split it into two files (outFile1, outFile2). What I want is: if `outFile1` and/or `outFile2` are specified without a pathname directory (ex: `outFile1="result1.txt"` and `outFile2="result2.txt"`), both files are saved in the same directory as `inFile` (ex: inFile="C:\mydata\myfile.txt"). If the pathname directory for the output files is present, I wish to save the results in that directory. When I don't give the `outFile` pathname directory, the files are saved in the same directory as my python script.

    def LAS2LASDivide(inFile,outFile1,outFile2,Parse,NumVal):
        inFile_path, inFile_name_ext = os.path.split(os.path.abspath(inFile))
        outFile1_path, outFile1_name_ext = os.path.split(os.path.abspath(outFile1))
        outFile2_path, outFile2_name_ext = os.path.split(os.path.abspath(outFile2))
        outFile1_name = os.path.splitext(outFile1_name_ext)[0]
        outFile2_name = os.path.splitext(outFile2_name_ext)[0]

example:

    inFile="C:\\mydoc\\Area_18.las"
    outFile1="Area_18_overlap.las"
    outFile2="Area_18_clean.las"

    inFile_path, inFile_name_ext = os.path.split(os.path.abspath(inFile))
    inFile_path, inFile_name_ext
    ('C:\\mydoc', 'Area_18.las')

    outFile1_path, outFile1_name_ext = os.path.split(os.path.abspath(outFile1))
    outFile1_path, outFile1_name_ext
    ('C:\\Program Files\\PyScripter', 'Area_18_overlap.las')

This is all my code (tested), modified with the suggestion of mgilson:

    import os
    from os import path
    from liblas import file as lasfile

    def LAS2LASDivide(inFile,outFile1,outFile2,Parse,NumVal):
        inFile_path, inFile_name_ext = os.path.split(os.path.abspath(inFile))
        outFile1_path, outFile1_name_ext = os.path.split(os.path.abspath(outFile1))
        outFile2_path, outFile2_name_ext = os.path.split(os.path.abspath(outFile2))
        outFile1_name = os.path.splitext(outFile1_name_ext)[0]
        outFile2_name = os.path.splitext(outFile2_name_ext)[0]

        if outFile1_name != outFile2_name:
            # function pseudo_switch
            def pseudo_switch(x):
                return {
                    "i": p.intensity,
                    "r": p.return_number,
                    "n": p.number_of_returns,
                    "s": p.scan_direction,
                    "e": p.flightline_edge,
                    "c": p.classification,
                    "a": p.scan_angle,
                }[x]

            h = lasfile.File(inFile,None,'r').header
            # change the software id to libLAS
            h.software_id = ""

            if not os.path.split(outFile1)[0]:
                file_out1 = lasfile.File(os.path.abspath("{0}\\{1}.las".format(inFile_path,outFile1_name)),mode='w',header= h)
            else:
                file_out1 = lasfile.File(os.path.abspath("{0}\\{1}.las".format(outFile1_path,outFile1_name)),mode='w',header= h)

            if not os.path.split(outFile2)[0]:
                file_out2 = lasfile.File(os.path.abspath("{0}\\{1}.las".format(inFile_path,outFile2_name)),mode='w',header= h)
            else:
                file_out2 = lasfile.File(os.path.abspath("{0}\\{1}.las".format(outFile2_path,outFile2_name)),mode='w',header= h)

            for p in lasfile.File(inFile,None,'r'):
                if pseudo_switch(Parse) == int(NumVal):
                    file_out1.write(p)
                elif pseudo_switch(Parse) != int(NumVal):
                    file_out2.write(p)

            file_out1.close()
            file_out2.close()

        else:
            print "outFile1 and outFile2 cannot have the same name"

Answer: What about something like this?

    def new_path(fcheck,fpath):
        """
        fcheck --> filename to check
        fpath  --> file with path component to transfer
                   if fcheck has no path component
        """
        head,tail = os.path.split(fcheck)
        return os.path.join(os.path.split(fpath)[0],tail) if not head else fcheck

    new_path('foo/bar','baz/qux') #'foo/bar' -- has path component. Leave alone
    new_path('bar','baz/qux')     #'baz/bar' -- No path component. Transfer
    new_path('bar','qux')         #'bar'     -- Neither has path component. Path must be '.'. Leave alone.
Suppress cursor.execute() message in python MySQLdb Question: How to suppress the cursor.execute() message in MySQLdb. >>> from warnings import filterwarnings >>> import MySQLdb >>> filterwarnings('ignore', category = MySQLdb.Warning) >>> db = MySQLdb.connect('127.0.0.1', 'root', '','') >>> cursor = db.cursor() >>> cursor.execute("select version()") 1L I need to suppress this '1L' message Answer: What you see there is not a warning message, but the return value of `cursor.execute()`. It's the number of rows affected, 1. The API happens to return a Python [`long` integer](http://docs.python.org/2/library/stdtypes.html#numeric-types-int- float-long-complex), but it's otherwise the same as a regular `int` value: >>> 1L 1L >>> 1 1 >>> 1 == 1L True If you do not want the Python console to echo return values back to you, assign them to a variable: >>> somevariable = 1L
python - matplotlib - How to specify the scale for x axis Question: > **Possible Duplicate:** > [Python, Matplotlib, subplot: How to set the axis > range?](http://stackoverflow.com/questions/2849286/python-matplotlib- > subplot-how-to-set-the-axis-range) I would like to specify the scale of the x-axis for my scatter plot similar to excel. For example, I feed in the x axis values as follows: x_values = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100] However, only the numbers with an even 1st digit appear. Is there a way to make it such that all numbers appear on the x-axis? Thanks, Parth Answer: You need [`matplotlib.pyplot.xticks`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.xticks). In your example: xticks(x_values) Prepend the function call with the module name, depending on your imports. For instance, in `ipython --pylab` I enter: In [1]: x_values = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100] In [2]: scatter(x_values, x_values) Out[2]: <matplotlib.collections.PathCollection at 0x39686d0> In [3]: xticks(x_values) and get ![enter image description here](http://i.stack.imgur.com/eiAdn.png) Note that this leaves some space at the right, because initially the ticks were up to 120 at every 20. To have it at every 10 without knowing the maximum, you can make two calls to `xticks`: xticks(range(0, int(xticks()[0][-1])+1, 10))
Stop New Relic / Celery From Polluting Logs on Heroku Question: # Question How can you stop New Relic / Celery from constantly printing the below to Heroku's log. app[scheduler.1]: [INFO/MainProcess] Starting new HTTP connection (1): collector-6.newrelic.com app[scheduler.1]: [INFO/MainProcess] Starting new HTTP connection (1): collector-6.newrelic.com app[scheduler.1]: [INFO/MainProcess] Starting new HTTP connection (1): collector-6.newrelic.com app[scheduler.1]: [INFO/MainProcess] Starting new HTTP connection (1): collector-6.newrelic.com ## Setup The above log entries appear every minute. The app reporting this is a celery worker started via `Procfile` that reads (in part): scheduler: newrelic-admin run-program python manage.py celery worker --loglevel=ERROR -B -E --maxtasksperchild=1000 Despite setting `loglevel` to `ERROR` it appears that celery is ignoring this argument. Interestingly enough, on my local machine, this setting is respected. ### Versions celery==3.0.12 django-celery==3.0.11 newrelic==1.6.0.13 Answer: Temporary solution, per New Relic support, is to add the below to your settings.py or a celeryconfig.py (depending on which you use for celery settings). import logging class RequestsConnectionFilter(logging.Filter): def filter(self, record): return False logging.getLogger('newrelic.lib.requests.packages.urllib3.connectionpool').addFilter(RequestsConnectionFilter()) NR support told me they plan to build this suppression into future versions of the python agent.
Prepending a function docstring with a decorator in python Question: How would you prepend a function's docstring with a decorator?

    def g(func):
        someOtherDocString = "That is great"
        def wrap(*args, **kwargs):
            func(*args, **kwargs)
        return wrap

    @g
    def f():
        """ This is awesome """

result:

    >>>help(f)
    Help on function f in module __main__:
    f()
        That is great
        That is awesome

All help would be greatly appreciated. Answer: Have you tried the magic `__doc__` attribute?

    from functools import wraps

    def g(func):
        # Guard against functions without a docstring (func.__doc__ is None),
        # and add a newline so the prepended text gets its own line in help().
        func.__doc__ = "That is great\n" + (func.__doc__ or "")
        @wraps(func)
        def wrap(*args, **kwargs):
            return func(*args, **kwargs)
        return wrap
Scapy PcapReader and packets time Question: I'm reading a PCAP file using Scapy with a script such as the (simplified) following one:

    #! /usr/bin/env python
    from scapy.all import *

    # ...

    myreader = PcapReader(myinputfile)
    for p in myreader:
        pkt = p.payload
        print pkt.time

In this case the packet times are not relative to the PCAP capture time, but start from the instant I've launched my script. I'd like them to start from `0.0` or to be relative to the PCAP capture. How can I fix it (possibly without "manually" retrieving the first packet time and repeatedly using math to fix the problem)? Answer: It turns out that reading `pkt.time` — the time of the extracted payload layer — is wrong in this case; `p.time` on the packet itself holds the capture timestamp, so that is what should be printed instead.
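If you also want times relative to the start of the capture, one pass over the reader is enough — a small sketch along those lines, reusing `myinputfile` from above:

    from scapy.all import PcapReader

    first = None
    for p in PcapReader(myinputfile):
        if first is None:
            first = p.time       # timestamp of the first captured packet
        print p.time - first     # seconds since the start of the capture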
Running Salome script without graphics Question: I exported a script from Salome (dump), and I want to run it in python (I'm doing some geometric operations and I don't need any graphics). So I removed all the graphic commands, but when I try to launch my python file, python cannot find the salome libraries. I tried to export the salome path ('install_path'/appli_V6_5_0p1/bin/salome/) in PYTHONPATH and LD_LIBRARY_PATH but it still doesn't work. I would also like to know if it's possible to use only the geompy library without salome, and if so, how can I install only the geompy library? (I need to launch some geompy scripts on a UAV with only 8gb of memory, so the less I install, the better.) Answer: I had similar wishes to you, but after much searching I ended up concluding that what we both want to do is not completely possible. In order to run a salome script on the command line without the GUI, use `salome -t python script.py` or simply `salome -t script.py`. In order to run a salome script you must call it using the salome executable. It seems that you cannot use the salome libraries (by importing them into a python script that is then called with `python script.py`) without the compiled program. The executables that salome uses contain much of what the platform needs to do its job. This frustrated me for a long time, but I found a workaround; for a simple example, if you have a salome script you can call the salome executable from within another python program with `os.system("salome -t python script.py")` But now you have a problem; salome does not automatically kill the session, so if you run the above command a number of times your system will become clogged up with multiple instances of running salome processes. These can be killed manually by running killSalome.py, found in your salome installation folder. But beware! This will kill _all_ instances of salome running on your computer! This will be a problem if you are running multiple model generation scripts at once or if you also have the salome GUI open. Obviously, a better way is for your script to kill each specific instance of salome after it has been used. The following is one method (the exact paths to the executable etc. will need to change depending on your installation):

    # Make a subprocess call to the salome executable and store the used port in a text file:
    subprocess.call('/salomedirectory/bin/runAppli -t python script.py --ns-port-log=/absolute/path/salomePort.txt', shell=True)

    # Read in the port number from the text file:
    port_file = open('/absolute/path/salomePort.txt','r')
    killPort = int(port_file.readline())
    port_file.close()

    # Kill the session with the specified port:
    subprocess.call('/salomedirectory/bin/salome/killSalomeWithPort.py %s' % killPort,shell=True)

EDIT: Typo correction to the python os command. EDIT2: I recently found that this method runs into problems when the port log file (here "salomePort.txt", but it can be named arbitrarily) is given with only its relative path. It seems that giving it with its full, absolute path is necessary for this to work.
Python - Accessing subclasses' variables from parent class while calling classmethods Question: I'm trying to build sort of a "mini django model" for working with Django and MongoDB without using the norel Django dist (I don't need ORM access for these...). So, what I'm trying to do is to mimic the standard behavior or "implementation" of the default models of django... that's what I've got so far: File "models.py" (the base):

    from django.conf import settings
    import pymongo

    class Model(object):
        @classmethod
        def db(cls):
            db = pymongo.Connection(settings.MONGODB_CONF['host'], settings.MONGODB_CONF['port'])

        @classmethod
        class objects(object):
            @classmethod
            def all(cls):
                db = Model.db() #Not using yet... not even sure if that's the best way to do it
                print Model.collection

File "mongomodels.py" (the implementation):

    from mongodb import models

    class ModelTest1(models.Model):
        database = 'mymongodb'
        collection = 'mymongocollection1'

    class ModelTest2(models.Model):
        database = 'mymongodb'
        collection = 'mymongocollection2'

File "views.py" (the view):

    from mongomodels import ModelTest1, ModelTest2
    print ModelTest1.objects.all() #Should print 'mymongocollection1'
    print ModelTest2.objects.all() #Should print 'mymongocollection2'

The problem is that it's not accessing the variables from ModelTest1, but from the original Model... what's wrong? Answer: You must give `objects` some sort of link to the class that contains it. Currently, you are just hard-coding it to use `Model`'s attributes. Because you are not instantiating these classes, you will have to use either a decorator or a metaclass to create the `objects` class for you in each subclass of `Model`.
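To sketch the metaclass route — this is my own minimal illustration, not django-norel code, and the `Manager` name is made up:

    class Manager(object):
        def __init__(self, model):
            self.model = model  # link back to the owning class

        def all(self):
            # `collection` now resolves on the subclass, not on Model
            print self.model.collection

    class ModelMeta(type):
        def __new__(mcs, name, bases, attrs):
            cls = type.__new__(mcs, name, bases, attrs)
            cls.objects = Manager(cls)  # give every (sub)class its own manager
            return cls

    class Model(object):
        __metaclass__ = ModelMeta
        collection = None

    class ModelTest1(Model):
        collection = 'mymongocollection1'

    ModelTest1.objects.all()  # prints 'mymongocollection1'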
How can I get a class object/instance from a previous __import__ in Python? Question: Let's suppose you have a module named "module" inside a hierarchical "folder2" with a class named "klass". How can I get the class using square-bracket syntax?

    module = __import__('folder1.folder2.module', fromlist='*')
    sbs_module = module['klass'] #lets say we print a value from the method getValue
    print sbs_module.getValue()

Answer: Replace this line:

    sbs_module = module['klass']

with this line:

    sbs_module = module.klass

The reason is that you need to refer to module variables (in this case the variable named `klass`) using dot notation, not by accessing them by keys. Alternatively you can use two other notations:

* by key:

        sbs_module = vars(module)['klass']

* by attribute:

        sbs_module = getattr(module, 'klass')

but I do not recommend these unless absolutely necessary.
Python/Django: automatically log when exceptions occur, including request info Question: I have created a function log_error(request, traceback), which I call in my exceptions. This writes error information to the database. Now, before I open up every one of my views and add this in an exception handler, is there a way to automatically have all exceptions raise to a function, which then calls this? I have seen this [Python error logging](http://stackoverflow.com/questions/4941543/python-error-logging), which says to write your own version of sys.excepthook. This function is automatically called when there is an exception. I tried this, but my_excepthook was not called even though I copy-pasted the solution into views.py and raised an error. However, I didn't try too hard because it's not getting all the information that I need, anyway. I also need request so I can log information abut the user, url, etc. Maybe that's asking too much? (I'm using Django, but this does not seem like a Django-specific thing) Edit: yes, it is. Answer: J.F Sebastian's suggestion worked. This is a Django solution. In settings.py MIDDLEWARE_CLASSES: (I added it as the last one, not sure if this is right or will cause errors down the line. Works for now.) 'myapp.middleware.ExceptionMiddleware', In myapp.middleware.py: import traceback class ExceptionMiddleware(object): def process_exception(self, request, exception): log_error(traceback, request) That's it. log_error is my function and writes to the database. It also appears from the documentation <https://docs.djangoproject.com/en/dev/howto/error-reporting/> that I can get the local variables as well as the request attributes.
Is there a way to use PhantomJS in Python? Question: I want to use [PhantomJS](http://phantomjs.org/) in [Python](http://www.python.org/). I googled this problem but couldn't find proper solutions. I find `os.popen()` may be a good choice. But I couldn't pass some arguments to it. Using `subprocess.Popen()` may be a proper solution for now. I want to know whether there's a better solution or not. Is there a way to use PhantomJS in Python? Answer: The easiest way to use PhantomJS in python is via Selenium. The simplest installation method is 1. Install [NodeJS](http://nodejs.org/) 2. Using Node's package manager install phantomjs: `npm -g install phantomjs` 3. install selenium (in your virtualenv, if you are using that) After installation, you may use phantom as simple as: from selenium import webdriver driver = webdriver.PhantomJS() # or add to your PATH driver.set_window_size(1024, 768) # optional driver.get('https://google.com/') driver.save_screenshot('screen.png') # save a screenshot to disk sbtn = driver.find_element_by_css_selector('button.gbqfba') sbtn.click() If your system path environment variable isn't set correctly, you'll need to specify the exact path as an argument to `webdriver.PhantomJS()`. Replace this: driver = webdriver.PhantomJS() # or add to your PATH ... with the following: driver = webdriver.PhantomJS(executable_path='/usr/local/lib/node_modules/phantomjs/lib/phantom/bin/phantomjs') References: * <http://selenium-python.readthedocs.org/en/latest/api.html> * [How do I set a proxy for phantomjs/ghostdriver in python webdriver?](http://stackoverflow.com/questions/14699718/how-do-i-set-a-proxy-for-phantomjs-ghostdriver-in-python-webdriver/15699530#15699530) * <http://python.dzone.com/articles/python-testing-phantomjs>
How to pass parameter to Url with Python urlopen Question: I'm currently new to python programming. My problem is that my python program doesn't seem to pass/encode the parameter properly to the ASP file that I've created. This is my sample code:

    import urllib.request
    url = 'http://www.sample.com/myASP.asp'
    full_url = url + "?data='" + str(sentData).replace("'", '"').replace(" ", "%20").replace('"', "%22") + "'"
    print (full_url)
    response = urllib.request.urlopen(full_url)
    print(response)

The output gives me something like:

    http://www.sample.com/myASP.asp?data='{%22mykey%22:%20[{%22idno%22:%20%22id123%22,%20%22name%22:%20%22ej%22}]}'

The ASP file is supposed to insert the acquired querystring into a database. But whenever I check my database, no record is saved. Though if I copy and paste the printed output into my browser's URL bar, the record is saved. Any input on this? TIA Update: Is it possible that python calls my ASP File A but doesn't call my ASP File B? ASP File A is called by python, while ASP File B is called by ASP File A. Whenever I run the url in a browser, the saving goes well. But from python, no saving to the database occurs even though the data passed from python is read by ASP File A. Answer: Use Firebug with Firefox and watch the network traffic when the page is loaded. If it is actually an HTTP POST, which I suspect it is, check the post parameters on that request and do something like this:

    from BeautifulSoup import BeautifulSoup
    import urllib

    post_params = {
        'param1' : 'val1',
        'param2' : 'val2',
        'param3' : 'val3'
    }
    post_args = urllib.urlencode(post_params)

    url = 'http://www.sample.com/myASP.asp'
    fp = urllib.urlopen(url, post_args)
    soup = BeautifulSoup(fp)

If it's actually an HTTP POST, this will work.
PyYAML, how to align map entries? Question: I use PyYAML to output a python dictionary to YAML format:

    import yaml
    d = {
        'bar': {
            'foo': 'hello',
            'supercalifragilisticexpialidocious': 'world'
        }
    }
    print yaml.dump(d, default_flow_style=False)

The output is:

    bar:
      foo: hello
      supercalifragilisticexpialidocious: world

But I would like:

    bar:
      foo                                : hello
      supercalifragilisticexpialidocious : world

Is there a simple solution to that problem, even a suboptimal one? Answer: Ok, here is what I've come up with so far. My solution involves two steps. The first step defines a dictionary representer that pads keys with trailing spaces. With this step alone, I obtain quoted keys in the output; this is why I add a second step to remove all these quotes:

    import yaml

    d = {'bar': {'foo': 'hello', 'supercalifragilisticexpialidocious': 'world'}}

    # FIRST STEP:
    # Define a PyYAML dict representer for adding trailing spaces to keys
    def dict_representer(dumper, data):
        keyWidth = max(len(k) for k in data)
        aligned = {k+' '*(keyWidth-len(k)):v for k,v in data.iteritems()}
        return dumper.represent_mapping('tag:yaml.org,2002:map', aligned)

    yaml.add_representer(dict, dict_representer)

    # SECOND STEP:
    # Remove quotes in the rendered string
    print yaml.dump(d, default_flow_style=False).replace('\'', '')
halt processing of program AND stop traceback? Question: I have a program that needs to stop processing altogether once a certain condition has been triggered - similar to this question: [How do I abort the execution of a Python script?](http://stackoverflow.com/questions/179369/how- do-i-abort-the-execution-of-a-python-script) However, in my case, I want to capture the traceback that sys.exit() generates AND MOST IMPORTANTLY stop processing anything code blocks beyond that point. Can anyone help with this? try: a = 'i' / 1 print 'Does this work yet?' except TypeError: print "I should stop processing now" try: sys.exit() except: pass print 'I did not stop processing ' Answer: I'm not entirely sure what you are asking, as far as I know `sys.exit()` does not generate a traceback. If you try to execute your code as it is, `sys.exit()` raises an unhandled NameError exception because you have not imported the "sys" module. Surrounding the call to sys.exit() with a try/except block will capture the NameError exception, but it will also capture the normal exception raised by sys.exit(), as per the documentation: > Exit from Python. This is implemented by raising the SystemExit exception, > so cleanup actions specified by finally clauses of try statements are > honored, and it is possible to intercept the exit attempt at an outer level. Here is a version that stops processing once it encounters an error. import sys try: a = 'i' / 1 print 'Does this work yet?' except TypeError: print "I should stop processing now" sys.exit() print 'I did not stop processing ' If this code is in your main module, I suggest wrapping your entry-point code up like this: import sys def main(): try: a = 'i' / 1 print 'Does this work yet?' except TypeError: print "I should stop processing now" return 1 print 'I did not stop processing ' return 0 if __name__ == "__main__": sys.exit(main()) This way you only ever have one call to sys.exit(), and it returns an error code indicating if there was a problem. If your code is not in the top-level entry point, I suggest you just throw the exception up until it gets to the main() method, where it can then return the error code value. You can wrap the real exception in a new "StopMyApplicationException" that you have to define. Oh, and if you want to print the traceback, you can use the `traceback` module, something like `traceback.print_exc()` will do the job.
Install missing dependencies for scipy Question: I'm trying to install scipy, but when I finish installing, I can't import it. Is there any GUI for easy installation? The general information: Windows 7 - Python 2.7 - scipy 0.11.0 - I followed the steps in abel.ee.ucla.edu/cvxopt/install/index.html for Windows to install the needed libraries. I ran the setup for scipy using python setup.py install on the command line, in the setup.py directory. The error message: [error link](https://www.dropbox.com/s/2hzrziuj2q18q4s/error.txt) Answer: If you are looking for a Windows installer you can find one here: <http://www.lfd.uci.edu/~gohlke/pythonlibs/> But as it says: these are **unofficial** and **unsupported**, and you might have to reinstall numpy and everything depending on numpy from there too! **Read the instructions very carefully!**
How do I import a pre-existing python project into Eclipse? Question: I am using Eclipse for Python. How do I import an existing project into Eclipse in the current workspace? Thanks. Answer: Create a new project, but don't use the default location — browse to the existing project's location instead. If it's an existing Eclipse project whose project files have correct paths for your system, you can simply open its `.project` file.
Preserve colored output from python os.popen() Question: I'm using Python to build multiple VS solutions from a given directory using msbuild.exe. I only want specific output, but to save work, the best way to interpret the results is with the original ANSI encoding. Example:

    import os
    output = os.popen("MSBuild VSTests.sln").read()
    print output

This prints the output, but without color. Is there any way to preserve this? Answer: You can't, as far as I know. The color is a property of the console and not of the output text itself. It's not like on Linux, where you get escape characters in the text that set the color, so echoing the text back to the console preserves the color. Another side effect of this is that you can't have a separate color for `STDOUT` and `STDERR`, as shown in this SO question: [set stdout/err text color windows C++](http://stackoverflow.com/questions/4920661/set-stdout-err-text-color-windows-c)
python fail to display list of modules Question: I have a new installation of Ubuntu 12.04 64-bit. I have Python 2.7 and installed Python 3.3 as python3; up to now I did not have problems, but the installation is fairly new. When I try to check the modules I have installed using `help('modules')`, I get an error:

    Python 2.7.3 (default, Aug 1 2012, 05:14:39)
    [GCC 4.6.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> help('modules')

    Please wait a moment while I gather a list of all available modules...

    /usr/lib/python2.7/dist-packages/gobject/constants.py:24: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
      import gobject._gobject
    /usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:40: Warning: g_boxed_type_register_static: assertion `g_type_from_name (name) == 0' failed
      from gtk import _gtk

    ** (python:20929): CRITICAL **: pyg_register_boxed: assertion `boxed_type != 0' failed
    /usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:40: Warning: cannot register existing type `GdkDevice'
      from gtk import _gtk
    /usr/lib/python2.7/dist-packages/gtk-2.0/gtk/__init__.py:40: Warning: g_type_get_qdata: assertion `node != NULL' failed
      from gtk import _gtk
    Segmentation fault (core dumped)

I have no idea how to interpret this output or what I should do. Obviously, if there is some problem I want to fix it now, right after the OS installation, rather than wait until I lose work. Has anyone had a similar issue, and what should be done? Answer: This seems to be a known bug of Python on newer Ubuntu versions. It has to do with mixing modules which have different bindings, see here: <http://blog.rabbitvcs.org/archives/312> I also have this problem, and some suggestions on how to solve it can be found here: <https://bugs.launchpad.net/ubuntu/+source/python2.7/+bug/896836>
which module for automating Voice/Speech using python Question: I want to automate voice/speech using Python, but I am not able to find a module that can do it. Answer: Take a look at [pyspeech](http://code.google.com/p/pyspeech/). I think this is what you are looking for: speech automation. But this is only for Windows. Example:

    import speech

    while True:
        phrase = speech.input()
        speech.say("You said %s" % phrase)
        if phrase == "turn off":
            break

Or check out [dragonfly](http://code.google.com/p/dragonfly/). Example:

    from dragonfly.all import Grammar, CompoundRule

    # Voice command rule combining spoken form and recognition processing.
    class ExampleRule(CompoundRule):
        spec = "do something computer"                  # Spoken form of command.
        def _process_recognition(self, node, extras):   # Callback when command is spoken.
            print "Voice command spoken."

    # Create a grammar which contains and loads the command rule.
    grammar = Grammar("example grammar")   # Create a grammar to contain the command rule.
    grammar.add_rule(ExampleRule())        # Add the command rule to the grammar.
    grammar.load()
OS path insert command Question: There are some problems using the PYTHONPATH env variable, therefore I have to figure out some other option in my code to import modules from another folder. I was trying sys.path.insert based on a "TEST_INSTALL_DIR" env variable (value=C:\test). I want to create:

    path = os.getenv("TEST_INSTALL_DIR")#C:\test
    path= path.replace("\\", "/")
    pypath= '%s/python/profile'%(path)#C:/test/python/profile
    pypath= "\'%s\'" %(pypath)# 'C:/test/python/profile'
    print "PYPATH:",pypath
    sys.path.insert(0, pypath)

If I set the path as below, everything works fine:

    sys.path.insert(0, 'C:/test/python/profile')

Am I doing anything completely wrong in my code? Any ideas would help. Thanks. Answer: I think the comment of sberry should be correct. Just to be sure, have you tried the following?

    path = os.getenv("TEST_INSTALL_DIR")#C:\test
    path = path.replace("\\", "/")
    pypath = '%s/python/profile'%(path)#C:/test/python/profile
    print "PYPATH:",pypath
    sys.path.insert(0, pypath)
OOP in GUI test automation scripts Question: I'm working on test automation for a web GUI. It's a large product, so it's important to design a good software architecture. First I decided to isolate test data from test logic. After that, I want to build classes in OOP style so that changes can be handled quickly. I have the following GUI design: ![enter image description here](http://i.stack.imgur.com/DQFqg.png) I think I should isolate the menu, navigation and toolbar into separate classes, so that there is only one place to handle changes. I don't know if that is the best way to handle it. Are there other options to consider? PS: I'm using the Squish GUI Test Automation Tool with Python. Answer: Have you had a look at Django? Edit: I interpreted from the 1st paragraph that the OP was going to roll his own OOP GUI web framework.
can you recover from reassigning __builtins__ in python? Question: If I open up interactive mode and type: __builtins__ = 0 # breaks everything have I completely broken the session? If so, what is going on behind the scenes to assign __builtins__ to the builtin module that can't be handled by the interpreter? If not, how can I recover from this? Just a few of my own attempts to fix it: * Any attempt to import anything results in an error "ImportError __import__ not found" * all functions I might use to do anything other than evaluate numerical expressions are broken * There is another variable __package__ still accessible, but I don't know if/how it can be used. Answer: You can usually get access to anything you need, even when `__builtins__` has been removed. It's just a matter of digging far enough. For example: Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> __builtins__ = 0 >>> open Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'open' is not defined >>> dir Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'dir' is not defined >>> int Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'int' is not defined >>> float Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'float' is not defined >>> >>> __builtins__ = [t for t in ().__class__.__bases__[0].__subclasses__() if 'warning' in t.__name__][0]()._module.__builtins__ >>> >>> open <built-in function open> >>> int <type 'int'> >>> float <type 'float'> >>> For an explanation of what the heck just happened here, read [Eval really is dangerous](http://nedbatchelder.com/blog/201206/eval_really_is_dangerous.html), where similar techniques are used to demonstrate that you cannot safely execute untrusted Python code.
How can I add background color inside html code using beautifulsoup? Question: With BeautifulSoup I get the html code of a site; let's say it's this:

    <!DOCTYPE html>
    <html>
    <head>
    </head>

    <body>
    <h1>My First Heading</h1>
    <p>My first paragraph.</p>
    </body>

    </html>

How can I add the line `body {background-color:#b0c4de;}` inside the **head** tag using BeautifulSoup? Let's say the Python code is:

    #!/usr/bin/python
    import cgi, cgitb, urllib2, sys
    from bs4 import BeautifulSoup

    site = "www.example.com"
    page = urllib2.urlopen(site)
    soup = BeautifulSoup(page)

Answer: You can use:

    soup.head.append('body {background-color:#b0c4de;}')

But you should [create a `<style>` tag](http://www.crummy.com/software/BeautifulSoup/bs4/doc/#beautifulsoup-new-string-and-new-tag) first. For instance:

    head = soup.head
    head.append(soup.new_tag('style', type='text/css'))
    head.style.append('body {background-color:#b0c4de;}')
Why isn't print(\a) working in IDLE Question: **NOTE** I changed the question slightly so that it is not a duplicate anymore. Sorry. I have these time-consuming bioinformatics scripts I am running. I'd like them to sound a beep when they are done. I am on OS X. In a [similar thread](http://stackoverflow.com/questions/6537481/python-making-a-beep-noise) I found that `print '\a'` might work, but in IDLE this just prints `[]`. Why does this not work in IDLE? Answer: The reason it doesn't beep is that `\a` (or `^G`) is the terminal bell code; it's up to the program handling stdout to turn that into a sound. Terminal.app will play a sound (unless you configure it to do "visual bell" instead, or turn it off entirely), but IDLE will not. And, of course, if you're running without a tty, you get nothing. If you don't mind using PyObjC (which comes pre-installed with the Apple-installed Pythons on all recent versions of OS X): import Cocoa Cocoa.NSBeep() Of course this plays the OS X system beep, not the Terminal bell. Besides possibly being a different sound, this means if you disable the bell in Terminal, your script will still beep. (If you really want a Terminal bell, you can always script Terminal via, e.g., ScriptingBridge. But I don't think you care.)
Write Integer to XML Node in minidom Question: I'm using minidom in Python to create an XML formatted log file for completed tasks. Part of the process is to compare the last modified time of a file to the time that that file's data was recorded into the log. I plan on doing that via: if modTime < recTime: do_something() For example, `foo.pdf` was modified at 10:40am, then at 10:46am the log recorded foo.pdf's modified time. So a portion of the log _should_ look something like this: <Printed Orders> <foo.pdf> <Date Recorded> 1352486780 </Date Recorded> </foo.pdf> However, when I attempt to write the times in their integer formats to the XML file I get the error: TypeError: node contents must be a string So, my questions are: Is there a way to write an integer to an XML file? (Preferably using minidom, so as not to clutter my script with more imports.) If there isn't, is there a better way to compare the modified time I pull from the file itself and the recorded time I pull from the XML file than converting the recorded time to a string, writing it to the XML file, pulling the rec time from the XML file later on, and then converting that string back to an integer? Also, in case you're wondering, the plan is to do once-daily purges of a directory, deleting `foo.pdf` and other files based on the comparison of their own mod/rec times. If `foo.pdf` hasn't been modified since it was entered into the log, it will be deleted. Thanks! Answer: Just look at the output you expect. How would XML know if that is an integer or a string? With XML in general, you have to say everything with tags. Thus, everything is treated as a string. You do not need to convert the string to an int, unless the other time is an int, because the time string will not become any longer than it is now for a really long time (over 3,000 years). However, I am not sure why you have so much dislike for doing that conversion. If it's really a big deal, use JSON.
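If you do want integers at the Python level, the usual pattern is to stringify on write and parse on read. A minimal minidom sketch (tag names simplified, since the names in the question contain spaces and dots, which are not legal in XML):

    import xml.dom.minidom as minidom

    doc = minidom.Document()
    entry = doc.createElement('DateRecorded')
    entry.appendChild(doc.createTextNode(str(1352486780)))  # int -> str on write
    doc.appendChild(entry)

    # later, when reading the log back:
    rec_time = int(entry.firstChild.data)   # str -> int on read
    mod_time = 1352486400                   # e.g. from os.path.getmtime(path)
    if mod_time < rec_time:
        print 'not modified since logging; safe to purge'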
Suppressing pjsua output in Python script Question: I'm writing a script that uses curses to produce a main window and a log window at the bottom of the screen. It seems that when I import pjsua it insists on printing to the screen even though I have set log level to 0. Here's what it outputs: 15:49:09.716 os_core_unix.c !pjlib 2.0.1 for POSIX initialized 15:49:09.844 sip_endpoint.c .Creating endpoint instance... 15:49:09.844 pjlib .select() I/O Queue created (0x7f84690decd8) 15:49:09.844 sip_endpoint.c .Module "mod-msg-print" registered 15:49:09.844 sip_transport. .Transport manager created. 15:49:09.845 pjsua_core.c .PJSUA state changed: NULL --> CREATED 15:49:09.896 pjsua_media.c ..NAT type detection failed: Invalid STUN server or server not configured (PJNATH_ESTUNINSERVER) Note it doesn't send this through the logging callback, meaning I have no way to put it in the log window with the rest of my logging information. Can anyone give me some advice on dealing with this output please? Thanks Answer: If you can detect which stream it writes to, e.g. sys.stderr, you could redirect it somewhere by simple assignment of sys.stderr to another open file (or even /dev/null ?).
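A sketch of that idea, with one caveat that is an assumption about pjsua rather than something stated in the answer: since the messages come from C code, they are probably written to the process-level file descriptor, so reassigning `sys.stderr`/`sys.stdout` in Python may not be enough and the descriptor itself may need redirecting:

    import os

    # Temporarily point fd 1 (stdout) at a log file so C-level init
    # messages don't land on the curses screen, then restore it.
    saved_fd = os.dup(1)
    log_fd = os.open('pjsua_init.log', os.O_WRONLY | os.O_CREAT | os.O_APPEND)
    os.dup2(log_fd, 1)
    try:
        import pjsua  # the noisy import/initialization happens here
    finally:
        os.dup2(saved_fd, 1)  # give the terminal back to curses
        os.close(saved_fd)
        os.close(log_fd)

Your own code can then read `pjsua_init.log` and feed the lines into the curses log window; if the output turns out to be Python-level after all, the plain `sys.stderr = open(...)` assignment from the answer suffices.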
PySide QGLBuffer allocate input data Question: I'm trying to run the PySide version of the OpenGL Core Profile Qt demo - [How_to_use_OpenGL_Core_Profile_with_Qt](http://qt-project.org/wiki/How_to_use_OpenGL_Core_Profile_with_Qt). It uses the QGLBuffer.allocate() method (C++ code): float points[] = { -0.5f, -0.5f, 0.0f, 1.0f, 0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.5f, 0.0f, 1.0f }; m_vertexBuffer.allocate( points, 3 * 4 * sizeof( float ) ); The Pythonic way would be: points = [-0.5, -0.5, 0.0, 1.0, 0.5, -0.5, 0.0, 1.0, 0.0, 0.5, 0.0, 1.0] m_vertexBuffer.allocate(points) But when I run this code I get the following error: TypeError: 'PySide.QtOpenGL.QGLBuffer.allocate' called with wrong argument types: PySide.QtOpenGL.QGLBuffer.allocate(list) Supported signatures: PySide.QtOpenGL.QGLBuffer.allocate(void, int = -1) PySide.QtOpenGL.QGLBuffer.allocate(int) I found a [unit test](http://pyside.sourcearchive.com/documentation/1.0.1-1/qglbuffer__test_8py_source.html) that is used for this functionality - it uses QByteArray. data = QByteArray("12345") b.allocate(data) But I don't understand how to convert a Python list to a QByteArray or use allocate with a list. Or is it a bug in the PySide wrapper function? Answer: It seems I found a solution. We should use Python modules like struct or array. For example: from array import * points = [0.5, 1, -0.5] data = array('f', points) # data.tostring() - returns packed data with size of len(data.tostring())
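For completeness, both modules the answer names produce the same packed bytes, which is what allocate appears to want (assuming, per the linked unit test, that it accepts a raw byte string plus a size):

    import struct
    from array import array

    points = [-0.5, -0.5, 0.0, 1.0, 0.5, -0.5, 0.0, 1.0, 0.0, 0.5, 0.0, 1.0]

    data = array('f', points).tostring()                      # via array
    assert data == struct.pack('%df' % len(points), *points)  # via struct

    m_vertexBuffer.allocate(data, len(data))  # m_vertexBuffer from the question's demo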
How to catch all errors and have a reference to what's been thrown? Question: I've read this article <http://docs.python.org/2/tutorial/errors.html> two times to make sure. It avoids the topic entirely. 1. I've tried throwing something which is not an `Exception` or `BaseException`; the interpreter told me I can only throw `BaseException`-esque and old-style classes' objects. 2. So I tried throwing an old-style class' object: * * * >>> class Foo(): ... pass ... >>> try: ... raise Foo() ... except Exception as foo: ... print 'foo %s' % foo ... except: ... print 'not an exception' ... else: ... print 'it\'s all good' ... not an exception >>> Surprise... So, how do I catch all of them and examine what was caught? EDIT: Motivation. 1. Overly defensive coding has never been a bad practice. Suppose I'm writing a daemon program; why wouldn't I try to prevent as many potential dangers as possible by doing something that by other standards is trivial? 2. Programs don't always run in a friendly environment; sometimes other programs try to compromise your program, in which case you do need to prevent all kinds of errors. Sometimes it makes sense. Think of, for example, a CGI implementation - would you like your entire server to go down if some idiot who rented a piece of it put a script in it which threw this "unexpected" error? 3. Some frameworks already catch all errors for you, so getting your message from under the layers of the framework code isn't easy. Throwing something that the framework code isn't catching may be a good strategy to bypass that (this isn't just a theoretical case, I did that before). Answer: ~~ import types try: raise Foo() except (Exception, types.InstanceType) as foo: print 'foo %s' % foo else: print 'it\'s all good' ~~ ## EDIT: While that feels like it should work, it doesn't. Here's a hacky way: import sys try: raise Foo() except: etype, foo, traceback = sys.exc_info() print 'foo %s' % foo else: print 'it\'s all good'
Installing Python setup tools on Unix to run Tweepy Twitter client for Python Question: I am trying to run the Tweepy client for Twitter on my Unix account. Whenever I try to run setup for Tweepy using the command: python setup.py I get this error: Traceback (most recent call last): File "setup.py", line 3, in ? from setuptools import setup, find_packages ImportError: No module named setuptools Now I searched on some forums and found I need to add a setup tools file. The file I found was setuptools-0.6c11-py2.7.egg I FTP'ed this file to my Unix directory where I have the Tweepy client directory and my program which is using Tweepy. Now, whenever I try to install setup tools using the command python setuptools-0.6c11-py2.7.egg I get the error: python setuptools-0.6c11-py2.7.egg File "setuptools-0.6c11-py2.7.egg", line 2 if [ `basename $0` = "setuptools-0.6c11-py2.7.egg" ] ^ SyntaxError: invalid syntax Any clues/suggestions on what I must be doing wrong here? Answer: Don't use setuptools, use [distribute](http://pypi.python.org/pypi/distribute/). Setuptools is old and deprecated. Until Python 3.4 with packaging/distutils2 is around, use distribute, which is a fork of the old setuptools/distutils. Simply download the distribute source tarball, unpack, and run `python setup.py install`. Alternatively, you can download [distribute-setup.py](http://pypi.python.org/pypi/distribute/#distribute-setup-py) and just run it.
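Concretely, the steps the answer describes look something like this (run from your Unix shell; `distribute_setup.py` comes from the PyPI link above, and the `--user` flag is an assumption about a shared account without root access):

    python distribute_setup.py        # installs distribute, providing setuptools
    cd tweepy                         # the unpacked Tweepy source directory
    python setup.py install --user    # drop --user if you can write to site-packages

After that, `from setuptools import setup, find_packages` should resolve and the Tweepy setup script should run.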
Converting a command line program to a GUI Question: I created a python script for a simple call with 19 functions. I'm having problems as it won't work in GUI mode. Do I have to recode the whole thing? I don't know much about GUI. Could someone help me here please. I've tried reading about GUI I'm just lost with the steps. import math q=1 while q == True: print("Please select one option from the menu below: ") print("\t 0: Expression Input") print("\t 1: Numbers Input") print("\t 2: Exit \n") op = int(input("Please enter (0 or 1 or 2): ")) def expression_input(str1): num3 = eval(str1) return num3 if op == 0: expression = input("Please input the expression for calculation: ") num3 = expression_input(expression) print(num3) q = int(input("Press 1 to continue or 0 to exit:")) exit elif op == 1: s = 1 while s == True: opr = float(input("Please select a method for calculation from 1 to 19 for the required function:\n\n\t1:+\n\t2:-\n\t3:*\n\t4:/\n\t5:power\n\t6:root\n\t7:sin\n\t8:cos\n\t9:tan\n\t10:arccos\n\t11:arcsin\n\t12:arctan\n\t13:log \n\t14:ln \n\t15:factorial\n\t16:hex\n\t17:octal\n\t18:decimal\n\t19:binary\n\t")) if opr > 19: print("Wrong choice, please reenter the option") exit elif opr <=4: p=1 numb1=input("Enter the first value for calculation") numb2=input("Enter the second value for calculation") while p==True: if numb1.isdigit(): if numb2.isdigit(): numb1=float(numb1) numb2=float(numb2) if opr == 1: result=numb1+numb2 print (result) if opr == 2: result=numb1-numb2 print (result) if opr == 3: result=numb1*numb2 print (result) if opr == 4: result=numb1/numb2 print (result) p=0 s=0 q = int(input("Press 1 to continue or 0 to exit:")) exit else: numb2=input("Sorry, the second value is not a number,please re enter the second value") else: numb1=input("Sorry,the first value is not a number. Please Re enter the first value") else : t=1 numb1=input("Enter the first value for calculation") while t== True: if numb1.isdigit(): numb1=float(numb1) if opr == 5: numb2=int(input("Enter the power value for calculation")) result = math.pow(numb1,numb2) print (result) if opr == 6: result = math.sqrt(numb1) print (result) if opr == 7: result = math.sin(numb1) print (result) if opr == 8: result = math.cos(numb1) print (result) if opr == 9: result = math.tan(numb1) print (result) if opr == 10: result = math.acos(numb1) print (result) if opr == 11: result = math.asin(numb1) print (result) if opr == 12: result = math.atan(numb1) print (result) if opr == 13: numb2=float(input("Enter the base value for calculation")) result = math.log(numb1,[numb2]) print (result) if opr == 14: result = math.log1p(numb1) print (result) if opr == 15: result = math.factorial(numb1) print (result) if opr == 16: numb1=int(numb1) result = hex(numb1) print (result) if opr == 17: numb1=int(numb1) result = oct(numb1) print (result) if opr == 18: numb1=int(numb1) result = float(numb1) print (result) if opr == 19: numb1=int(numb1) result = bin(numb1) print (result) q = int(input("Press 1 to continue or 0 to exit:")) t=0 p=0 s=0 exit else: numb1=input("Sorry, it is not a number. Please re enter") elif op==2 : q=0 print ("Thank you for using the program. 
Bye!") exit else: print("Wrong option, Please re enter again") s=0 exit Answer: Follow this basic layout for tkinter app: from Tkinter import * class App: def __init__(self, master): frame = Frame(master) frame.pack() self.button = Button(frame, text="QUIT", fg="red", command=frame.quit) self.button.pack(side=LEFT) self.hi_there = Button(frame, text="Hello", command=self.say_hi) self.hi_there.pack(side=LEFT) def say_hi(self): print "hi there, everyone!" root = Tk() app = App(root) root.mainloop() You can use entry widget to get number entries v = StringVar() e = Entry(master, textvariable=v) e.pack() The most important thing here will be to create a button for each function in the **init** function for example.... b1=Button(self,text='+') then bind the clicking of the mouse button to a function of say 'add' function add is: add(self,event): #your code to add to bind b1 use this syntax: b1.bind("<Button-1>",add) b1.pack() Bind all buttons to respective functions and you can display the result using message widget.
Throwing n dice m times, what is the probability of getting at least one six Question: I have the following code trying to solve the problem below: Throwing n dice m times, calculate the probability of getting at least one 6. I know that the exact probability of getting at least 1 six when throwing 2 dice is 11/36. My program below seems to want the probability to be 0.333, which is close, but it should be 11/36, right? It would be great if suggestions build on the standard code I have written, but vectorized code is also appreciated. import random from sys import argv m = int(argv[1]) # performing the experiment with m dice n times n = int(argv[2]) # Throwing m dice n times s = 0 # Counts the number of times m dice show at least one 6 print '%.g dice are thrown %.g times' % (m, n) for i in xrange(n): list = [] # used to clear the list for new die count for q in xrange(m): r = random.randint(1,6)#Picks a random integer on interval [1,6] list.append(r) #appends integer value if len(list) == m: #when list is full, that is when m dice has been thrown for i in xrange(len(list)): #print list if list[i] == 6: #if the list of elements has a six add to the counter s += 1 pass #I want the loop to exit when it finds an element = 6 print 'Number of times one of the n dice show at least one 6: %.g' % s print 'Probability of at least 1 six from %.g dice is = %2.3f' % (m,s/float(n)) I will edit the code and questions if something is unclear. Sample output: Terminal > python one6_ndice.py 2 1000000 2 dice are thrown 1e+06 times Number of times one of the n dice show at least one 6: 3e+05 Probability of at least 1 six from 2 dice is = 0.333 Answer: I think the problem is here: pass #I want the loop to exit when it finds an element = 6 `pass` won't exit the loop. `pass` is the no-operation command; it does nothing at all. You probably want `break` (which exits the loop). BTW, don't call your lists `list` -- that clobbers the builtin `list`. For a more compact expression, you might consider sum(any(random.randint(1,6) == 6 for die in xrange(n)) for trial in xrange(m)) or sum(6 in (random.randint(1,6) for die in range(n)) for trial in range(m))
Python list sorting and indexing Question: I am having trouble understanding and solving the following issue with lists, sorting and indexing. Here is the code example: import random as rdm a=[] for i in range(3): a.append([i,rdm.randint(-5,5)]) print a b = sorted(a,key=lambda a:a[1]) print b c = [] for j in range(len(b)): c.append(b[j]) print c c[0][1] = 0 print a print b print c Notice how changing one value of "c" changed "b" and "a". How can I prevent that from happening? In other words, I do not want the values of "a" changed, no matter what happens to "b" or "c". Answer: Congratulations, welcome to the world of working with references. In your code, when you write: c.append(b[j]) You are adding into `c` a reference to the object `b[j]`. If you want `b` and `c` to be independent of `a`, you must do a **deep copy** of the objects. import copy b = sorted(copy.deepcopy(a), key=lambda a: a[1])
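A quick way to verify that the deep copy breaks the link, built on the question's example (values hardcoded here for reproducibility):

    import copy

    a = [[0, 3], [1, -2], [2, 5]]
    b = sorted(copy.deepcopy(a), key=lambda row: row[1])
    c = []
    for j in range(len(b)):
        c.append(b[j])   # c still shares rows with b, but not with a

    c[0][1] = 0
    print a   # unchanged: [[0, 3], [1, -2], [2, 5]]
    print b   # b[0][1] is now 0, since b and c share rows

If `c` must also be independent of `b`, deep-copy again: `c = copy.deepcopy(b)`.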
python tkinter how to bind key to a button Question: New to programming and especially python and tKinter. How can I create a way to bind the key "s" to the button or the function `sharpen`? Any help would be awesome. from Tkinter import * from PIL import Image, ImageTk, ImageFilter, ImageEnhance class Application(Frame): def __init__(self, master=None): Frame.__init__(self, master) master.wm_title("Image examples") self.pack() self.createWidgets() def createWidgets(self): self.img = Image.open("lineage.jpg") self.photo1 = ImageTk.PhotoImage(self.img.convert("RGB")) self.label1 = Label(self, image=self.photo1) self.label1.grid(row=0, column=0, padx=5, pady=5, rowspan=10) self.photo2 = ImageTk.PhotoImage(self.img.convert("RGB")) self.label2 = Label(self, image=self.photo2) self.label2.grid(row=0, column=1, padx=5, pady=5, rowspan=10) button5 = Button(self, text="Sharpen", command=self.sharpen) button5.grid(row=4, column= 2, sticky = N) def sharpen(self): img2 = self.img.filter(ImageFilter.SHARPEN) self.photo2 = ImageTk.PhotoImage(img2) self.label2 = Label(self, image=self.photo2) self.label2.grid(row=0, column=1, padx=5, pady=5, rowspan=10) Answer: You'll need to make two changes: 1. Add master.bind('s', self.sharpen) to `__init__`. (Binding to the Frame, `self`, does not seem to work.) 2. When `s` is pressed, `self.sharpen(event)` will be called. Since Tkinter will be sending a `Tkinter.Event` object, we must also change the call signature to def sharpen(self, event=None): Thus, when the button is pressed, `event` will be set to the default value, `None`, but when the `s` key is pressed, `event` will be assigned to the `Tkinter.Event` object.
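A stripped-down, runnable illustration of both changes together (the image-processing parts of the question are dropped so this runs on its own):

    from Tkinter import *

    class App(Frame):
        def __init__(self, master=None):
            Frame.__init__(self, master)
            master.bind('s', self.sharpen)   # change 1: bind the key on the toplevel
            self.pack()
            Button(self, text="Sharpen", command=self.sharpen).pack()

        def sharpen(self, event=None):       # change 2: accept the event argument
            print "sharpen called, event =", event

    root = Tk()
    app = App(root)
    root.mainloop()

Pressing `s` (with the window focused) and clicking the button now invoke the same handler.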
Creating .dll from a Python file Question: > **Possible Duplicate:** > [py2exe to generate dlls?](http://stackoverflow.com/questions/550446/py2exe-to-generate-dlls) I'm searching for a way to make a .dll file out of a Python file. I also wonder if I can use the functions in the dll from another language like C++ or assembly? I have been looking at py2exe and pyapp, but neither provides porting to a dll. I'm sorry for this question. I feel sorry because I didn't search StackOverflow before for similar questions / possible duplicates. Answer: You can compile multiple Python scripts into an assembly using clr. import clr clr.CompileModules("modules.dll", "module.py", "module2.py") This is IronPython, but to reference these modules from other languages (e.g. C++ or C#) IronPython would be preferable in my opinion, because it runs as a .NET interpreter that can be understood by other languages that implement .NET. To reference the compiled dll from another IronPython script, for example: import clr clr.AddReference("modules.dll")
Using Python Popen to read the last line Question: I have a simple python program: test.py: import time for i in range(100000): print i time.sleep(0.5) I want to use another program that executes the above one in order to read the last line of output while the above program is counting. import subprocess import time process = subprocess.Popen(["python", "test.py"], stdout=subprocess.PIPE) time.sleep(20) # sleeps an arbitrary time print process.stdout.readlines()[-1] The problem is that `process.stdout.readlines()` waits until test.py finishes execution. Is there any way to read the last line that has been written to the output while the program is executing? Answer: You could use `collections.deque` to save only the last specified number of lines: #!/usr/bin/env python import collections import subprocess import time import threading def read_output(process, append): for line in iter(process.stdout.readline, ""): append(line) def main(): process = subprocess.Popen(["program"], stdout=subprocess.PIPE) # save last `number_of_lines` lines of the process output number_of_lines = 1 q = collections.deque(maxlen=number_of_lines) t = threading.Thread(target=read_output, args=(process, q.append)) t.daemon = True t.start() # time.sleep(20) # print saved lines print ''.join(q), # process is still running # uncomment if you don't want to wait for the process to complete ##process.terminate() # if it doesn't terminate; use process.kill() process.wait() if __name__=="__main__": main() See [other tail-like solutions that print only the portion of the output](http://stackoverflow.com/a/4418891/4279) [See here](http://stackoverflow.com/a/12471855/4279) if your child program uses block-buffering (instead of line-buffering) for its stdout while running non-interactively.
How to make a pyparsing grammar dependent on an instance attribute? Question: I’ve got a pyparsing issue that I have spent days trying to fix, with no luck. Here’s the relevant pseudocode: class Parser(object): def __init__(self): self.multilineCommands = [] self.grammar = <pyparsing grammar> # depends on self.multilineCommands So, I’m trying to get a specific set of doctests to pass. But the tests in question update `self.multilineCommands` after instantiation. Although there are no issues setting the attribute correctly, `self.grammar` seems blind to the change, and fails the tests. However, if I set `self.multilineCommands` inside `__init__()`, then the tests all pass. How can I get `self.grammar` to stay up-to-date with `self.multilineCommands`? # Follow-Up So, part of the issue here is that I’m refactoring code I didn’t write. My experience with pyparsing is also exclusively limited to my work on this project. Pyparsing author Paul McGuire posted a helpful response, but I couldn’t get it to work. It could be an error on my part, but more likely the bigger issue is that I over-simplified the pseudo-code written above. So, I’m going to post the actual code. ### Warning! What you are about to see is uncensored. The sight of it might make you cringe…or maybe even cry. In the original module, this code was just a _single_ piece of a total “god class”. Splitting out what is below into the `Parser` class is just step 1 (and apparently, step 1 was enough to break the tests). * * * class Parser(object): '''Container object pyparsing-related parsing. ''' def __init__(self, *args, **kwargs): r''' >>> c = Cmd() >>> c.multilineCommands = ['multiline'] >>> c.multilineCommands ['multiline'] >>> c.parser.multilineCommands ['multiline'] >>> c.case_insensitive = True >>> c.case_insensitive True >>> c.parser.case_insensitive True >>> print (c.parser('').dump()) [] >>> print (c.parser('/* empty command */').dump()) [] >>> print (c.parser('plainword').dump()) ['plainword', ''] - command: plainword - statement: ['plainword', ''] - command: plainword >>> print (c.parser('termbare;').dump()) ['termbare', '', ';', ''] - command: termbare - statement: ['termbare', '', ';'] - command: termbare - terminator: ; - terminator: ; >>> print (c.parser('termbare; suffx').dump()) ['termbare', '', ';', 'suffx'] - command: termbare - statement: ['termbare', '', ';'] - command: termbare - terminator: ; - suffix: suffx - terminator: ; >>> print (c.parser('barecommand').dump()) ['barecommand', ''] - command: barecommand - statement: ['barecommand', ''] - command: barecommand >>> print (c.parser('COMmand with args').dump()) ['command', 'with args'] - args: with args - command: command - statement: ['command', 'with args'] - args: with args - command: command >>> print (c.parser('command with args and terminator; and suffix').dump()) ['command', 'with args and terminator', ';', 'and suffix'] - args: with args and terminator - command: command - statement: ['command', 'with args and terminator', ';'] - args: with args and terminator - command: command - terminator: ; - suffix: and suffix - terminator: ; >>> print (c.parser('simple | piped').dump()) ['simple', '', '|', ' piped'] - command: simple - pipeTo: piped - statement: ['simple', ''] - command: simple >>> print (c.parser('double-pipe || is not a pipe').dump()) ['double', '-pipe || is not a pipe'] - args: -pipe || is not a pipe - command: double - statement: ['double', '-pipe || is not a pipe'] - args: -pipe || is not a pipe - command: double >>> print (c.parser('command 
with args, terminator;sufx | piped').dump()) ['command', 'with args, terminator', ';', 'sufx', '|', ' piped'] - args: with args, terminator - command: command - pipeTo: piped - statement: ['command', 'with args, terminator', ';'] - args: with args, terminator - command: command - terminator: ; - suffix: sufx - terminator: ; >>> print (c.parser('output into > afile.txt').dump()) ['output', 'into', '>', 'afile.txt'] - args: into - command: output - output: > - outputTo: afile.txt - statement: ['output', 'into'] - args: into - command: output >>> print (c.parser('output into;sufx | pipethrume plz > afile.txt').dump()) ['output', 'into', ';', 'sufx', '|', ' pipethrume plz', '>', 'afile.txt'] - args: into - command: output - output: > - outputTo: afile.txt - pipeTo: pipethrume plz - statement: ['output', 'into', ';'] - args: into - command: output - terminator: ; - suffix: sufx - terminator: ; >>> print (c.parser('output to paste buffer >> ').dump()) ['output', 'to paste buffer', '>>', ''] - args: to paste buffer - command: output - output: >> - statement: ['output', 'to paste buffer'] - args: to paste buffer - command: output >>> print (c.parser('ignore the /* commented | > */ stuff;').dump()) ['ignore', 'the /* commented | > */ stuff', ';', ''] - args: the /* commented | > */ stuff - command: ignore - statement: ['ignore', 'the /* commented | > */ stuff', ';'] - args: the /* commented | > */ stuff - command: ignore - terminator: ; - terminator: ; >>> print (c.parser('has > inside;').dump()) ['has', '> inside', ';', ''] - args: > inside - command: has - statement: ['has', '> inside', ';'] - args: > inside - command: has - terminator: ; - terminator: ; >>> print (c.parser('multiline has > inside an unfinished command').dump()) ['multiline', ' has > inside an unfinished command'] - multilineCommand: multiline >>> print (c.parser('multiline has > inside;').dump()) ['multiline', 'has > inside', ';', ''] - args: has > inside - multilineCommand: multiline - statement: ['multiline', 'has > inside', ';'] - args: has > inside - multilineCommand: multiline - terminator: ; - terminator: ; >>> print (c.parser('multiline command /* with comment in progress;').dump()) ['multiline', ' command /* with comment in progress;'] - multilineCommand: multiline >>> print (c.parser('multiline command /* with comment complete */ is done;').dump()) ['multiline', 'command /* with comment complete */ is done', ';', ''] - args: command /* with comment complete */ is done - multilineCommand: multiline - statement: ['multiline', 'command /* with comment complete */ is done', ';'] - args: command /* with comment complete */ is done - multilineCommand: multiline - terminator: ; - terminator: ; >>> print (c.parser('multiline command ends\n\n').dump()) ['multiline', 'command ends', '\n', '\n'] - args: command ends - multilineCommand: multiline - statement: ['multiline', 'command ends', '\n', '\n'] - args: command ends - multilineCommand: multiline - terminator: ['\n', '\n'] - terminator: ['\n', '\n'] >>> print (c.parser('multiline command "with term; ends" now\n\n').dump()) ['multiline', 'command "with term; ends" now', '\n', '\n'] - args: command "with term; ends" now - multilineCommand: multiline - statement: ['multiline', 'command "with term; ends" now', '\n', '\n'] - args: command "with term; ends" now - multilineCommand: multiline - terminator: ['\n', '\n'] - terminator: ['\n', '\n'] >>> print (c.parser('what if "quoted strings /* seem to " start comments?').dump()) ['what', 'if "quoted strings /* seem to " start 
comments?'] - args: if "quoted strings /* seem to " start comments? - command: what - statement: ['what', 'if "quoted strings /* seem to " start comments?'] - args: if "quoted strings /* seem to " start comments? - command: what ''' # SETTINGS self._init_settings() # GRAMMAR self._init_grammars() # PARSERS # For easy reference to all contained parsers. # Hacky, I know. But I'm trying to fix code # elsewhere at the moment... :P) self._parsers = set() self._init_prefixParser() self._init_terminatorParser() self._init_saveParser() self._init_inputParser() self._init_outputParser() # intermission! :D # (update grammar(s) containing parsers) self.afterElements = \ pyparsing.Optional(self.pipe + pyparsing.SkipTo(self.outputParser ^ self.stringEnd, ignore=self.doNotParse)('pipeTo')) + \ pyparsing.Optional(self.outputParser('output') + pyparsing.SkipTo(self.stringEnd, ignore=self.doNotParse).setParseAction(lambda x: x[0].strip())('outputTo')) self._grammars.add('afterElements') # end intermission self._init_blankLineTerminationParser() self._init_multilineParser() self._init_singleLineParser() self._init_optionParser() # Put it all together: self.mainParser = \ ( self.prefixParser + ( self.stringEnd | self.multilineParser | self.singleLineParser | self.blankLineTerminationParser | self.multilineCommand + pyparsing.SkipTo( self.stringEnd, ignore=self.doNotParse) ) ) self.mainParser.ignore(self.commentGrammars) #self.mainParser.setDebug(True) # And we've got mainParser. # # SPECIAL METHODS # def __call__(self, *args, **kwargs): '''Call an instance for convenient parsing. Example: p = Parser() result = p('some stuff for p to parse') This just calls `self.parseString()`, so it's safe to override should you choose. ''' return self.parseString(*args, **kwargs) def __getattr__(self, attr): # REMEMBER: This is only called when normal attribute lookup fails raise AttributeError('Could not find {0!r} in class Parser'.format(attr)) @property def multilineCommands(self): return self._multilineCommands @multilineCommands.setter def multilineCommands(self, value): value = list(value) if not isinstance(value, list) else value self._multilineCommands = value @multilineCommands.deleter def multilineCommands(self): del self._multilineCommands self._multilineCommands = [] # # PSEUDO_PRIVATE METHODS # def _init_settings(self, *args, **kwargs): self._multilineCommands = [] self.abbrev = True # recognize abbreviated commands self.blankLinesAllowed = False self.case_insensitive = True self.identchars = cmd.IDENTCHARS self.legalChars = u'!#$%.:?@_' + pyparsing.alphanums + pyparsing.alphas8bit self.noSpecialParse = {'ed','edit','exit','set'} self.redirector = '>' # for sending output to file self.reserved_words = [] self.shortcuts = {'?' : 'help' , '!' 
: 'shell', '@' : 'load' , '@@': '_relative_load'} self.terminators = [';'] self.keywords = [] + self.reserved_words def _init_grammars(self, *args, **kwargs): # Basic grammars self.commentGrammars = (pyparsing.pythonStyleComment|pyparsing.cStyleComment).ignore(pyparsing.quotedString).suppress() self.commentInProgress = '/*' + pyparsing.SkipTo( pyparsing.stringEnd ^ '*/' ) self.doNotParse = self.commentGrammars | self.commentInProgress | pyparsing.quotedString self.fileName = pyparsing.Word(self.legalChars + '/\\') self.inputFrom = self.fileName('inputFrom') self.inputMark = pyparsing.Literal('<') self.pipe = pyparsing.Keyword('|', identChars='|') self.stringEnd = pyparsing.stringEnd ^ '\nEOF' # Complex grammars self.multilineCommand = pyparsing.Or([pyparsing.Keyword(c, caseless=self.case_insensitive) for c in self.multilineCommands ])('multilineCommand') self.multilineCommand.setName('multilineCommand') self.oneLineCommand = ( ~self.multilineCommand + pyparsing.Word(self.legalChars))('command') # Hack-y convenience access to grammars self._grammars = { # Basic grammars 'commentGrammars', 'commentInProgress', 'doNotParse', 'fileName', 'inputFrom', 'inputMark', 'noSpecialParse', 'pipe', 'reserved_words', 'stringEnd', # Complex grammars 'multilineCommand', 'oneLineCommand' } self.inputFrom.setParseAction(replace_with_file_contents) self.inputMark.setParseAction(lambda x: '') self.commentGrammars.addParseAction(lambda x: '') if not self.blankLinesAllowed: self.blankLineTerminator = (pyparsing.lineEnd * 2)('terminator') if self.case_insensitive: self.multilineCommand.setParseAction(lambda x: x[0].lower()) self.oneLineCommand.setParseAction(lambda x: x[0].lower()) def _init_all_parsers(self): self._init_prefixParser() self._init_terminatorParser() self._init_saveParser() self._init_inputParser() self._init_outputParser() # intermission! :D # (update grammar(s) containing parsers) self.afterElements = \ pyparsing.Optional(self.pipe + pyparsing.SkipTo(self.outputParser ^ self.stringEnd, ignore=self.doNotParse)('pipeTo')) + \ pyparsing.Optional(self.outputParser('output') + pyparsing.SkipTo(self.stringEnd, ignore=self.doNotParse).setParseAction(lambda x: x[0].strip())('outputTo')) self._grammars.setName('afterElements') self._grammars.add('afterElements') # end intermission # FIXME: # For some reason it's necessary to set this again. # (Otherwise pyparsing results include `outputTo`, but not `output`.) 
self.outputParser('output') self._init_blankLineTerminationParser() self._init_multilineParser() self._init_singleLineParser() self._init_optionParser() def _init_prefixParser(self): self.prefixParser = pyparsing.Empty() self.prefixParser.setName('prefixParser') self._parsers.add('prefixParser') def _init_terminatorParser(self): self.terminatorParser = pyparsing.Or([ (hasattr(t, 'parseString') and t) or pyparsing.Literal(t) for t in self.terminators])('terminator') self.terminatorParser.setName('terminatorParser') self._parsers.add('terminatorParser') def _init_saveParser(self): self.saveparser = (pyparsing.Optional(pyparsing.Word(pyparsing.nums)|'*')('idx') + pyparsing.Optional(pyparsing.Word(self.legalChars + '/\\'))('fname') + pyparsing.stringEnd) self.saveparser.setName('saveParser') self._parsers.add('saveParser') def _init_outputParser(self): # outputParser = (pyparsing.Literal('>>') | (pyparsing.WordStart() + '>') | pyparsing.Regex('[^=]>'))('output') self.outputParser = self.redirector * 2 | (pyparsing.WordStart() + self.redirector) | pyparsing.Regex('[^=]' + self.redirector)('output') self.outputParser.setName('outputParser') self._parsers.add('outputParser') def _init_inputParser(self): # a not-entirely-satisfactory way of distinguishing < as in "import from" from < # as in "lesser than" self.inputParser = self.inputMark + \ pyparsing.Optional(self.inputFrom) + \ pyparsing.Optional('>') + \ pyparsing.Optional(self.fileName) + \ (pyparsing.stringEnd | '|') self.inputParser.ignore(self.commentInProgress) self.inputParser.setName('inputParser') self._parsers.add('inputParser') def _init_blankLineTerminationParser(self): self.blankLineTerminationParser = pyparsing.NoMatch if not self.blankLinesAllowed: self.blankLineTerminationParser = ((self.multilineCommand ^ self.oneLineCommand) + pyparsing.SkipTo(self.blankLineTerminator, ignore=self.doNotParse).setParseAction(lambda x: x[0].strip())('args') + self.blankLineTerminator ) # FIXME: Does this call *really* have to be reassigned into the variable??? self.blankLineTerminationParser = self.blankLineTerminationParser.setResultsName('statement') self.blankLineTerminationParser.setName('blankLineTerminationParser') self._parsers.add('blankLineTerminationParser') def _init_multilineParser(self): #self.multilineParser = self.multilineParser.setResultsName('multilineParser') self.multilineParser = ( ( (self.multilineCommand('multilineCommand') ^ self.oneLineCommand) + pyparsing.SkipTo(self.terminatorParser, ignore=self.doNotParse).setParseAction(lambda x: x[0].strip())('args') + self.terminatorParser )('statement') + pyparsing.SkipTo( self.outputParser ^ self.pipe ^ self.stringEnd, ignore=self.doNotParse ).setParseAction(lambda x: x[0].strip())('suffix') + self.afterElements) self.multilineParser.ignore(self.commentInProgress) self.multilineParser.setName('multilineParser') self._parsers.add('multilineParser') def _init_singleLineParser(self): #self.singleLineParser = self.singleLineParser.setResultsName('singleLineParser') self.singleLineParser = ((self.oneLineCommand + pyparsing.SkipTo(self.terminatorParser ^ self.stringEnd ^ self.pipe ^ self.outputParser, ignore=self.doNotParse).setParseAction(lambda x:x[0].strip())('args'))('statement') + pyparsing.Optional(self.terminatorParser) + self.afterElements) self.singleLineParser.setName('singleLineParser') self._parsers.add('singleLineParser') def _init_optionParser(self): # Different from the other parsers. # This one is based on optparse.OptionParser, # not pyparsing. 
# # It's included here to keep all parsing-related # code under one roof. # TODO: Why isn't this using cmd2's OptionParser? self.optionParser = optparse.OptionParser() self._parsers.add('optionParser') def parseString(self, *args, **kwargs): '''Parses a string using `self.mainParser`.''' return self.mainParser.parseString(*args, **kwargs) * * * There you have it. The ugly truth. ☺ **Edited 2012-11-12:** I incorrectly used the term “class attribute” in the original title for this question. It‘s a silly mistake, and I apologize for any confusion. It has now been corrected to “instance attribute”. Answer: Define `self.multilineCommands` as a Forward, like this: self.multlineCommands = Forward() and then define the rest of your grammar using `self.multilineCommands` as you would normally. In your tests, “inject” different expressions for `self.multilineCommands` using the `<<` operator: self.multilineCommands << (test expression 1) Then when you parse using the overall grammar, your pyparsing test expression will be used where ever `self.multilineCommands` is. (**Note:** Be sure to enclose the right-hand side in `()`’s to guard against precedence of operations problems due to my unfortunate choice of `<<` for this operator. In the next release of pyparsing, I’ll add support for `<<=` and deprecate `<<` for this operation, which will resolve most of this problem.) **EDIT** Here is a flexible parser that has a write-only property that will accept a list of strings to take as allowed keywords. The parser itself is a simple function call parser that parses functions that take a single numeric argument, or the constants `pi` or `π` or `e`. # coding=UTF-8 from pyparsing import * class FlexParser(object): def __init__(self, fixedPart): self._dynamicExpr = Forward() self.parser = self._dynamicExpr + fixedPart def _set_keywords(self, kw_list): # accept a list of words, convert it to a MatchFirst of # Keywords defined using those words self._dynamicExpr << (MatchFirst(map(Keyword, kw_list))) keywords = property(fset=_set_keywords) def parseString(self,s): return self.parser.parseString(s) E = CaselessKeyword("e").setParseAction(replaceWith(2.71828)) PI = (CaselessKeyword("pi") | "π").setParseAction(replaceWith(3.14159)) numericLiteral = PI | E | Regex(r'[+-]?\d+(\.\d*)?').setParseAction(lambda t:float(t[0])) fp = FlexParser('(' + numericLiteral + ')') fp.keywords = "sin cos tan asin acos atan sqrt".split() print fp.parseString("sin(30)") print fp.parseString("cos(π)") print fp.parseString("sqrt(-1)") Now change the keywords by just assigning a word list to the `keywords` property. The setter method converts the list to a MatchFirst of Keywords. Note that now, parsing "sin(30)" will raise an exception: fp.keywords = "foo bar baz boo".split() print fp.parseString("foo(1000)") print fp.parseString("baz(e)") print fp.parseString("bar(1729)") print fp.parseString("sin(30)") # raises a ParseException