Search Results

Search found 13693 results on 548 pages for 'python metaprogramming'.


  • Return a list of important Python modules in a script?

    - by Jono Bacon
    Hi All, I am writing a program that categorizes a list of Python files by which modules they import. As such I need to scan the collection of .py files and return a list of which modules they import. As an example, if one of the files I import has the following lines:

        import os
        import sys, gtk

    I would like it to return:

        ["os", "sys", "gtk"]

    I played with modulefinder and wrote:

        from modulefinder import ModuleFinder

        finder = ModuleFinder()
        finder.run_script('testscript.py')
        print 'Loaded modules:'
        for name, mod in finder.modules.iteritems():
            print '%s ' % name,

    but this returns more than just the modules used in the script. As an example, in a script which merely has:

        import os
        print os.getenv('USERNAME')

    the modules returned from the ModuleFinder script are:

        tokenize heapq __future__ copy_reg sre_compile _collections cStringIO _sre functools
        random cPickle __builtin__ subprocess cmd gc __main__ operator array select _heapq
        _threading_local abc _bisect posixpath _random os2emxpath tempfile errno pprint
        binascii token sre_constants re _abcoll collections ntpath threading opcode _struct
        _warnings math shlex fcntl genericpath stat string warnings UserDict inspect repr
        struct sys pwd imp getopt readline copy bdb types strop _functools keyword thread
        StringIO bisect pickle signal traceback difflib marshal linecache itertools
        dummy_thread posix doctest unittest time sre_parse os pdb dis ...

    whereas I just want it to return 'os', as that was the module used in the script. Can anyone help me achieve this?
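    One approach not taken from the question: parse each file with the standard-library ast module, which reports only the import statements actually written in that file rather than everything loaded at runtime. A minimal sketch:

        import ast

        def top_level_imports(path):
            """Return the module names a file imports directly (not transitively)."""
            with open(path) as f:
                tree = ast.parse(f.read(), filename=path)
            modules = set()
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    modules.update(alias.name for alias in node.names)
                elif isinstance(node, ast.ImportFrom) and node.module:
                    modules.add(node.module)
            return sorted(modules)

        # e.g. top_level_imports('testscript.py') -> ['gtk', 'os', 'sys']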

    Read the article

  • Best way to convert a Unicode URL to ASCII (UTF-8 percent-escaped) in Python?

    - by benhoyt
    I'm wondering what's the best way -- or if there's a simple way with the standard library -- to convert a URL with Unicode chars in the domain name and path to the equivalent ASCII URL, encoded with the domain as IDNA and the path %-encoded, as per RFC 3986.

    I get from the user a URL in UTF-8. So if they've typed in http://➡.ws/♥ I get 'http://\xe2\x9e\xa1.ws/\xe2\x99\xa5' in Python. And what I want out is the ASCII version: 'http://xn--hgi.ws/%E2%99%A5'.

    What I do at the moment is split the URL up into parts via a regex, and then manually IDNA-encode the domain, and separately encode the path and query string with different urllib.quote() calls.

        # url is UTF-8 here, eg: url = u'http://➡.ws/♥'.encode('utf-8')
        match = re.match(r'([a-z]{3,5})://(.+\.[a-z0-9]{1,6})'
                         r'(:\d{1,5})?(/.*?)(\?.*)?$', url, flags=re.I)
        if not match:
            raise BadURLException(url)
        protocol, domain, port, path, query = match.groups()
        try:
            domain = unicode(domain, 'utf-8')
        except UnicodeDecodeError:
            return ''  # bad UTF-8 chars in domain
        domain = domain.encode('idna')
        if port is None:
            port = ''
        path = urllib.quote(path)
        if query is None:
            query = ''
        else:
            query = urllib.quote(query, safe='=&?/')
        url = protocol + '://' + domain + port + path + query
        # url is ASCII here, eg: url = 'http://xn--hgi.ws/%E3%89%8C'

    Is this correct? Any better suggestions? Is there a simple standard-library function to do this?
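    Not part of the question: a rough standard-library alternative (Python 2, matching the snippet above) that leans on urlparse for the splitting instead of a hand-rolled regex. Treat it as a sketch rather than a drop-in replacement; it assumes an http(s)-style URL with a host.

        import urllib
        import urlparse

        def asciify_url(url):
            """url is a UTF-8 byte string; returns an ASCII-only equivalent."""
            parts = urlparse.urlsplit(url)
            host = parts.hostname.decode('utf-8').encode('idna')
            netloc = host if parts.port is None else '%s:%d' % (host, parts.port)
            path = urllib.quote(parts.path)
            query = urllib.quote(parts.query, safe='=&')
            return urlparse.urlunsplit((parts.scheme, netloc, path, query, parts.fragment))

        # asciify_url('http://\xe2\x9e\xa1.ws/\xe2\x99\xa5')
        # should give the ASCII form the question asks for: 'http://xn--hgi.ws/%E2%99%A5'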

    Read the article

  • How to compare 2 lists and merge them in Python/MySQL?

    - by NJTechGuy
    I want to merge data. Following are my MySQL tables. I want to use Python to traverse through a list of both lists (one with dupe = 'x' and the other with null dupes). For instance:

        a b c d e f key dupe
        --------------------
        1 d c f k l 1 x
        2 g h j 1
        3 i h u u 2
        4 u r t 2 x

    From the above sample table, the desired output is:

        a b c d e f key dupe
        --------------------
        2 g c h k j 1
        3 i r h u u 2

    What I have so far:

        import string, os, sys
        import MySQLdb
        from EncryptedFile import EncryptedFile

        enc = EncryptedFile( os.getenv("HOME") + '/.py-encrypted-file')
        user = enc.getValue("user")
        pw = enc.getValue("pw")

        db = MySQLdb.connect(host="127.0.0.1", user=user, passwd=pw, db=user)
        cursor = db.cursor()
        cursor2 = db.cursor()
        cursor.execute("select * from delThisTable where dupe is null")
        cursor2.execute("select * from delThisTable where dupe is not null")
        result = cursor.fetchall()
        result2 = cursor2.fetchall()

        for cursorFieldname in cursor.description:
            for cursorFieldname2 in cursor2.description:
                if cursorFieldname[0] == cursorFieldname2[0]:
                    ### How do I compare the record with same key value and update the original row null field value with the non-null value from the duplicate? Please fill this void...

        cursor.close()
        cursor2.close()
        db.close()

    Thanks guys!
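    A sketch of the merge step itself (not the poster's code; it assumes fetchall() returned plain tuples in the column order a, b, c, d, e, f, key, dupe, and it reuses result/result2 from the snippet above):

        # result holds the rows with dupe IS NULL, result2 the rows with dupe = 'x'
        KEY_INDEX = 6

        dupes_by_key = dict((row[KEY_INDEX], row) for row in result2)

        merged = []
        for row in result:
            dupe_row = dupes_by_key.get(row[KEY_INDEX])
            if dupe_row is None:
                merged.append(row)
                continue
            # keep the original value where present, otherwise take the duplicate's value
            merged.append(tuple(orig if orig is not None else extra
                                for orig, extra in zip(row, dupe_row)))

    Writing the merged rows back would then be a separate UPDATE per key.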

    Read the article

  • Python/Sqlite program, write as browser app or desktop app?

    - by ChrisC
    I am in the planning stages of rewriting an Access db I wrote several years ago as a full-fledged program. I have very slight experience coding, but not enough to call myself a programmer by far. I'll definitely be learning as I go, so I'd like to keep everything as simple as possible. I've decided on Python and SQLite for my program, but I need help with my next decision. Here is my situation:

    1) It'll be a desktop program, run locally on each machine, all Windows.
    2) I would really like a nice looking GUI with colors, nice screens, menus, lists, etc.
    3) I'm thinking about using a browser interface because (a) from what I've read, browser apps can look really great, and (b) I understand there are lots of free tools to assist in setting up the GUI/GUI code with drag-and-drop tools, so that helps my "keep it simple" goal.
    4) I want the program to be totally portable so it runs completely from one single folder on a user's PC, with no installation(s) needed for it to run. (If I did it as a browser app, isn't there the possibility that a user's browser settings could affect or break the app? How likely is this?)

    For my situation, should I make it a desktop app or a browser app?

    Read the article

  • How can you get the call tree with python profilers?

    - by Oliver
    I used to use a nice Apple profiler that is built into the System Monitor application. As long as your C++ code was compiled with debug information, you could sample your running application and it would print out an indented tree telling you what percent of the parent function's time was spent in this function (and the body vs. other function calls). For instance, if main called function_1 and function_2, function_2 calls function_3, and then main calls function_3:

        main (100%, 1% in function body):
            function_1 (9%, 9% in function body):
            function_2 (90%, 85% in function body):
                function_3 (100%, 100% in function body)
            function_3 (1%, 1% in function body)

    I would see this and think, "Something is taking a long time in the code in the body of function_2. If I want my program to be faster, that's where I should start."

    Does anyone know how I can most easily get this exact profiling output for a python program? I've seen people say to do this:

        import cProfile, pstats
        prof = cProfile.Profile()
        prof = prof.runctx("real_main(argv)", globals(), locals())
        stats = pstats.Stats(prof)
        stats.sort_stats("time")  # Or cumulative
        stats.print_stats(80)  # 80 = how many to print

    but it's quite messy compared to that elegant call tree. Please let me know if you can easily do this, it would help quite a bit. Cheers!
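    Not from the question, but the closest the standard library gets to a caller/callee breakdown is pstats' print_callers/print_callees views; a sketch building on the snippet above:

        import cProfile, pstats

        prof = cProfile.Profile()
        prof.runctx("real_main(argv)", globals(), locals())  # assumes real_main and argv exist, as above

        stats = pstats.Stats(prof)
        stats.sort_stats("cumulative")
        stats.print_callees()  # for each function: the functions it called and the time spent in them
        stats.print_callers()  # the reverse view

    Third-party tools such as gprof2dot can render the same pstats data as a graphical call graph.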

    Read the article

  • Python's asyncore to periodically send data using a variable timeout. Is there a better way?

    - by Nick Sonneveld
    I wanted to write a server that a client could connect to and receive periodic updates without having to poll. The problem I have experienced with asyncore is that if you do not return true when dispatcher.writable() is called, you have to wait until after the asyncore.loop has timed out (default is 30s). The two ways I have tried to work around this are 1) reduce the timeout to a low value, or 2) query connections for when they will next update and generate an adequate timeout value. However, if you refer to 'Select Law' in 'man 2 select_tut', it states, "You should always try to use select() without a timeout." Is there a better way to do this? Twisted maybe? I wanted to try and avoid extra threads. I'll include the variable timeout example here:

        #!/usr/bin/python
        import time
        import socket
        import asyncore

        # in seconds
        UPDATE_PERIOD = 4.0

        class Channel(asyncore.dispatcher):
            def __init__(self, sock, sck_map):
                asyncore.dispatcher.__init__(self, sock=sock, map=sck_map)
                self.last_update = 0.0  # should update immediately
                self.send_buf = ''
                self.recv_buf = ''

            def writable(self):
                return len(self.send_buf) > 0

            def handle_write(self):
                nbytes = self.send(self.send_buf)
                self.send_buf = self.send_buf[nbytes:]

            def handle_read(self):
                print 'read'
                print 'recv:', self.recv(4096)

            def handle_close(self):
                print 'close'
                self.close()

            # added for variable timeout
            def update(self):
                if time.time() >= self.next_update():
                    self.send_buf += 'hello %f\n' % (time.time())
                    self.last_update = time.time()

            def next_update(self):
                return self.last_update + UPDATE_PERIOD

        class Server(asyncore.dispatcher):
            def __init__(self, port, sck_map):
                asyncore.dispatcher.__init__(self, map=sck_map)
                self.port = port
                self.sck_map = sck_map
                self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
                self.bind( ("", port))
                self.listen(16)
                print "listening on port", self.port

            def handle_accept(self):
                (conn, addr) = self.accept()
                Channel(sock=conn, sck_map=self.sck_map)

            # added for variable timeout
            def update(self):
                pass

            def next_update(self):
                return None

        sck_map = {}
        server = Server(9090, sck_map)
        while True:
            next_update = time.time() + 30.0
            for c in sck_map.values():
                c.update()  # <-- fill write buffers
                n = c.next_update()
                #print 'n:', n
                if n is not None:
                    next_update = min(next_update, n)
            _timeout = max(0.1, next_update - time.time())
            asyncore.loop(timeout=_timeout, count=1, map=sck_map)

    Read the article

  • django + south + python: strange behavior when using a text string received as a parameter in a func

    - by carlosescri
    Hello, this is my first question. I'm trying to execute a SQL query in Django (a South migration):

        from django.db import connection
        # ...

        class Migration(SchemaMigration):
            # ...
            def transform_id_to_pk(self, table):
                try:
                    db.delete_primary_key(table)
                except:
                    pass
                finally:
                    cursor = connection.cursor()
                    # This does not work
                    cursor.execute('SELECT MAX("id") FROM "%s"', [table])
                    # I don't know if this works.
                    try:
                        minvalue = cursor.fetchone()[0]
                    except:
                        minvalue = 1
                    seq_name = table + '_id_seq'
                    db.execute('CREATE SEQUENCE "%s" START WITH %s OWNED BY "%s"."id"',
                               [seq_name, minvalue, table])
                    db.execute('ALTER TABLE "%s" ALTER COLUMN id SET DEFAULT nextval("%s")',
                               [table, seq_name + '::regclass'])
                    db.create_primary_key(table, ['id'])
            # ...

    I use this function like this:

        self.transform_id_to_pk('my_table_name')

    So it should:

    - Find the biggest existing ID or 0 (it crashes)
    - Create a sequence name
    - Create the sequence
    - Update the ID field to use the sequence
    - Update the ID as PK

    But it crashes, and the error says:

        File "../apps/accounting/migrations/0003_setup_tables.py", line 45, in forwards
          self.delegation_table_setup(orm)
        File "../apps/accounting/migrations/0003_setup_tables.py", line 478, in delegation_table_setup
          self.transform_id_to_pk('accounting_delegation')
        File "../apps/accounting/migrations/0003_setup_tables.py", line 20, in transform_id_to_pk
          cursor.execute(u'SELECT MAX("id") FROM "%s"', [table.encode('utf-8')])
        File "/Library/Python/2.6/site-packages/django/db/backends/util.py", line 19, in execute
          return self.cursor.execute(sql, params)
        psycopg2.ProgrammingError: relation "E'accounting_delegation'" does not exist
        LINE 1: SELECT MAX("id") FROM "E'accounting_delegation'"
                                      ^

    I have shortened the file paths for convenience. What does that "E'accounting_delegation'" mean? How could I get rid of it? Thank you! Carlos.
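    A sketch of the usual fix (not from the question): table and sequence names are identifiers, so they cannot be passed as query parameters -- the driver renders parameters as quoted string literals, which is where the E'...' in the error comes from. Build the identifier part of the SQL with string formatting and keep real values as parameters. The table name here is taken from the traceback and assumed to be trusted, not user input.

        from django.db import connection

        cursor = connection.cursor()
        table = 'accounting_delegation'  # identifier, assumed trusted

        cursor.execute('SELECT MAX("id") FROM "%s"' % table)
        minvalue = cursor.fetchone()[0] or 1

        seq_name = table + '_id_seq'
        cursor.execute('CREATE SEQUENCE "%s" START WITH %%s OWNED BY "%s"."id"' % (seq_name, table),
                       [minvalue])
        cursor.execute("""ALTER TABLE "%s" ALTER COLUMN id SET DEFAULT nextval('%s')""" % (table, seq_name))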

    Read the article

  • Python modules not updating after restarting the main module.

    - by Ian
    I've recently come back to a project having had to stop for about 6 months, and after reinstalling my operating system and coming back to it I'm having all kinds of crazy things happen. I made sure to install the same version (2.6) of Python that I was using before.

    It started by giving me a strange Tkinter error that I hadn't had trouble with before; the program is relatively simple, and the 2 or 3 bugs that were left when I quit were documented and weren't related to the interface. Things got even weirder when the same error would pop up even after I had removed the offending section of code. In fact, the traceback pointed to a line that didn't even exist in the module it was referencing, e.g. line 262 when the module was only 200 lines long.

    After just starting a completely new file for the main module and copy/pasting into it, it finally recognized that the offending code was gone and I stopped getting the error, only to find that any updates to the code I made in another module didn't show up when I restarted the program through the shell. (I didn't forget to save.) After fiddling with this, of course, the old interface error came back, only in a different section of code that had been working previously. In fact, if I revert back to the files I had six months ago, the program works fine. As soon as I change anything in the main module, however, the interface bug comes back.

    Here's the original error:

        Exception in Tkinter callback
        Traceback (most recent call last):
          File "C:\Python26\lib\lib-tk\Tkinter.py", line 1410, in __call__
            return self.func(*args)
          File "C:\PyStuff\interface.py", line 202, in dispOne
            __main__.top.destroy()
          File "C:\Python26\lib\lib-tk\Tkinter.py", line 1938, in destroy
            self.tk.call('destroy', self._w)
        TclError: can't invoke "destroy" command: application has been destroyed

    I'm guessing something else is going on here other than my own poor programming. Anyone have any ideas?

    Read the article

  • Why is my simple python gtk+cairo program running so slowly/stutteringly?

    - by synapz
    My program draws circles moving on the window. I think I must be missing some basic gtk/cairo concept because it seems to be running too slowly/stutteringly for what I am doing. Any ideas? Thanks for any help!

        #!/usr/bin/python
        import gtk
        import gtk.gdk as gdk
        import math
        import random
        import gobject

        # The number of circles and the window size.
        num = 128
        size = 512

        # Initialize circle coordinates and velocities.
        x = []
        y = []
        xv = []
        yv = []
        for i in range(num):
            x.append(random.randint(0, size))
            y.append(random.randint(0, size))
            xv.append(random.randint(-4, 4))
            yv.append(random.randint(-4, 4))

        # Draw the circles and update their positions.
        def expose(*args):
            cr = darea.window.cairo_create()
            cr.set_line_width(4)
            for i in range(num):
                cr.set_source_rgb(1, 0, 0)
                cr.arc(x[i], y[i], 8, 0, 2 * math.pi)
                cr.stroke_preserve()
                cr.set_source_rgb(1, 1, 1)
                cr.fill()
                x[i] += xv[i]
                y[i] += yv[i]
                if x[i] > size or x[i] < 0:
                    xv[i] = -xv[i]
                if y[i] > size or y[i] < 0:
                    yv[i] = -yv[i]

        # Self-evident?
        def timeout():
            darea.queue_draw()
            return True

        # Initialize the window.
        window = gtk.Window()
        window.resize(size, size)
        window.connect("destroy", gtk.main_quit)
        darea = gtk.DrawingArea()
        darea.connect("expose-event", expose)
        window.add(darea)
        window.show_all()

        # Self-evident?
        gobject.idle_add(timeout)
        gtk.main()

    Read the article

  • How do I search & replace all occurrences of a string in a ms word doc with python?

    - by Mark
    Hello there, I am pretty stumped at the moment. Based on http://stackoverflow.com/questions/1045628/can-i-use-win32-com-to-replace-text-inside-a-word-document I was able to code a simple template system that generates Word docs out of a template Word doc (in Python). My problem is that text in "Text Fields" is not found that way. Even in Word itself there is no option to search everything -- you actually have to choose between "Main Document" and "Text Fields". Being new to the Windows world, I tried to browse the VBA docs for it but found no help (probably due to "text field" being a very common term).

        word.Documents.Open(f)
        wdFindContinue = 1
        wdReplaceAll = 2

        find_str = '\{\{(*)\}\}'
        find = word.Selection.Find
        find.Execute(find_str, False, False, True, False, False, \
                     True, wdFindContinue, False, False, False)
        while find.Found:
            t = word.Selection.Text.__str__()
            r = process_placeholder(t, answer_data, question_data)
            if type(r) == dict:
                errors.append(r)
            else:
                find.Execute(t, False, True, False, False, False, \
                             True, False, False, r, wdReplaceAll)

    This is the relevant portion of my code. I have been able to get around all the other problems by myself so far (hint: if you want to replace strings with more than 256 chars, you have to do it via the clipboard, etc.). Hope someone can help me.

    Read the article

  • Python: Does one of these examples waste more memory?

    - by orokusaki
    In a Django view function which uses manual transaction committing, I have:

        context = RequestContext(request, data)
        transaction.commit()
        return render_to_response('basic.html', data, context)
        # Returns a Django ``HttpResponse`` object which is similar to a dictionary.

    I think it is a better idea to do this:

        context = RequestContext(request, data)
        response = render_to_response('basic.html', data, context)
        transaction.commit()
        return response

    If the page isn't rendered correctly in the second version, the transaction is rolled back. This seems like the logical way of doing it, albeit there won't likely be many exceptions at that point in the function once the application is in production. But... I fear that this might cost more, and the pattern will be repeated through a number of functions since the application is heavy with custom transaction handling, so now is the time to figure it out.

    If the HttpResponse instance is in memory already (at the point of render_to_response()), then what does another reference cost? When the function ends, doesn't the reference (the response variable) go away, so that when Django is done converting the HttpResponse into a string for output Python can immediately garbage collect it? Is there any reason I would want to use the first version (other than "It's 1 less line of code.")?

    Read the article

  • I don't like Python functions that take two or more iterables. Is it a good idea?

    - by Xavier Ho
    This question came from looking at this question on Stackoverflow:

        def fringe8((px, py), (x1, y1, x2, y2)):

    Personally, it's been one of my pet peeves to see a function that takes two arguments with fixed-number iterables (like a tuple) or two or more dictionaries (Like in the Shotgun API). It's just hard to use, because of all the verbosity and double-bracketed enclosures. Wouldn't this be better:

        >>> class Point(object):
        ...     def __init__(self, x, y):
        ...         self.x = x
        ...         self.y = y
        ...
        >>> class Rect(object):
        ...     def __init__(self, x1, y1, x2, y2):
        ...         self.x1 = x1
        ...         self.y1 = y1
        ...         self.x2 = x2
        ...         self.y2 = y2
        ...
        >>> def fringe8(point, rect):
        ...     # ...
        ...
        >>>
        >>> point = Point(2, 2)
        >>> rect = Rect(1, 1, 3, 3)
        >>>
        >>> fringe8(point, rect)

    Is there a situation where taking two or more iterable arguments is justified? Obviously the standard itertools Python library needs that, but I can't see it being pretty in maintainable, flexible code design.
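    Not part of the question, but a middle ground the standard library offers is collections.namedtuple, which keeps lightweight tuple-style values while allowing named attribute access; a sketch (the fringe8 body here is a placeholder, not the referenced implementation):

        from collections import namedtuple

        Point = namedtuple('Point', 'x y')
        Rect = namedtuple('Rect', 'x1 y1 x2 y2')

        def fringe8(point, rect):
            # body unchanged, just written with point.x, rect.x1, ... instead of unpacking
            pass

        fringe8(Point(2, 2), Rect(1, 1, 3, 3))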

    Read the article

  • My python program always brings down my internet connection after several hours running, how do I debug and fix this problem?

    - by Shane
    I'm writing a python script checking/monitoring the status of several servers/websites (response time and similar stuff). It's a GUI program and I use a separate thread to check each server/website; the basic structure of each thread is an infinite while loop that requests the site every random time period (15 to 30 seconds), and once there's a change in a website/server the thread starts a new thread to do a thorough check (requesting more pages and similar stuff).

    The problem is, my internet connection always gets blocked/jammed/messed up after several hours of running this script. The situation is: from my script's side I get "urlopen error timed out" each time it requests a page, and from my Firefox browser's side I cannot open any site. But the weird thing is, the moment I close my script my internet connection comes back on immediately, which means now I can surf any site through my browser -- so it must be the script causing all the problems.

    I've checked the program carefully and even use del to delete any connection once it's used, but I still get the same problem. I only use urllib2, urllib and mechanize to make network requests.

    Anybody knows why such a thing happens? How do I debug this problem? Is there a tool or something to check my network status once such a situation occurs? It's really been bugging me for a while...

    By the way, I'm behind a VPN. Does it have something to do with this problem? Although I don't think so, because my network always comes back once the script is closed, and the VPN connection never drops (as it appears) during the whole process.

    Read the article

  • How do I structure my tests with Python unittest module?

    - by persepolis
    I'm trying to build a test framework for automated web testing in Selenium and unittest, and I want to structure my tests into distinct scripts. So I've organised it as follows:

    base.py -- This will contain, for now, the base Selenium test case class for setting up a session.

        import unittest
        from selenium import webdriver

        # Base Selenium Test class from which all test cases inherit.
        class BaseSeleniumTest(unittest.TestCase):
            def setUp(self):
                self.browser = webdriver.Firefox()

            def tearDown(self):
                self.browser.close()

    main.py -- I want this to be the overall test suite from which all the individual tests are run.

        import unittest
        import test_example

        if __name__ == "__main__":
            SeTestSuite = test_example.TitleSpelling()
            unittest.TextTestRunner(verbosity=2).run(SeTestSuite)

    test_example.py -- An example test case; it might be nice to make these run on their own too.

        from base import BaseSeleniumTest

        # Test the spelling of the title
        class TitleSpelling(BaseSeleniumTest):
            def test_a(self):
                self.assertTrue(False)

            def test_b(self):
                self.assertTrue(True)

    The problem is that when I run main.py I get the following error:

        Traceback (most recent call last):
          File "H:\Python\testframework\main.py", line 5, in <module>
            SeTestSuite = test_example.TitleSpelling()
          File "C:\Python27\lib\unittest\case.py", line 191, in __init__
            (self.__class__, methodName))
        ValueError: no such test method in <class 'test_example.TitleSpelling'>: runTest

    I suspect this is due to the very special way in which unittest runs, and I must have missed a trick on how the docs expect me to structure my tests. Any pointers?
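    A sketch of how main.py is usually written for this layout (not from the question): build the suite with a TestLoader instead of instantiating the TestCase class directly.

        import unittest
        import test_example

        if __name__ == "__main__":
            loader = unittest.TestLoader()
            # or: loader.loadTestsFromTestCase(test_example.TitleSpelling)
            suite = loader.loadTestsFromModule(test_example)
            unittest.TextTestRunner(verbosity=2).run(suite)

    Adding a plain unittest.main() guard to test_example.py would also let each test file run on its own.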

    Read the article

  • Can you dynamically combine multiple conditional functions into one in Python?

    - by erich
    I'm curious if it's possible to take several conditional functions and create one function that checks them all (e.g. the way a generator takes a procedure for iterating through a series and creates an iterator).

    The basic usage case would be when you have a large number of conditional parameters (e.g. "max_a", "min_a", "max_b", "min_b", etc.), many of which could be blank. They would all be passed to this "function creating" function, which would then return one function that checked them all. Below is an example of a naive way of doing what I'm asking:

        def combining_function(max_a, min_a, max_b, min_b, ...):
            f_array = []
            if max_a is not None:
                f_array.append( lambda x: x.a < max_a )
            if min_a is not None:
                f_array.append( lambda x: x.a > min_a )
            ...
            return lambda x: all( [ f(x) for f in f_array ] )

    What I'm wondering is: what is the most efficient way to achieve what's being done above? It seems like executing a function call for every function in f_array would create a decent amount of overhead, but perhaps I'm engaging in premature/unnecessary optimization. Regardless, I'd be interested to see if anyone else has come across usage cases like this and how they proceeded. Also, if this isn't possible in Python, is it possible in other (perhaps more functional) languages?

    Read the article

  • How can I override list methods to do vector addition and subtraction in python?

    - by Bobble
    I originally implemented this as a wrapper class around a list, but I was annoyed by the number of operator() methods I needed to provide, so I had a go at simply subclassing list. This is my test code:

        class CleverList(list):
            def __add__(self, other):
                copy = self[:]
                for i in range(len(self)):
                    copy[i] += other[i]
                return copy

            def __sub__(self, other):
                copy = self[:]
                for i in range(len(self)):
                    copy[i] -= other[i]
                return copy

            def __iadd__(self, other):
                for i in range(len(self)):
                    self[i] += other[i]
                return self

            def __isub__(self, other):
                for i in range(len(self)):
                    self[i] -= other[i]
                return self

        a = CleverList([0, 1])
        b = CleverList([3, 4])
        print('CleverList does vector arith: a, b, a+b, a-b = ', a, b, a+b, a-b)
        c = a[:]
        print('clone test: e = a[:]: a, e = ', a, c)
        c += a
        print('OOPS: augmented addition: c += a: a, c = ', a, c)
        c -= b
        print('OOPS: augmented subtraction: c -= b: b, c, a = ', b, c, a)

    Normal addition and subtraction work in the expected manner, but there are problems with the augmented addition and subtraction. Here is the output:

        >>>
        CleverList does vector arith: a, b, a+b, a-b = [0, 1] [3, 4] [3, 5] [-3, -3]
        clone test: e = a[:]: a, e = [0, 1] [0, 1]
        OOPS: augmented addition: c += a: a, c = [0, 1] [0, 1, 0, 1]
        Traceback (most recent call last):
          File "/home/bob/Documents/Python/listTest.py", line 35, in <module>
            c -= b
        TypeError: unsupported operand type(s) for -=: 'list' and 'CleverList'
        >>>

    Is there a neat and simple way to get augmented operators working in this example?
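    A side note not taken from the question: slicing a list subclass returns a plain list, which is why c += a above falls back to list extension and c -= b raises TypeError. Making the copy with the subclass constructor keeps the overridden operators in play; a sketch reusing a and b from the snippet above:

        c = CleverList(a)   # instead of c = a[:], which produces a plain list
        c += a              # CleverList.__iadd__ -> [0, 2]
        c -= b              # CleverList.__isub__ -> [-3, -2]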

    Read the article

  • In Python, how to make sure database connection will always close before leaving a code block?

    - by Cawas
    I want to prevent the database connection from being left open any longer than necessary, because this code will run on an intensively used server and people here already told me database connections should always be closed as soon as possible.

        def do_something_that_needs_database ():
            dbConnection = MySQLdb.connect(host=args['database_host'], user=args['database_user'],
                                           passwd=args['database_pass'], db=args['database_tabl'],
                                           cursorclass=MySQLdb.cursors.DictCursor)
            dbCursor = dbConnection.cursor()
            dbCursor.execute('SELECT COUNT(*) total FROM table')
            row = dbCursor.fetchone()
            if row['total'] == 0:
                print 'error: table have no records'
                dbCursor.execute('UPDATE table SET field="%s"', whatever_value)
                return None
            print 'table is ok'
            dbCursor.execute('UPDATE table SET field="%s"', another_value)
            # a lot more of workflow done here
            dbConnection.close()
            # even more stuff would come below

    I believe that leaves a database connection open when there is no row in the table, though I'm still really not sure how it works. Anyway, maybe that is bad design in the sense that I could open and close a DB connection after each small block of execute. And sure, I could just add a close right before the return in that case... But how could I always properly close the DB without having to worry if I have that return, or a raise, or continue, or whatever in the middle?

    I'm thinking of something like a code block, similar to using try, like in the following suggestion, which obviously doesn't work:

        def do_something_that_needs_database ():
            dbConnection = MySQLdb.connect(host=args['database_host'], user=args['database_user'],
                                           passwd=args['database_pass'], db=args['database_tabl'],
                                           cursorclass=MySQLdb.cursors.DictCursor)
            try:
                dbCursor = dbConnection.cursor()
                dbCursor.execute('SELECT COUNT(*) total FROM table')
                row = dbCursor.fetchone()
                if row['total'] == 0:
                    print 'error: table have no records'
                    dbCursor.execute('UPDATE table SET field="%s"', whatever_value)
                    return None
                print 'table is ok'
                dbCursor.execute('UPDATE table SET field="%s"', another_value)
                # again, that same lot of line codes done here
            except ExitingCodeBlock:
                closeDb(dbConnection)
            # still, that "even more stuff" from before would come below

    I don't think there is anything similar to ExitingCodeBlock for an exception, though I know there is the try ... else construct, but I hope Python already has a similar feature... Or maybe someone can suggest a paradigm move and tell me this is awful and highly advise me to never do that. Maybe this is just something not to worry about and to let MySQLdb handle, or is it?
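    A sketch of the standard pattern (not the poster's code): a try/finally block -- or contextlib.closing, shown here -- guarantees the close runs whether the block ends with a return, a raise, or by falling through. It assumes args and whatever_value are available as in the snippets above.

        from contextlib import closing
        import MySQLdb
        import MySQLdb.cursors

        def do_something_that_needs_database(args, whatever_value):
            with closing(MySQLdb.connect(host=args['database_host'], user=args['database_user'],
                                         passwd=args['database_pass'], db=args['database_tabl'],
                                         cursorclass=MySQLdb.cursors.DictCursor)) as dbConnection:
                dbCursor = dbConnection.cursor()
                dbCursor.execute('SELECT COUNT(*) total FROM table')
                row = dbCursor.fetchone()
                if row['total'] == 0:
                    dbCursor.execute('UPDATE table SET field=%s', (whatever_value,))
                    return None
                # ... rest of the workflow; the connection is closed however we leave the block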

    Read the article

  • Intelligent search and generation of Java code, preferrably using Python?

    - by Ipsquiggle
    Basically, I do lots of one-off code generation, large-scale refactorings, etc. etc. in Java. My tool language of choice is Python, but I'll take whatever solutions you can offer. Here is a simplified illustration of what I would like, in pseudocode.

    Generating an implementation for an interface:

        search within my project:
            for each Interface as iName:
                write class(name=iName+"Impl", implements=iName)
                search within the body of iName:
                    for each Method as mName:
                        write method(name=mName, body="// TODO implement this...")

    Basically, the tool I'm searching for would allow me to:

    - parse files according to their Java structure ("search for interfaces")
    - search for words contextualized by language elements and types ("variables of type SomeClass", "doStuff() method calls on SomeClass instances")
    - run searches with structural context ("within the body of the current result")
    - easily replace or generate code (with helpers to generate, as above, or functions for replacing: "rename the interface to Foo", "insert the line Blah.Blah()", etc.)

    The point is, I don't want to spend a lot of time writing these things, as they are usually throwaway. But sometimes I need something just a little smarter than what grep offers. It wouldn't be too hard to write up a simplistic version of this, but if I'm going to use something like this at all, I'd expect it to be robust. Any suggestions of a tool/library that will help me accomplish this?

    Read the article

  • How to debug python del self.callbacks[s][cid] keyError when the error message does not indicate where in my code the error is

    - by lkloh
    In a python program I am writing, I get an error saying

        Traceback (most recent call last):
          File "/Applications/Canopy.app/appdata/canopy-1.4.0.1938.macosx-x86_64/Canopy.app/Contents/lib/python2.7/lib-tk/Tkinter.py", line 1470, in __call__
            return self.func(*args)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/backends/backend_tkagg.py", line 413, in button_release_event
            FigureCanvasBase.button_release_event(self, x, y, num, guiEvent=event)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 1808, in button_release_event
            self.callbacks.process(s, event)
          File "/Users/lkloh/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/matplotlib/cbook.py", line 525, in process
            del self.callbacks[s][cid]
        KeyError: 103

    Do you have any idea how I can debug this / what could be wrong? The error message does not point to anywhere in code I have personally written. I get the error message only after I close my GUI window, but I want to fix it even though it does not break the functionality of my code. The error is part of a very big program I am writing, so I cannot post all my code, but below is the code I think is relevant:

        def save(self, event):
            self.getSaveAxes()
            self.save_connect()

        def getSaveAxes(self):
            saveFigure = figure(figsize=(8,1))
            saveFigure.clf()

            # size of save buttons
            rect_saveHeaders = [0.04,0.2,0.2,0.6]
            rect_saveHeadersFilterParams = [0.28,0.2,0.2,0.6]
            rect_saveHeadersOverride = [0.52,0.2,0.2,0.6]
            rect_saveQuit = [0.76,0.2,0.2,0.6]

            #initalize axes
            saveAxs = {}
            saveAxs['saveHeaders'] = saveFigure.add_axes(rect_saveHeaders)
            saveAxs['saveHeadersFilterParams'] = saveFigure.add_axes(rect_saveHeadersFilterParams)
            saveAxs['saveHeadersOverride'] = saveFigure.add_axes(rect_saveHeadersOverride)
            saveAxs['saveQuit'] = saveFigure.add_axes(rect_saveQuit)

            self.saveAxs = saveAxs
            self.save_connect()
            self.saveFigure = saveFigure
            show()

        def save_connect(self):
            #set buttons
            self.bn_saveHeaders = Button(self.saveAxs['saveHeaders'], 'Save\nHeaders\nOnly')
            self.bn_saveHeadersFilterParams = Button(self.saveAxs['saveHeadersFilterParams'], 'Save Headers &\n Filter Parameters')
            self.bn_saveHeadersOverride = Button(self.saveAxs['saveHeadersOverride'], 'Save Headers &\nOverride Data')
            self.bn_saveQuit = Button(self.saveAxs['saveQuit'], 'Quit')

            #connect buttons to functions they trigger
            self.cid_saveHeaders = self.bn_saveHeaders.on_clicked(self.save_headers)
            self.cid_savedHeadersFilterParams = self.bn_saveHeadersFilterParams.on_clicked(self.save_headers_filterParams)
            self.cid_saveHeadersOverride = self.bn_saveHeadersOverride.on_clicked(self.save_headers_override)
            self.cid_saveQuit = self.bn_saveQuit.on_clicked(self.save_quit)

        def save_quit(self, event):
            self.save_disconnect()
            close()

    Read the article

  • How do I subtract two dates in Django/Python?

    - by Ryan
    Hi! I'm working on a little fitness tracker in order to teach myself Django. I want to graph my weight over time, so I've decided to use the Python Google Charts Wrapper. Google charts require that you convert your date into an x coordinate. To do this I want to take the number of days in my dataset, by subtracting the first weigh-in from the last weigh-in, and then use that to figure out the x coords (for example, I could divide 100 by the result and increment the x coord by the resulting number for each y coord).

    Anyway, I need to figure out how to subtract Django datetime objects from one another, and so far I am striking out on both Google and here at the Stack. I know PHP, but have never gotten a handle on OO programming, so please excuse my ignorance. Here is what my models look like:

        class Goal(models.Model):
            goal_weight = models.DecimalField("Goal Weight", max_digits=4, decimal_places=1)
            target_date = models.DateTimeField("Target Date to Reach Goal")
            set_date = models.DateTimeField("When did you set your goal?")
            comments = models.TextField(blank=True)

            def __unicode__(self):
                return unicode(self.goal_weight)

        class Weight(models.Model):
            """Weight at a given date and time """
            goal = models.ForeignKey(Goal)
            weight = models.DecimalField("Current Weight", max_digits=4, decimal_places=1)
            weigh_date = models.DateTimeField("Date of Weigh-In")
            comments = models.TextField(blank=True)

            def __unicode__(self):
                return unicode(self.weight)

            def recorded_today(self):
                return self.date.date() == datetime.date.today()

    Any ideas on how to proceed in the view? Thanks so much!
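    A sketch of the subtraction itself (not from the question): subtracting two datetime values, including the values stored in Django DateTimeFields, gives a datetime.timedelta. It assumes at least two Weight rows exist for the model above.

        first = Weight.objects.order_by('weigh_date')[0]
        last = Weight.objects.order_by('-weigh_date')[0]

        span = last.weigh_date - first.weigh_date   # a datetime.timedelta
        total_days = span.days                      # whole days between first and last weigh-in

        # e.g. to place a weigh-in w on a 0-100 x axis (when total_days > 0):
        # x = 100.0 * (w.weigh_date - first.weigh_date).days / total_days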

    Read the article

  • Python: Calling method A from class A within class B?

    - by Tommo
    There are a number of questions that are similar to this, but none of the answers hits the spot -- so please bear with me. I am trying my hardest to learn OOP using Python, but I keep running into errors (like this one) which just make me think this is all pointless and it would be easier to just use methods. Here is my code:

        class TheGUI(wx.Frame):
            def __init__(self, title, size):
                wx.Frame.__init__(self, None, 1, title, size=size)
                # The GUI is made ...
                textbox.TextCtrl(panel1, 1, pos=(67,7), size=(150, 20))
                button1.Bind(wx.EVT_BUTTON, self.button1Click)
                self.Show(True)

            def button1Click(self, event):
                #It needs to do the LoadThread function!

        class WebParser:
            def LoadThread(self, thread_id):
                #It needs to get the contents of textbox!

        TheGUI = TheGUI("Text RPG", (500,500))
        TheParser = WebParser
        TheApp.MainLoop()

    So the problem I am having is that the GUI class needs to use a function that is in the WebParser class, and the WebParser class needs to get text from a textbox that exists in the GUI class. I know I could do this by passing the objects around as parameters, but that seems utterly pointless; there must be a more logical way to do this that doesn't make using classes seem so pointless. Thanks in advance!
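    A sketch of one way to wire the two classes together (not the poster's code, and the classes here are plain stand-ins rather than real wx widgets): give each object a reference to the other instead of reaching for globals.

        class WebParser(object):
            def __init__(self, gui):
                self.gui = gui                       # reference to the GUI instance

            def load_thread(self, thread_id):
                text = self.gui.get_textbox_value()  # ask the GUI for the textbox contents
                print "parsing thread %s for %r" % (thread_id, text)

        class TheGUI(object):                        # stand-in for the wx.Frame subclass
            def __init__(self):
                self.textbox_value = "hello"
                self.parser = WebParser(self)        # the GUI owns a parser that knows about it

            def get_textbox_value(self):
                return self.textbox_value            # with a real wx.TextCtrl this would call GetValue()

            def on_button_click(self, event=None):
                self.parser.load_thread(thread_id=1)

        gui = TheGUI()
        gui.on_button_click()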

    Read the article

  • Python : How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application. It works and responds pretty well under low load, but under high load, around 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: Profiling the application gives me the time taken by functions, and this time increases under high load. The time consumed may be due to complex calculation or to waiting for the CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and simply need to add more CPU power, OR I might have some stupid code taking the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec with 100 users. I get an average time of 36 seconds over 100 requests vs. 1.25 sec for 1 request.

    More info on the configuration:

    - Nginx + uWSGI with 4 workers
    - No database used; responses come from a REST API
    - On the 1st hit the REST API response gets cached, so it doesn't make a difference
    - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All I found were casual snippets of code that perform profiling.
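    A sketch (not from the question) of separating CPU time from wall-clock time for a block of code: resource.getrusage reports the CPU seconds the process has actually consumed, so a large gap between wall time and CPU time points at waiting rather than computing. The resource module is Unix-only, and do_expensive_thing is a hypothetical stand-in for the code under test.

        import resource
        import time

        def cpu_seconds():
            usage = resource.getrusage(resource.RUSAGE_SELF)
            return usage.ru_utime + usage.ru_stime      # user + system CPU time

        wall_start = time.time()
        cpu_start = cpu_seconds()

        do_expensive_thing()                            # hypothetical function under test

        wall = time.time() - wall_start
        cpu = cpu_seconds() - cpu_start
        print "wall %.3fs, cpu %.3fs (%.0f%% on CPU)" % (wall, cpu, 100.0 * cpu / wall)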

    Read the article

  • How to match words as if in a dictionary, based on len-1 or len+1? Python

    - by pearbear
    If I have a word 'raqd', how would I use Python to have a spellcheck, so to speak, to find the word 'rad' as an option in the 'spellcheck'? What I've been trying to do is this:

        def isbettermatch(keysplit, searchword):
            i = 0
            trues = 0
            falses = 0
            lensearchwords = len(searchword)
            keysplits = copy.deepcopy(keysplit)
            searchwords = copy.deepcopy(searchword)
            #print keysplit, searchwords
            if len(keysplits) == len(searchwords)-1:
                i = 0
                while i < len(keysplits):
                    j = 0
                    while j < lensearchwords:
                        if keysplits[i] == searchwords[j]:
                            trues +=1
                            searchwords.pop(j)
                            lensearchwords = len(searchwords)
                        elif keysplits[i] != searchwords[j]:
                            falses +=1
                        j +=1
                    i +=1
                if trues >= len(searchwords)-1:
                    #print "-------------------------------------------------------", keysplits
                    return True

    keysplit is a list like ['s', 'p', 'o', 'i', 'l'], for example, and the searchword would be a list ['r', 'a', 'q', 'd']. If the function returns True, then it would print the keyword that matches, e.g. 'rad' for the searchword 'raqd'.

    I need to find all possible matches for the searchword with a single letter addition or deletion. So, for example, 'raqd' would have an option to be 'rad', and 'poted' could be 'posted' or 'potted'. Above is what I have tried, but it is not working well at all. Help much appreciated!
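    A sketch of a different approach (not the poster's code): generate every string that is one deletion or one insertion away from the search word, then keep the ones that appear in the word list. The dictionary argument here is any collection of known words.

        import string

        def one_edit_matches(word, dictionary):
            """dictionary is any set/collection of known words, e.g. {'rad', 'posted', 'potted'}."""
            deletions = set(word[:i] + word[i+1:] for i in range(len(word)))
            insertions = set(word[:i] + ch + word[i:]
                             for i in range(len(word) + 1)
                             for ch in string.ascii_lowercase)
            return sorted((deletions | insertions) & set(dictionary))

        print one_edit_matches('raqd', {'rad', 'rat'})          # ['rad']
        print one_edit_matches('poted', {'posted', 'potted'})   # ['posted', 'potted']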

    Read the article

  • Elegant ways to print out a bunch of instance attributes in python 2.6?

    - by wds
    First some background. I'm parsing a simple file format, and wish to re-use the results in Python code later, so I made a very simple class hierarchy and wrote the parser to construct objects from the original records in the text files I'm working from. At the same time I'd like to load the data into a legacy database, the loader files for which take a simple tab-separated format.

    The most straightforward way would be to just do something like:

        print "%s\t%s\t....".format(record.id, record.attr1, len(record.attr1), ...)

    Because there are so many columns to print out though, I thought I'd use the Template class to make it a bit easier to see what's what, i.e.:

        templ = Template("$id\t$attr1\t$attr1_len\t...")

    And I figured I could just use the record in place of the map used by a substitute call, with some additional keywords for derived values:

        print templ.substitute(record, attr1_len=len(record.attr1), ...)

    Unfortunately this fails, complaining that the record instance does not have an attribute __getitem__. So my question is twofold:

    - do I need to implement __getitem__, and if so how?
    - is there a more elegant way for something like this where you just need to output a bunch of attributes you already know the name for?
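    A sketch (not from the question): Template.substitute only needs the first argument to behave like a mapping, so one option is to give the record class a __getitem__ that falls back to attribute lookup; passing vars(record) instead would also work without touching the class. Record here is a hypothetical stand-in for the parser's own class.

        from string import Template

        class Record(object):                      # hypothetical stand-in for the parsed record
            def __init__(self, id, attr1):
                self.id = id
                self.attr1 = attr1

            def __getitem__(self, key):            # lets the instance act as a mapping
                try:
                    return getattr(self, key)
                except AttributeError:
                    raise KeyError(key)

        templ = Template("$id\t$attr1\t$attr1_len")
        record = Record(7, "hello")
        print templ.substitute(record, attr1_len=len(record.attr1))
        # vars(record) would also work as the mapping if you prefer not to modify the class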

    Read the article

  • pygame double buffering

    - by BaldDude
    I am trying to use double buffering in pygame. What I'm trying to do is display a red then a green screen, and switch from one to the other. Unfortunately, all I have is a black screen. I looked through many sites, but have been unable to find a solution. Any help would be appreciated.

        import pygame, sys
        from pygame.locals import *

        RED = (255, 0, 0)
        GREEN = ( 0, 255, 0)

        bob = 1

        pygame.init()
        #DISPLAYSURF = pygame.display.set_mode((500, 400), 0, 32)
        DISPLAYSURF = pygame.display.set_mode((1920, 1080), pygame.OPENGL | pygame.DOUBLEBUF | pygame.HWSURFACE | pygame.FULLSCREEN)

        glClear(GL_COLOR_BUFFER_BIT)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()

        running = True
        while running:
            if bob==1:
                #pygame.draw.rect(DISPLAYSURF, RED, (0, 0, 1920, 1080))
                #pygame.display.flip()
                glBegin(GL_QUADS)
                glColor3f(1.0, 0.0, 0.0)
                glVertex2f(-1.0, 1.0)
                glVertex2f(-1.0, -1.0)
                glVertex2f(1.0, -1.0)
                glVertex2f(1.0, 1.0)
                glEnd()
                pygame.dis
                bob = 0
            else:
                #pygame.draw.rect(DISPLAYSURF, GREEN, (0, 0, 1920, 1080))
                #pygame.display.flip()
                glBegin(GL_QUADS)
                glColor3f(0.0, 1.0, 0.0)
                glVertex2f(-1.0, 1.0)
                glVertex2f(-1.0, -1.0)
                glVertex2f(1.0, -1.0)
                glVertex2f(1.0, 1.0)
                glEnd()
                pygame.dis
                bob = 1
            for event in pygame.event.get():
                if event.type == pygame.QUIT:
                    running = False
                elif event.type == KEYDOWN:
                    if event.key == K_ESCAPE:
                        running = False

        pygame.quit()
        sys.exit()

    I'm using Python 2.7 and my code needs to be OS independent. Thanks for your help.

    Read the article
