Search Results

Search found 20677 results on 828 pages for 'python team'.


  • Spawning a thread in python

    - by morpheous
    I have a series of 'tasks' that I would like to run in separate threads. The tasks are to be performed by separate modules, each containing the business logic for processing its tasks. Given a tuple of tasks, I would like to be able to spawn a new thread for each module as follows:

        from foobar import alice, bob, charles

        data = getWorkData()
        # these are enums (which I just found Python doesn't support natively) :(
        tasks = (alice, bob, charles)
        for task in tasks:
            # Ok, just found out Python doesn't have a switch - @#$%!
            # yet another thing I'll need help with then ...
            switch case alice:
                # spawn thread here - how ?
                alice.spawnWorker(data)

    No prizes for guessing I am still thinking in C++. How can I write this in a Pythonic way, using Pythonic 'enums' and 'switch'es, and be able to run a module in a new thread? Obviously, the modules will all have a class derived from an ABC (abstract base class) called Plugin. The spawnWorker() method will be declared on the Plugin interface and defined in the classes implemented in the various modules. Maybe there is a better (i.e. Pythonic) way of doing all this? I'd be interested in knowing.
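
    A minimal sketch of one Pythonic shape for this (foobar, alice/bob/charles, getWorkData() and spawnWorker() are the question's hypothetical names): the tuple of modules itself replaces the enum and switch, because every module exposes the same interface, and threading.Thread does the spawning.

        import threading
        from foobar import alice, bob, charles   # the question's hypothetical modules

        def run_all(tasks, data):
            # no dispatch needed: each module exposes the same spawnWorker() entry point
            threads = [threading.Thread(target=task.spawnWorker, args=(data,))
                       for task in tasks]
            for t in threads:
                t.start()
            for t in threads:
                t.join()          # wait for every worker to finish

        run_all((alice, bob, charles), getWorkData())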

    Read the article

  • Calling Python app/script from C#

    - by Maxim Z.
    I'm building an ASP.NET MVC (C#) site where I want to implement STV (Single Transferable Vote) voting. I've used OpenSTV for voting scenarios before, with great success, but I've never used it programmatically. The OpenSTV Google Code project offers a Python script that allows usage of OpenSTV from other applications: import sys sys.path.append("path to openstv package") from openstv.ballots import Ballots from openstv.ReportPlugins.TextReport import TextReport from openstv.plugins import getMethodPlugins (ballotFname, method, reportFname) = sys.argv[1:] methods = getMethodPlugins("byName") f = open(reportFname, "w") try: b = Ballots() b.loadUnknown(ballotFname) except Exception, msg: print >> f, ("Unable to read ballots from %s" % ballotFname) print >> f, msg sys.exit(-1) try: e = methods[method](b) e.runElection() except Exception, msg: print >> f, ("Unable to count votes using %s" % method) print >> f, msg sys.exit(-1) try: r = TextReport(e, outputFile=f) r.generateReport(); except Exception, msg: print >> f, "Unable to write report" print >> f, msg sys.exit(-1) f.close() Is there a way for me to make such a Python call from my C# ASP.NET MVC site? If so, how? Thanks in advance!

    Read the article

  • Parallel Tasking Concurrency with Dependencies on Python like GNU Make

    - by Brian Bruggeman
    I'm looking for a method or possibly a philosophical approach for how to do something like GNU Make within python. Currently, we utilize makefiles to execute processing because the makefiles are extremely good at parallel runs with changing single option: -j x. In addition, gnu make already has the dependency stacks built into it, so adding a secondary processor or the ability to process more threads just means updating that single option. I want that same power and flexibility in python, but I don't see it. As an example: all: dependency_a dependency_b dependency_c dependency_a: dependency_d stuff dependency_b: dependency_d stuff dependency_c: dependency_e stuff dependency_d: dependency_f stuff dependency_e: stuff dependency_f: stuff If we do a standard single thread operation (-j 1), the order of operation might be: dependency_f -> dependency_d -> dependency_a -> dependency_b -> dependency_e \ -> dependency_c For two threads (-j 2), we might see: 1: dependency_f -> dependency_d -> dependency_a -> dependency_b 2: dependency_e -> dependency_c Does anyone have any suggestions on either a package already built or an approach? I'm totally open, provided it's a pythonic solution/approach. Please and Thanks in advance!
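
    One possible shape for a make-like scheduler, sketched with the standard library only (target names mirror the example Makefile; stuff() is a placeholder): keep a dependency dict, and on each pass submit every target whose prerequisites are done, with the worker count playing the role of -j.

        from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

        # deps maps each target to the targets it depends on (like the Makefile above)
        deps = {
            'a': {'d'}, 'b': {'d'}, 'c': {'e'},
            'd': {'f'}, 'e': set(), 'f': set(),
        }

        def stuff(target):
            print('building %s' % target)

        def build_all(deps, jobs=2):
            done, running = set(), {}
            with ThreadPoolExecutor(max_workers=jobs) as pool:
                while len(done) < len(deps):
                    # submit every target whose dependencies are all satisfied
                    for target, needs in deps.items():
                        if target not in done and target not in running and needs <= done:
                            running[target] = pool.submit(stuff, target)
                    finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
                    for target, fut in list(running.items()):
                        if fut in finished:
                            fut.result()          # re-raise any build error
                            done.add(target)
                            del running[target]

        build_all(deps, jobs=2)

    Existing packages such as doit, luigi or snakemake already provide make-style dependency graphs with a parallelism knob, so they may be worth a look before rolling your own.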

    Read the article

  • Python opening a file and putting list of names on separate lines

    - by Jeremy Borton
    I am trying to write a python program using Python 3 I have to open a text file and read a list of names, print the list, sort the list in alphabetical order and then finally re-print the list. There's a little more to it than that BUT the problem I am having is that I'm supposed to print the list of names with each name on a separate line Instead of printing each name on a separate line, it prints the list all on one line. How can I fix this? def main(): #create control loop keep_going = 'y' #Open name file name_file = open('names.txt', 'r') names = name_file.readlines() name_file.close() #Open outfile outfile = open('sorted_names.txt', 'w') index = 0 while index < len(names): names[index] = names[index].rstrip('\n') index += 1 #sort names print('original order:', names) names.sort() print('sorted order:', names) #write names to outfile for item in names: outfile.write(item + '\n') #close outfile outfile.close() #search names while keep_going == 'y' or keep_going == 'Y': search = input('Enter a name to search: ') if search in names: print(search, 'was found in the list.') keep_going = input('Would you like to do another search Y for yes: ') else: print(search, 'was not found.') keep_going = input('Would you like to do another search Y for yes: ') main()
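
    The list prints on one line because print(names) shows the list object's repr; printing each element (or joining with newlines) gives one name per line. A tiny Python 3 sketch with made-up names:

        names = ['Charlie', 'Alice', 'Bob']

        print('original order:')
        for name in names:
            print(name)              # one name per line

        names.sort()
        print('sorted order:')
        print('\n'.join(names))      # join() is the equivalent one-liner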

    Read the article

  • Python function correctly/incorrectly?

    - by Anthony Kernan
    I'm just starting to use Python, as a learning experience. I know the basic logic of programming. I have a function in Python that is running every time, even when it's not supposed to. I use an if statement at the beginning of the function. I don't know why this if statement is not working, and I'm confused. I have another function that is similar and works correctly. Am I missing something simple? Here's the function that is not working...

        def check_artist_art():
            if os.path.exists("/tmp/artistinfo") and open("/tmp/artistinfo").read() != title:
                #if artist == "":
                if os.path.exists(home + "/.artist"):
                    os.remove(home + "/.artist")
                if os.path.exists("/tmp/artistinfo"):
                    os.remove("/tmp/artistinfo")
                print artist
                return False
            else:
                os.path.exists("/tmp/artistinfo") and open("/tmp/artistinfo").read() == artist
                return False
            return True

    And this is the similar function that is working correctly..

        def check_album():
            if os.path.exists("/tmp/albuminfo") and open("/tmp/albuminfo").read() != album:
                if os.path.exists(home + "/.album"):
                    os.remove(home + "/.album")
                if os.path.exists("/tmp/albuminfo"):
                    os.remove("/tmp/albuminfo")
                return False
            elif os.path.exists("/tmp/trackinfo") and open("/tmp/trackinfo").read() == artist + album:
                return False
            return True

    Any help is greatly appreciated.
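
    One likely culprit: the else branch only *evaluates* the comparison and throws the result away, whereas the working function uses elif with that comparison as its condition (the else also means the final return True is unreachable). A hedged sketch of the probable intent, with the question's variables passed in so it is self-contained:

        import os

        def check_artist_art(artist, title, home):
            info = "/tmp/artistinfo"
            if os.path.exists(info) and open(info).read() != title:
                if os.path.exists(home + "/.artist"):
                    os.remove(home + "/.artist")
                os.remove(info)
                return False
            elif os.path.exists(info) and open(info).read() == artist:
                # the original 'else:' merely evaluated this expression,
                # so this branch never behaved as a condition
                return False
            return True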

    Read the article

  • Program structure in long running data processing python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple - it proceeds into the main loop, completes the main loop, saves output and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>
                for something in somethings():
                    <lots more local variables>
                    <lots more computation>
            <etc., etc.>
            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more manageable. I want to make this more maintainable, without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out to functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class, and change the local variables into class variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused, and only one instance would ever be created per run. What is the best practice structure for this kind of program? I am using Python but the question is relatively language-agnostic, assuming modern object-oriented language features.
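
    Since the pain point is the number of shared locals, one common refactor is to promote them to attributes on a small class created once per run, so the helpers need no long parameter lists. A hedged sketch using the question's placeholder names:

        class Pipeline(object):
            """Bundles the state shared by the processing steps."""

            def __init__(self, config):
                self.config = config
                self.results = []

            def run(self):
                for blah in self.blahs():
                    self.process_blah(blah)
                self.save_results()

            def blahs(self):
                return self.config['inputs']      # placeholder for the real data source

            def process_blah(self, blah):
                # formerly the body of the outer loop; shared values live on self
                self.results.append(blah)

            def save_results(self):
                print('%d results' % len(self.results))

        if __name__ == "__main__":
            Pipeline({'inputs': range(10)}).run()

    The same idea works with a plain "state" object (or namedtuple/dict) passed to free functions, if a class feels conceptually wrong.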

    Read the article

  • LDAP query using Python: always no result

    - by Grey
    I am trying to use Python to query an LDAP server, and it always returns no result. Can anyone help me find what's wrong with my Python code? It runs fine without exceptions, but it always has no result. I played around with the filter, like "cn=partofmyname", but had no luck. Thanks for the help.

        import ldap

        try:
            l = ldap.open("server")
            l.protocol_version = ldap.VERSION3
            l.set_option(ldap.OPT_REFERRALS, 0)
            output = l.simple_bind("cn=username,cn=Users,dc=domian, dc=net", 'password$R')
            print output
        except ldap.LDAPError, e:
            print e

        baseDN = "DC=rim,DC=net"
        searchScope = ldap.SCOPE_SUBTREE
        ## retrieve all attributes - again adjust to your needs - see documentation for more options
        retrieveAttributes = None
        Filter = "(&(objectClass=user)(sAMAccountName=myaccount))"

        try:
            ldap_result_id = l.search(baseDN, searchScope, Filter, retrieveAttributes)
            print ldap_result_id
            result_set = []
            while 1:
                result_type, result_data = l.result(ldap_result_id, 0)
                if len(result_data) == 0:
                    print 'no reslut'
                    break
                else:
                    for i in range(len(result_set)):
                        for entry in result_set[i]:
                            try:
                                name = entry[1]['cn'][0]
                                email = entry[1]['mail'][0]
                                phone = entry[1]['telephonenumber'][0]
                                desc = entry[1]['description'][0]
                                count = count + 1
                                print "%d.\nName: %s\nDescription: %s\nE-mail: %s\nPhone: %s\n" %\
                                    (count, name, desc, email, phone)
                            except:
                                pass
                    ## here you don't have to append to a list
                    ## you could do whatever you want with the individual entry
                    #if result_type == ldap.RES_SEARCH_ENTRY:
                    #    result_set.append(result_data)
                    # print result_set
        except ldap.LDAPError, e:
            print e
        l.unbind()
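
    Note that the inner loops iterate over result_set, which is never appended to (the append is commented out), so nothing is printed even when entries do come back. A hedged sketch using python-ldap's synchronous search_s(), which returns the entries directly (host and DNs below are placeholders):

        import ldap

        l = ldap.initialize("ldap://server")               # placeholder host
        l.protocol_version = ldap.VERSION3
        l.set_option(ldap.OPT_REFERRALS, 0)
        l.simple_bind_s("cn=username,cn=Users,dc=example,dc=net", "password")

        base_dn = "DC=example,DC=net"
        flt = "(&(objectClass=user)(sAMAccountName=myaccount))"

        # search_s blocks until the search finishes and returns (dn, attrs) pairs,
        # so there is no async result-id / result_set bookkeeping to get wrong
        for dn, attrs in l.search_s(base_dn, ldap.SCOPE_SUBTREE, flt, None):
            if dn is None:          # Active Directory referrals come back with dn == None
                continue
            print(dn, attrs.get('cn'), attrs.get('mail'))

        l.unbind_s()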

    Read the article

  • A trivial Python SWIG error question

    - by Alex
    I am trying to get Python running with swig to do C/C++. I am running the tutorial here, 'building a python module'. When I do the call gcc -c example.c example_wrap.c -I /my_correct_path/python2.5 I get an error: my_correct_path/python2.5/pyport.h:761:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." example_wrap.c: In function 'SWIG_Python_ConvertFunctionPtr': example_wrap.c:2034: warning: initialization discards qualifiers from pointer target type example_wrap.c: In function 'SWIG_Python_FixMethods': example_wrap.c:3232: warning: initialization discards qualifiers from pointer target type It actually does create an example.o file, but it doesn't work. I am using python2.5 not 2.1 as in the example, is this a problem? The error (everything else is just a 'warning') says something about wrong platform. This is a 64bit machine; is this a problem? Is my gcc configured wrong for my machine? How do I get past this? UPDATE: I am still having problems. How do I actually implement this "fix"?

    Read the article

  • How to speed-up python nested loop?

    - by erich
    I'm performing a nested loop in python that is included below. This serves as a basic way of searching through existing financial time series and looking for periods in the time series that match certain characteristics. In this case there are two separate, equally sized, arrays representing the 'close' (i.e. the price of an asset) and the 'volume' (i.e. the amount of the asset that was exchanged over the period). For each period in time I would like to look forward at all future intervals with lengths between 1 and INTERVAL_LENGTH and see if any of those intervals have characteristics that match my search (in this case the ratio of the close values is greater than 1.0001 and less than 1.5 and the summed volume is greater than 100). My understanding is that one of the major reasons for the speedup when using NumPy is that the interpreter doesn't need to type-check the operands each time it evaluates something so long as you're operating on the array as a whole (e.g. numpy_array * 2), but obviously the code below is not taking advantage of that. Is there a way to replace the internal loop with some kind of window function which could result in a speedup, or any other way using numpy/scipy to speed this up substantially in native python? Alternatively, is there a better way to do this in general (e.g. will it be much faster to write this loop in C++ and use weave)? ARRAY_LENGTH = 500000 INTERVAL_LENGTH = 15 close = np.array( xrange(ARRAY_LENGTH) ) volume = np.array( xrange(ARRAY_LENGTH) ) close, volume = close.astype('float64'), volume.astype('float64') results = [] for i in xrange(len(close) - INTERVAL_LENGTH): for j in xrange(i+1, i+INTERVAL_LENGTH): ret = close[j] / close[i] vol = sum( volume[i+1:j+1] ) if ret > 1.0001 and ret < 1.5 and vol > 100: results.append( [i, j, ret, vol] ) print results
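
    One NumPy-style rewrite keeps a short Python loop over the 14 possible window lengths and vectorises across all start positions, using a cumulative sum for the rolling volume. A sketch under the same thresholds (edge handling at the end of the array differs slightly from the original loop bounds, and the close series is shifted by one to avoid dividing by zero with the synthetic data):

        import numpy as np

        ARRAY_LENGTH, INTERVAL_LENGTH = 500000, 15
        close = np.arange(ARRAY_LENGTH, dtype='float64') + 1.0   # +1 avoids divide-by-zero
        volume = np.arange(ARRAY_LENGTH, dtype='float64')
        cumvol = np.concatenate(([0.0], np.cumsum(volume)))      # cumvol[m] = sum(volume[:m])

        results = []
        for k in range(1, INTERVAL_LENGTH):          # k = j - i, only 14 values
            ret = close[k:] / close[:-k]                         # close[i+k] / close[i]
            vol = cumvol[k + 1:] - cumvol[1:-k]                  # sum(volume[i+1 : i+k+1])
            hit = (ret > 1.0001) & (ret < 1.5) & (vol > 100)
            idx = np.nonzero(hit)[0]
            results.extend(zip(idx, idx + k, ret[idx], vol[idx]))

    The inner Python loop disappears entirely, which is usually where the bulk of the speedup comes from; dropping to C++/weave is rarely needed once the work is expressed as whole-array operations.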

    Read the article

  • Need a better way to execute console commands from python and log the results

    - by Wim Coenen
    I have a python script which needs to execute several command line utilities. The stdout output is sometimes used for further processing. In all cases, I want to log the results and raise an exception if an error is detected. I use the following function to achieve this: def execute(cmd, logsink): logsink.log("executing: %s\n" % cmd) popen_obj = subprocess.Popen(\ cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (stdout, stderr) = popen_obj.communicate() returncode = popen_obj.returncode if (returncode <> 0): logsink.log(" RETURN CODE: %s\n" % str(returncode)) if (len(stdout.strip()) > 0): logsink.log(" STDOUT:\n%s\n" % stdout) if (len(stderr.strip()) > 0): logsink.log(" STDERR:\n%s\n" % stderr) if (returncode <> 0): raise Exception, "execute failed with error output:\n%s" % stderr return stdout "logsink" can be any python object with a log method. I typically use this to forward the logging data to a specific file, or echo it to the console, or both, or something else... This works pretty good, except for three problems where I need more fine-grained control than the communicate() method provides: stdout and stderr output can be interleaved on the console, but the above function logs them separately. This can complicate the interpretation of the log. How do I log stdout and stderr lines interleaved, in the same order as they were output? The above function will only log the command output once the command has completed. This complicates diagnosis of issues when commands get stuck in an infinite loop or take a very long time for some other reason. How do I get the log in real-time, while the command is still executing? If the logs are large, it can get hard to interpret which command generated which output. Is there a way to prefix each line with something (e.g. the first word of the cmd string followed by :).
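
    Merging stderr into stdout with stderr=subprocess.STDOUT gives one interleaved stream in output order, and reading it line by line logs in real time with a prefix. A hedged sketch of the same execute() along those lines (the returned text then also contains stderr, and a child that block-buffers when not attached to a tty can still delay its output):

        import subprocess

        def execute(cmd, logsink):
            prefix = cmd.split()[0]
            proc = subprocess.Popen(cmd, shell=True,
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT,   # interleave stderr with stdout
                                    universal_newlines=True)
            captured = []
            for line in iter(proc.stdout.readline, ''):   # arrives as the command produces it
                captured.append(line)
                logsink.log("%s: %s" % (prefix, line))
            returncode = proc.wait()
            if returncode != 0:
                raise RuntimeError("%r failed with return code %s" % (cmd, returncode))
            return "".join(captured)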

    Read the article

  • Python Loop for mysql statement

    - by user552974
    Hi, I have a project where I need to compile the list of cities in each state and make an INSERT statement for a MySQL database. I think the easiest way to do it is via Python, but since I'm a complete noob I would like to reach out to all the Python gurus here. Here is what the input looks like (the example below is for Florida):

        cities = ['Boca Raton', 'Boynton Beach', 'Bradenton', 'Cape Coral', 'Deltona']

    and this is what the output should be:

        INSERT INTO `oc_locations` (`idLocation`, `name`, `idLocationParent`, `friendlyName`) VALUES
        (1, 'Florida', 0, 'Florida'),
        (2, 'Boca Raton', 1, 'Boca Raton'),
        (3, 'Boynton Beach', 1, 'Boynton Beach'),
        (4, 'Bradenton', 1, 'Bradenton'),
        (5, 'Cape Coral', 1, 'Cape Coral'),
        (6, 'Deltona', 1, 'Deltona'),

    If you look carefully, the "idLocationParent" value for "Florida" is "0", which means it is a top-level value. This will be done for 50 states, so the ability to plug the state name into the MySQL statement would be icing on the cake if there is an easy way to do it. Also, alphabetical order and auto-increment for the idLocation would be great. Here is an example of what I'm trying to achieve; the concatenation is the part I need to figure out:

        for city in cities:
            print (1, 'city', 0, 'city'), city
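
    A minimal sketch of one way to build the statement for a single state (enumerate() supplies the auto-incrementing id, sorted() the alphabetical order):

        cities = ['Boca Raton', 'Boynton Beach', 'Bradenton', 'Cape Coral', 'Deltona']
        state = 'Florida'

        rows = [(1, state, 0, state)]                       # the state itself, parent id 0
        for n, city in enumerate(sorted(cities), start=2):  # alphabetical, ids count up
            rows.append((n, city, 1, city))                 # children point at the state's id

        values = ",\n".join("(%d, '%s', %d, '%s')" % row for row in rows)
        print("INSERT INTO `oc_locations` "
              "(`idLocation`, `name`, `idLocationParent`, `friendlyName`) VALUES\n%s;" % values)

    For 50 states, the same loop can run over a {state: cities} dict with a running id counter; and if any names contain an apostrophe, escape them or switch to parameterised queries rather than string formatting.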

    Read the article

  • python os.mkfifo() for Windows

    - by user302099
    Hello. Short version (if you can answer the short version it does the job for me; the rest is mainly for the benefit of other people with a similar task): in Python on Windows, I want to create 2 file objects, attached to the same file (it doesn't have to be an actual file on the hard drive), one for reading and one for writing, such that if the reading end tries to read it will never get EOF (it will just block until something is written). I think on Linux os.mkfifo() would do the job, but on Windows it doesn't exist. What can be done? (I must use file objects.) Some extra details: I have a Python module (not written by me) that plays a certain game through stdin and stdout (using raw_input() and print). I also have a Windows executable playing the same game, through stdin and stdout as well. I want to make them play one against the other, and log all their communication. Here's the code I can write (the get_fifo() function is not implemented, because that's the part I don't know how to do on Windows):

        class Pusher(Thread):
            def __init__(self, source, dest, p1, name):
                Thread.__init__(self)
                self.source = source
                self.dest = dest
                self.name = name
                self.p1 = p1

            def run(self):
                while (self.p1.poll()==None) and\
                      (not self.source.closed) and (not self.source.closed):
                    line = self.source.readline()
                    logging.info('%s: %s' % (self.name, line[:-1]))
                    self.dest.write(line)
                    self.dest.flush()

        exe_to_pythonmodule_reader, exe_to_pythonmodule_writer =\
            get_fifo()
        pythonmodule_to_exe_reader, pythonmodule_to_exe_writer =\
            get_fifo()

        p1 = subprocess.Popen(exe, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

        old_stdin = sys.stdin
        old_stdout = sys.stdout
        sys.stdin = exe_to_pythonmodule_reader
        sys.stdout = pythonmodule_to_exe_writer

        push1 = Pusher(p1.stdout, exe_to_pythonmodule_writer, p1, '1')
        push2 = Pusher(pythonmodule_to_exe_reader, p1.stdin, p1, '2')
        push1.start()
        push2.start()

        ret = pythonmodule.play()
        sys.stdin = old_stdin
        sys.stdout = old_stdout
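
    os.mkfifo() is indeed POSIX-only, but os.pipe() does exist on Windows, and os.fdopen() turns its two descriptors into ordinary file objects; reads block until the writer produces data. A hedged sketch of a get_fifo() built that way (one behavioural difference from a fifo: the reader does see EOF once the write end is closed):

        import os

        def get_fifo():
            """Return (reader, writer) file objects joined by an anonymous pipe.

            Works on Windows as well as Linux; the reader blocks until data
            arrives on the write end.
            """
            read_fd, write_fd = os.pipe()
            reader = os.fdopen(read_fd, 'r')
            writer = os.fdopen(write_fd, 'w')   # the Pusher above already flush()es
            return reader, writer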

    Read the article

  • Using the Queue class in Python 2.6

    - by voipme
    Let's assume I'm stuck using Python 2.6, and can't upgrade (even if that would help). I've written a program that uses the Queue class. My producer is a simple directory listing. My consumer threads pull a file from the queue, and do stuff with it. If the file has already been processed, I skip it. The processed list is generated before all of the threads are started, so it isn't empty. Here's some pseudo-code. import Queue, sys, threading processed = [] def consumer(): while True: file = dirlist.get(block=True) if file in processed: print "Ignoring %s" % file else: # do stuff here dirlist.task_done() dirlist = Queue.Queue() for f in os.listdir("/some/dir"): dirlist.put(f) max_threads = 8 for i in range(max_threads): thr = Thread(target=consumer) thr.start() dirlist.join() The strange behavior I'm getting is that if a thread encounters a file that's already been processed, the thread stalls out and waits until the entire program ends. I've done a little bit of testing, and the first 7 threads (assuming 8 is the max) stop, while the 8th thread keeps processing, one file at a time. But, by doing that, I'm losing the entire reason for threading the application. Am I doing something wrong, or is this the expected behavior of the Queue/threading classes in Python 2.6?
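
    Two things in the pseudo-code line up with that behaviour: task_done() is only reached on the else branch, so every skipped file leaves join()'s counter unbalanced, and once the pre-filled queue is empty the remaining get(block=True) calls block forever. A hedged sketch keeping the question's Python 2.6 Queue module, with task_done() in a finally and a non-blocking get:

        import os, threading, Queue

        processed = set()                      # a set also makes the membership test O(1)
        dirlist = Queue.Queue()
        for f in os.listdir("/some/dir"):
            dirlist.put(f)

        def consumer():
            while True:
                try:
                    name = dirlist.get_nowait()   # queue is filled up front, so empty == done
                except Queue.Empty:
                    return
                try:
                    if name in processed:
                        print("Ignoring %s" % name)
                    else:
                        pass                      # do stuff here
                finally:
                    dirlist.task_done()           # counted for skipped files too

        for _ in range(8):
            threading.Thread(target=consumer).start()
        dirlist.join()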

    Read the article

  • Python base classes share attributes?

    - by tad
    Code in test.py: class Base(object): def __init__(self, l=[]): self.l = l def add(self, num): self.l.append(num) def remove(self, num): self.l.remove(num) class Derived(Base): def __init__(self, l=[]): super(Derived, self).__init__(l) Python shell session: Python 2.6.5 (r265:79063, Apr 1 2010, 05:22:20) [GCC 4.4.3 20100316 (prerelease)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import test >>> a = test.Derived() >>> b = test.Derived() >>> a.l [] >>> b.l [] >>> a.add(1) >>> a.l [1] >>> b.l [1] >>> c = test.Derived() >>> c.l [1] I was expecting "C++-like" behavior, in which each derived object contains its own instance of the base class. Is this still the case? Why does each object appear to share the same list instance?
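
    The instances share the list because the default argument l=[] is evaluated once, when the function is defined, not on each call, so every object constructed without an explicit list appends to that single object. The usual fix is a None default, sketched here:

        class Base(object):
            def __init__(self, l=None):
                # the default list is now created per call, so each instance gets its own
                self.l = [] if l is None else l

            def add(self, num):
                self.l.append(num)

            def remove(self, num):
                self.l.remove(num)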

    Read the article

  • Python refuses text.replace() in one environment

    - by gx
    Hi fellow programmers, I've been mocking about with the following bit of dirty support-code for a pylons app, which works fine in a python-shell, a separate python file, or when running in paster. Now, we've put the application on-line through mod_wsgi and apache and this specific piece of code stopped working completely. First off, the code itself: def fixStyle(self, text): t = text.replace('<p>', '<p style="%s">' % (STYLEDEF,)) t = t.replace('class="wide"', 'style="width: 125px; %s"' % (DEFSTYLE,)) t = t.replace('<td>', '<td style="%s">' % (STYLEDEF,)) t = t.replace('<a ', '<a style="%s" ' % (LINKSTYLE,)) return t It seems pretty straightforward, and to be honest, it is. So what happens when I put a piece of text in it, for example: <table><tr><td>Test!</td></tr></table> The output should be: <table><tr><td style="stuff-from-styledef">Test!</td></tr></table> and it is, on most systems. When we put it through the app on Apache/mod_wsgi though, the following happens: <table><tr><td>Test!</td></tr></table> You guessed it. I'm currently at a loss and have no idea where to go next. Googling doesn't really work out, so I'm hoping on you guys to help out and perhaps point out a fundamental issue with using whatever-is-causing-this. If anything is missing I'll edit it in.

    Read the article

  • Counting combinations in c or in python

    - by Dennis
    Hello, I looked a bit at this topic here but found nothing that could help me. I need a program in Python or in C that will give me all possible combinations of a and b that meet the requirement n=2*a+b, for n from 0 to 10; a, b and n are integers. For example, if n=0 both a and b must be 0. For n=1, a must be zero and b must be 1; for n=2, a can be 1 and b=0, or a=0 and b=2, etc. I'm not that good with programming. I made this:

        #include <stdio.h>

        int main(void){
            int a,b,n;
            for(n = 0; n <= 10; n++){
                for(a = 0; a <= 10; a++){
                    for(b = 0; b <= 10; b++)
                        if(n == 2*a + b)
                            printf("(%d, %d), ", (a,b));
                }
                printf("\n");
            }
        }

    But it keeps producing strange results like this:

        (0, -1079628000), 
        (1, -1079628000), 
        (2, -1079628000), (0, -1079628000), 
        (3, -1079628000), (1, -1079628000), 
        (4, -1079628000), (2, -1079628000), (0, -1079628000), 
        (5, -1079628000), (3, -1079628000), (1, -1079628000), 
        (6, -1079628000), (4, -1079628000), (2, -1079628000), (0, -1079628000), 
        (7, -1079628000), (5, -1079628000), (3, -1079628000), (1, -1079628000), 
        (8, -1079628000), (6, -1079628000), (4, -1079628000), (2, -1079628000), (0, -1079628000), 
        (9, -1079628000), (7, -1079628000), (5, -1079628000), (3, -1079628000), (1, -1079628000), 
        (10, -1079628000), (8, -1079628000), (6, -1079628000), (4, -1079628000), (2, -1079628000), (0, -1079628000), 

    ideone Any idea what is wrong? Also, if I could do this in Python it would be even cooler. :D
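
    The garbage numbers come from printf("(%d, %d), ", (a,b)): in C, (a,b) is the comma operator, so only b is passed and the second %d reads an indeterminate value; printf("(%d, %d), ", a, b) fixes the C version. In Python the pairs can be built directly, since for each n the valid values are a = 0..n//2 with b = n - 2*a. A small sketch:

        for n in range(11):
            pairs = [(a, n - 2 * a) for a in range(n // 2 + 1)]   # b = n - 2*a, never negative
            print("n=%d: %s" % (n, pairs))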

    Read the article

  • Python program to search for specific strings in hash values (coding help)

    - by Diego
    Trying to write a code that searches hash values for specific string's (input by user) and returns the hash if searchquery is present in that line. Doing this to kind of just learn python a bit more, but it could be a real world application used by an HR department to search a .csv resume database for specific words in each resume. I'd like this program to look through a .csv file that has three entries per line (id#;applicant name;resume text) I set it up so that it creates a hash, then created a string for the resume text hash entry, and am trying to use the .find() function to return the entire hash for each instance. What i'd like is if the word "gpa" is used as a search query and it is found in s['resumetext'] for three applicants(rows in .csv file), it prints the id, name, and resume for every row that has it.(All three applicants) As it is right now, my program prints the first row in the .csv file(print resume['id'], resume['name'], resume['resumetext']) no matter what the searchquery is, whether it's in the resumetext or not. lastly, are there better ways to doing this, by searching word documents, pdf's and .txt files in a folder for specific words using python (i've just started reading about the re module and am wondering if this may be the route, rather than putting everything in a .csv file.) def find_details(id2find): resumes_f=open("resume_data.csv") for each_line in resumes_f: s={} (s['id'], s['name'], s['resumetext']) = each_line.split(";") resumetext = str(s['resumetext']) if resumetext.find(id2find): return(s) else: print "No data matches your search query. Please try again" searchquery = raw_input("please enter your search term") resume = find_details(searchquery) if resume: print resume['id'], resume['name'], resume['resumetext']
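
    Two things work against the current version: str.find() returns -1 (truthy) when the term is absent and 0 (falsy) when it matches at the start, so the test is effectively inverted, and the function returns on the first row either way. A hedged sketch that uses the in operator and collects every matching row, keeping the question's raw_input (Python 2):

        def find_details(query):
            matches = []
            with open("resume_data.csv") as resumes_f:
                for each_line in resumes_f:
                    # split at most twice, so semicolons inside the resume text survive
                    ident, name, resumetext = each_line.strip().split(";", 2)
                    if query.lower() in resumetext.lower():   # avoids find()'s -1/0 pitfall
                        matches.append({'id': ident, 'name': name, 'resumetext': resumetext})
            return matches

        results = find_details(raw_input("please enter your search term: "))
        if not results:
            print("No data matches your search query. Please try again")
        for resume in results:
            print("%s %s %s" % (resume['id'], resume['name'], resume['resumetext']))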

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions on which Python framework to use in an implementation of multiple 'Worker' PCs co-ordinated from a central Queue Manager. For completeness, the 'Worker' PCs will be running audio conversion routines (which I do not need advice on, and have standalone code that works). The audio conversion takes a long time, and I need to co-ordinate an arbitrary number of the 'Workers' from a central location, handing them conversion tasks ("Oy you, process this: xxx") - such as where to get the source files, or where to ask for the job configuration - with them reporting back some additional info, such as the runtime of the converted audio, etc. At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files already located on the worker (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this, so that we can distribute conversion tasks based on availability and, in an ideal world, based on pending tasks too. There is a chance that workers can go offline mid-conversion (but this is not likely). All the workers are Windows-based; the co-ordinator can be Windows or Linux. I have (in my initial searches) come across the following - and I know that some are cross-dependent: Celery (with RabbitMQ), Twisted, and Django. Using a framework, rather than home-brewing, seems to make more sense to me right now. I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a framework that is compatible with PyQt/PySide so that I can write a simple UI to display queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a Server/Worker 'Queue management' solution, for non-web activities (this is why Django didn't seem the right fit).

    Read the article

  • String comparison in Python: is vs. ==

    - by Coquelicot
    I noticed a Python script I was writing was acting squirrelly, and traced it to an infinite loop, where the loop condition was "while line is not ''". Running through it in the debugger, it turned out that line was in fact ''. When I changed it to != rather than 'is not', it worked fine. I did some searching, and found this question, the top answer to which seemed to be just what I needed. Except the answer it gave was counter to my experience. Specifically, the answerer wrote: For all built-in Python objects (like strings, lists, dicts, functions, etc.), if x is y, then x==y is also True. I double-checked the type of the variable, and it was in fact of type str (not unicode or something). Is his answer just wrong, or is there something else afoot? Also, is it generally considered better to just use '==' by default, even when comparing int or Boolean values? I've always liked to use 'is' because I find it more aesthetically pleasing and pythonic (which is how I fell into this trap...), but I wonder if it's intended to just be reserved for when you care about finding two objects with the same id.
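
    The quoted claim runs only one way: x is y implies x == y for well-behaved types, but x == y does not imply x is y, because equal strings are not guaranteed to be the same object (interning of short literals is a CPython implementation detail, and newer CPythons even warn about is with a literal). A small demonstration; the identity results are typical for CPython but must not be relied on:

        a = 'hello'
        b = ''.join(['hel', 'lo'])     # built at runtime, so typically not interned
        print(a == b)                  # True  - same value
        print(a is b)                  # usually False - different objects

        line = ''
        print(line == '')              # True, and the right test for loop conditions
        print(line is '')              # may or may not be True; never use it for values

    In short, use == for values (strings, numbers, booleans) and reserve is for identity checks such as x is None.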

    Read the article

  • Easy ways to investigate unknown Python APIs

    - by jedi_coder
    When studying a snippet of unknown Python code, I occasionally bump into the varName.methodName() pattern. To figure out what's this, I shall study the code more, find where varName was instantiated, find its type. So if varName proves to be an instance of ClassName class, I would knew that methodName() is a method of ClassName. Sometimes varName == self and methodName() is a method of this class, or a method inherited from some other class, if the current class is subclassing some other classes. Are there quick ways / tools that could take 'methodName' as input, scan over all installed Python modules and show which classes have methodName()? The closest thing related to this I know of is ipython. If I type a class name, then dot ('.') then TAB, it can show the class members. Instead of a class I could use a name of an object (which is an instance of a certain class) and it would work too. As soon as I choose a method name from the provided options, I can type '?' or '??' and get some help if there's a docstring. I wonder if ipython can do some intelligent scanning based only on 'methodName' string. If you know alternatives to ipython that could possibly help with this, please do suggest them.
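
    There is no single stock tool for "which class defines methodName?", but the inspect module can answer it for any modules you care to scan. A rough sketch (module names are whatever you choose to feed it):

        import importlib
        import inspect

        def classes_defining(method_name, module_names):
            """Print classes in the named modules whose own namespace defines method_name."""
            for mod_name in module_names:
                module = importlib.import_module(mod_name)
                for _, cls in inspect.getmembers(module, inspect.isclass):
                    if method_name in vars(cls):          # defined here, not merely inherited
                        print("%s.%s.%s" % (cls.__module__, cls.__name__, method_name))

        classes_defining('append', ['collections'])   # e.g. collections.deque.append

    pkgutil.iter_modules() can widen the scan to everything importable, though in practice IPython's tab completion plus obj? remains the quickest interactive route.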

    Read the article

  • Enterprise Platform in Python, Design Advice

    - by Jason Miesionczek
    I am starting the design of a somewhat large enterprise platform in Python, and was wondering if you guys can give me some advice as to how to organize the various components and which packages would help achieve the goals of scalability, maintainability, and reliability. The system is basically a service that collects data from various outside sources, with each outside source having its own separate application. These applications would poll a central database and get any requests that have been submitted to perform on the external source. There will be a main website and a REST/SOAP API that should also have access to the central data service. My initial thought was to use Django for the website, web service, and data access layer (using its built-in ORM), and then the outside-source applications can use the web service(s) to get the information they need to process the request and save the results. Using this method would allow me to have multiple instances of the service applications running on the same or different machines to balance out the load. Are there more elegant means of accomplishing this? I've heard of messaging systems such as MQ; would something like that be beneficial in this scenario? My other thought was to use a completely separate data service not based on Django, and use some kind of remoting or remote objects (if they exist in Python) to interact with the data model. The downside here would be with the website, which would become much slower if it had to push all of its data requests through a second layer. I would love to hear what other developers have come up with to achieve these goals in the most flexible way possible.

    Read the article

  • Using custom Qt subclasses in Python

    - by kwatford
    First off: I'm new to both Qt and SWIG. Currently reading documentation for both of these, but this is a time consuming task, so I'm looking for some spoilers. It's good to know up-front whether something just won't work. I'm attempting to formulate a modular architecture for some in-house software. The core components are in C++ and exposed via SWIG to Python for experimentation and rapid prototyping of new components. Qt seems like it has some classes I could use to avoid re-inventing the wheel too much here, but I'm concerned about how some of the bits will fit together. Specifically, if I create some C++ classes, I'll need to expose them via SWIG. Some of these classes likely subclass Qt classes or otherwise have Qt stuff exposed in their public interfaces. This seems like it could raise some complications. There are already two interfaces for Qt in Python, PyQt and PySide. Will probably use PySide for licensing reasons. About how painful should I expect it to be to get a SWIG-wrapped custom subclass of a Qt class to play nice with either of these? What complications should I know about upfront?

    Read the article

  • Building a survey to put in a WordPress website using Python/Django

    - by chiurox
    So I've been given a task to build a survey to get data regarding time slot preferences of prospective students for a particular course. I know there are really quick solutions to this like Google Forms, SurveyMonkey, but since it's not unusually hard, I want to implement the survey myself in a totally new language as an opportunity to get started with it and also be able to customize and provide dynamic info to the users who are voting. Although I have done some stuff in PHP, C++, javascript, etc, I'm pretty new to Python+Django framework but it's something I've been meaning to get into since a long time ago. Initially, what I want is to make a grid with the days of the week as columns and time-durations as rows. In each cell I want to provide users a way to choose how strong (high/medium/low) their preference for this particular day+time is. I also want to show how many "votes" have already been cast for this particular preference because this will influence a lot in their decisions and as a result make this process easier when we are going to define the classes. I'll probably store the data in MySQL. Could anyone point me to some really good Python+Django tutorials for my particular purpose? Does anyone think I'm wasting my time with this trivial task by choosing new tools and that I should just use something I already know (like PHP) or a free service or plugin for Wordpress? Thanks!
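
    As a rough illustration of how small the data side could be, here is a hedged sketch of the kind of Django model that might back the grid (all names here are invented, not from any tutorial): one row per vote for one (day, time-slot) cell, which also makes the "how many votes so far" display a simple count.

        from django.db import models

        class SlotPreference(models.Model):
            """One vote for one (day, time-slot) cell of the grid."""
            LEVELS = (('H', 'High'), ('M', 'Medium'), ('L', 'Low'))

            day = models.CharField(max_length=9)          # e.g. 'Monday'
            slot = models.CharField(max_length=20)        # e.g. '18:00-20:00'
            level = models.CharField(max_length=1, choices=LEVELS)
            voter = models.CharField(max_length=100, blank=True)
            created = models.DateTimeField(auto_now_add=True)

        # counting existing votes for a cell, to show next to each choice:
        # SlotPreference.objects.filter(day='Monday', slot='18:00-20:00').count()

    The official Django tutorial (which builds a small polls app) covers essentially this shape end to end, including the view and template side.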

    Read the article

  • Process a set of files from a source directory to a destination directory in Python

    - by Spoike
    Being completely new to Python, I'm trying to run a command over a set of files in Python. The command requires both a source and a destination file (I'm actually using ImageMagick convert, as in the example below). I can supply both source and destination directories, however I can't figure out how to easily retain the directory structure from the source to the destination directory. E.g. say srcdir contains the following: srcdir/ file1 file3 dir1/ file1 file2 Then I want the program to create the following destination files in destdir: destdir/file1, destdir/file3, destdir/dir1/file1 and destdir/dir1/file2 So far this is what I came up with:

        import os
        from subprocess import call

        srcdir = os.curdir # just use the current directory
        destdir = 'path/to/destination'
        for root, dirs, files in os.walk(srcdir):
            for filename in files:
                sourceFile = os.path.join(root, filename)
                destFile = '???'
                cmd = "convert %s -resize 50%% %s" % (sourceFile, destFile)
                call(cmd, shell=True)

    The walk method doesn't directly provide what directory the file is under srcdir, other than concatenating the root directory string with the file name. Is there some easy way to get the destination file, or do I have to do some string manipulation in order to do this?
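
    os.path.relpath() gives exactly the "path under srcdir" part, which can then be re-rooted under destdir (creating intermediate directories as needed). A hedged sketch of the same loop; passing call() a list also sidesteps shell quoting of file names with spaces:

        import os
        from subprocess import call

        srcdir = os.curdir
        destdir = 'path/to/destination'

        for root, dirs, files in os.walk(srcdir):
            for filename in files:
                sourceFile = os.path.join(root, filename)
                relative = os.path.relpath(sourceFile, srcdir)   # path below srcdir
                destFile = os.path.join(destdir, relative)
                destParent = os.path.dirname(destFile)
                if not os.path.isdir(destParent):
                    os.makedirs(destParent)                      # mirror the directory tree
                call(['convert', sourceFile, '-resize', '50%', destFile])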

    Read the article

  • Python: need to get energies of charge pairs.

    - by Two786
    I am new to Python. I have to make a program for a project that takes a PDB-format file as input and returns a list of all the intra-chain and inter-chain charge pairs and their energies (using Coulomb's law, assuming a dielectric constant (ε) of 40.0). For simplicity, the charged residues for this program are just Arg (CZ), Lys (NZ), Asp (CG) and Glu (CD), with the charge-bearing atom for each indicated in parentheses. The program should report any attractive or repulsive interactions within 8.0 Å. Here is some additional information needed for the program:

        Eij = energy of interaction between atoms i and j in kilocalories/mole (kcals/mol)
        qi  = charge for atom i (+1 for Lys or Arg, -1 for Glu or Asp)
        rij = distance between atoms i and j in angstroms, using the distance formula

    The output should adhere to the following format (the first row is the labels, the second the corresponding values):

        First residue        Second residue     Distance       Energy
        Lys 10 Chain A :     ASP 46 Chain A     D= 4.76 ang    E= -2.32 kcals/mol

    I really have no idea how to tackle this problem; any and all help is greatly appreciated. I hope this is the right place to ask. Thank you in advance. Using Python 2.5.
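
    A minimal sketch of one way to start (the file name is a placeholder, the column slices follow the fixed-width PDB ATOM record layout, and 332 kcal·Å/(mol·e²) is the usual electrostatic conversion constant for these units, so Eij = 332·qi·qj/(ε·rij)):

        import math

        CHARGED = {('ARG', 'CZ'): +1, ('LYS', 'NZ'): +1,
                   ('ASP', 'CG'): -1, ('GLU', 'CD'): -1}
        EPSILON, CUTOFF, K = 40.0, 8.0, 332.0   # dielectric, angstroms, kcal*A/(mol*e^2)

        atoms = []
        for line in open('input.pdb'):                       # placeholder file name
            if line.startswith('ATOM'):
                name, res = line[12:16].strip(), line[17:20].strip()
                if (res, name) in CHARGED:
                    chain, resnum = line[21], line[22:26].strip()
                    x, y, z = float(line[30:38]), float(line[38:46]), float(line[46:54])
                    atoms.append((res, resnum, chain, CHARGED[(res, name)], (x, y, z)))

        for i in range(len(atoms)):
            for j in range(i + 1, len(atoms)):
                (r1, n1, c1, q1, p1), (r2, n2, c2, q2, p2) = atoms[i], atoms[j]
                d = math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
                if d <= CUTOFF:
                    e = K * q1 * q2 / (EPSILON * d)
                    print("%s %s Chain %s : %s %s Chain %s D= %.2f ang E= %.2f kcals/mol"
                          % (r1, n1, c1, r2, n2, c2, d, e))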

    Read the article
